Matthew Butterick seems like a very normal guy. He wears a baseball cap, clear-rimmed glasses, and a colorful sports jacket. Behind him are two vintage keyboards and synthesizers that add a bohemian touch to the basement of his Los Angeles home, which is also his office. “I have a collection of more than twenty,” he says later during a video call with EL PAÍS. Nothing in this scene suggests that Butterick is a lawyer. Still less does it suggest that giants like Microsoft, OpenAI and Meta are holding their breath over someone so far removed from the classic suit-and-tie stereotype.
The American has launched a veritable legal crusade against generative artificial intelligence (AI). In 2022, he filed the first lawsuit in the history of this field, against Microsoft, one of the companies developing these tools (GitHub Copilot). He currently coordinates four class actions bringing together complaints from programmers, artists and writers which, if successful, could force the companies behind applications such as ChatGPT or Midjourney to compensate thousands of authors. Or they may have to scrap their algorithms and retrain them on databases that do not infringe intellectual property rights. “This is, for many of us, the fight of our lives,” he says. The first results of his efforts could arrive within a few months.
A few days ago, the newspaper The New York Times took the same path as Butterick and sued OpenAI and Microsoft for using millions of its articles without consent to train their algorithms. It is the first media outlet to take this step. “I can’t comment on the case because I haven’t read the lawsuit,” Butterick says gravely. “We were the first to sue Meta and OpenAI for training language models with copyrighted material. We are not surprised that others have since done the same. My partner Joe Saveri and I have always viewed our cases and other litigation as part of an emerging global conversation about how generative AI will coexist with human creativity. This race has only just begun,” he adds.
2023 was the year the world discovered the potential of generative AI, capable of producing seemingly original texts, images or music. That last nuance matters: the algorithms behind it are trained on gigantic databases made up of billions of documents, whether texts, illustrations or pieces of music. Every one of those works, without which these automatic systems would be useless, has an author behind it who not only is not paid for the use of their work, but who risks ending up unemployed as generative AI tools become more sophisticated.
Butterick identified this danger in the summer of 2022, months before ChatGPT emerged. Born in Ann Arbor, Michigan, the 53-year-old American has made his living primarily as a type designer, programmer and writer. “Like many other creators and artists, it has become clear to me that my work is doomed. It is now part of the training data of many generative AI systems. The next step is to get rid of us,” he says.
The first product that put Butterick on alert was Microsoft’s GitHub Copilot, an AI-powered software tool trained on vast amounts of open-source code. Its launch sowed doubt within the programming community, he recalls. The difference between Butterick and everyone else involved is that he decided to act on it, to the point of dusting off the law degree he had earned at the University of California, Los Angeles (UCLA) 15 years earlier.
“After speaking with those involved, I concluded that this system violates open-source licenses and is not a harmless tool. It is designed to replace open-source programmers, and I said so on my blog,” he emphasizes. “Joseph Saveri, a lawyer I know and a fan of my typography work, contacted me and said, ‘You know, the point you’re making about GitHub Copilot is pretty interesting.’ At the time, I was not a practicing attorney, so Joe and I launched an investigation and became convinced that there really was a case.”
In November 2022, Butterick and Saveri filed a lawsuit in the Northern District of California against Microsoft, owner of GitHub, alleging that Copilot violated open-source licensing agreements. It was the first litigation over generative AI.
But programmers weren’t the only group whose jobs were threatened. After that lawsuit was filed, a group of visual artists approached the two lawyers. “They said, wow, that sounds like the problem we have. Would you be interested in taking on our case?” That is how the suit they filed in January 2023 against Stability AI (developer of Stable Diffusion), Midjourney and DeviantArt, the main generative AI tools applied to illustration, came about. In November, they filed the amendments requested by the judge. That case continues, as does the Copilot one.
The third group represented by Butterick and Saveri is book authors. In July, they filed two class-action lawsuits against OpenAI and Meta for including books by plaintiffs such as Richard Kadrey, Sarah Silverman, and Christopher Golden in their training datasets.
A very real threat
Illustrator Karla Ortiz saw the tsunami about to hit her and her colleagues in the summer of 2022. The 38-year-old Puerto Rican can be considered a successful professional. She has worked for most major Hollywood studios, including Marvel Studios, HBO and Universal Pictures, as well as video game companies such as Blizzard and Ubisoft. Her brushes gave life to key characters in blockbusters such as Thor: Ragnarok, Doctor Strange and Jurassic World. But even someone of her standing doesn’t feel safe.
Ortiz began researching generative AI tools applied to illustration and quickly recognized her colleagues’ styles in the images they produced. “I was horrified to see that these platforms are using your name, so that people can request your style and use your work to generate images that look like yours,” she explains via video call from her studio in San Francisco. “At that point, I started to worry a lot. I am a board member of the Concept Art Association, which brings together artists working in the film and video game industries. We decided to mobilize.”
Ortiz and two other colleagues became plaintiffs in the class action filed in January of this year by Butterick and Saveri against Stability AI, Midjourney and DeviantArt. Her cause gained momentum in July, when she was called to testify before the US Senate subcommittee on intellectual property about the legality and ethics of AI. “That would have been inconceivable a year earlier. Senators are taking seriously the fact that creators are having their works taken without consent, without compensation and without credit,” says Butterick, who accompanied his client to the Capitol.
“For illustrators, a traditionally important source of income is turning producers’ and directors’ ideas into images to show to the studios. That activity has literally been erased by generative AI,” Ortiz says. “My job is in danger. It’s about visualizing ideas, and machines now do that very well. We artists can’t compete with these tools. Until now, I had never worried about the future of my career.”
AI comes to court
The momentum generated by the class actions filed by Butterick and his colleague paved the way for other lawsuits. Earlier this year, Getty Images sued Stability AI for using images from its archives without permission. In September, two other groups of writers filed suit against OpenAI; bestselling authors George R.R. Martin, John Grisham and Jonathan Franzen are among them. In October, several record labels, including Universal Music Group, sued Anthropic, a company founded by former OpenAI employees, for training its algorithms with copyrighted material. Hollywood actors’ unions have not filed a complaint, but they spent months on strike to improve their pay and obtain guarantees protecting them against artificial intelligence. And The New York Times has just sued OpenAI and Microsoft for using millions of newspaper articles to train ChatGPT.
Butterick and Saveri know it’s now or never. This wave of lawsuits essentially argues that the way generative AI was built is illegal. Once the technology is fully entrenched, it will be harder to challenge the companies developing it. But if the courts conclude that training these algorithms on copyrighted material is unlawful, the blow to Big Tech could be decisive: they would have to start over and rebuild their databases. It would be equally costly for them to have to license those databases, negotiating payment for every source they drew from.
Is it too optimistic to expect that outcome? “It’s happened before,” Butterick replies with a smile. “The FTC (the US regulator) investigated companies that used models based on private data and made them delete their databases as well as the algorithms and models built with them,” he explains. Cambridge Analytica, the consulting firm that used the data of more than 80 million Facebook users to influence the 2016 presidential election, was in 2019 the first company subjected to the FTC’s new policy, dubbed “algorithmic destruction.”
In the EU, the European regulation on artificial intelligence, whose final text is not yet public but on which there is already political agreement, obliges foundation models to comply with EU copyright rules. We will have to wait for the fine print to see how it will be implemented.
Butterick has suspended his writing, design and programming for almost a year to focus on the litigation he opened. He does it because he believes it is the right thing to do, but not only for that reason. “I firmly believe that if I don’t set aside my usual work and take up this fight, I will have nothing left. When we filed the first suit, the Copilot case, people looked at us as if we were crazy, or Luddites,” the American stresses. “Barely a year has passed and no one doubts the tremendous impact of generative AI on creative professions. It has only just begun. We need to put safeguards in place so that this technology doesn’t end everything.”