
US Court Declares Training AI Models on Books Without Author Permission is "Fair Use"

An eerie precedent for writers and artists.

In a major blow to artists and writers, who for the past couple of years have been up in arms over AI companies using their works without permission to train LLMs, a United States court has ruled that training AI models on legally acquired books constitutes fair use, setting a worrisome precedent by virtue of being the first such ruling.


In the case against Claude developer Anthropic – filed last year by Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who alleged the company trained its Claude AI models on pirated material – Judge William Alsup of the U.S. District Court for the Northern District of California declared that training AI on written works is "transformative" enough to fall under fair use. He also held that buying physical books solely to digitize them for LLM training and discarding them afterward was well within Anthropic's legal rights.

"Authors contend generically that training LLMs will result in an explosion of works competing with their works – such as by creating alternative summaries of factual events, alternative examples of compelling writing about fictional events, and so on," Alsup's decision reads. "This order assumes that is so. But Authors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act. The Act seeks to advance original works of authorship, not to protect authors against competition."

As a silver lining, the judge did side with the authors on the topic of piracy, ruling that pirating books and adding them to the training library does not constitute fair use, even if such books were not used for AI training.

A separate trial on the pirated copies used to create Anthropic's central library is set to be held at a later date. If the company is found guilty of willful infringement, the court may award up to $150,000 per work infringed – amounting, given that Anthropic has pirated over seven million copies of books for its training library, to over one trillion USD.

While the AI firm now potentially faces a fine so devastating it would almost certainly bankrupt the company, the ruling itself still leaves a sour taste in the mouth, considering the power that precedents have in the US judicial system.

If one judge decides that AI models have the same right as humans to absorb information without the author's explicit consent and then produce different content based on it, who's to say that, for instance, the judge presiding over the recent Disney vs. Midjourney case won't use that precedent to declare text-to-image AIs "spectacularly transformative" as well, even if in reality they are not?

It seems to me that the true silver lining in this ordeal is that, three years after the start of the AI boom, the controversial technology is finally leaving the legal gray area it has occupied all this time, meaning that it is still possible – though not certain, as Alsup's ruling shows – that the spread and development of generative AI could be halted in courts. Fingers crossed.

Don't forget to join our 80 Level Talent platform and our new Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
