Judge Dismisses Authors' Copyright Suit Against Meta Over AI Use

Meta Platforms, the parent company of Facebook, won a legal victory as a U.S. federal judge dismissed a copyright infringement lawsuit filed by 13 authors who claimed the company had misused their work to train AI models. But the judge's ruling was not a full-throated repudiation of Meta's practices, and it may foreshadow legal headaches for the AI industry.

U.S. District Judge Vince Chhabria said in a ruling Wednesday that the lawsuit wasn't dismissed because Meta's conduct was definitely legal, but because the authors "brought the wrong kind of claim" and failed to make a compelling case. That ruling, he emphasized, is not a validation of Meta's AI training techniques.
"This holding does not mean that Meta's use of copyrighted materials to train its language models is lawful," Chhabria wrote. It only means that these plaintiffs raised the wrong arguments, and they failed to develop the record to support the right one."
The authors, among them Sarah Silverman, Ta-Nehisi Coates and Junot Diaz, alleged that Meta used pirated copies of their books to train its generative AI system, Llama. Their lawyers contended that Meta should have paid for licenses and that relying on pirated books exposed the company to significant legal risk.
Fair Use or Copyright Violation? The Legal Debate Intensifies
Meta countered that its AI models do not directly replicate the authors' texts and that training AI models on large data sets is "fair use," an argument that remains contested and legally unsettled. Meta also said that Llama does not let users access or reproduce the authors' books, and that its AI-generated outputs are "transformative" rather than a substitute for the original material.
While Meta managed to get the case dismissed, the 40-page ruling left open the possibility that other plaintiffs with more robust legal arguments might fare better. Chhabria seemed skeptical of AI companies' reliance on unlicensed copyrighted materials and was not persuaded that requiring them to pay for those works would stifle innovation.
"These products are anticipated to produce hundreds of billions, if not trillions, of dollars," Chhabria added. "If training the models on copyrighted works is as essential as the companies claim, they'll figure out a way to pay for it."
A Fractured Legal Landscape as AI Takes Hold
The dismissal came just days after a ruling in a similar case found that the AI company Anthropic could continue using copyrighted books to train its chatbot Claude under the doctrine of fair use. That case, however, will proceed to trial on the question of whether Anthropic pirated the books it acquired.
The lawsuit also surfaced internal debates at Meta about the risks of scraping pirated content, debates significant enough to reach CEO Mark Zuckerberg. Even so, Meta argued that its use of pirated copies did not undermine the "transformative" nature of the training, citing as a model the way Google assembled the archive behind Google Books, which courts ultimately found to be fair use.
Though the court ruled in Meta's favor, lawyers for the authors maintained that AI companies still violate copyright law when they create new copies of copyrighted works without permission. They said they were disappointed with the ruling and indicated they would likely continue to press their claims against Meta.
"This decision applies only to these 13 authors — not the many others whose works Meta exploited," Chhabria wrote, essentially offering an opening for more sweeping and better-argued legal challenges to come.