YouTubers Add Snap to Growing Wave of Copyright Suits Over AI Training
Recommended for you

Major music publishers sue Anthropic, seek $3B+ over alleged mass copyright copying
A coalition led by Concord and Universal alleges that Anthropic copied and used more than 20,000 copyrighted musical works to train its Claude models, and it seeks in excess of $3 billion, relying in part on discovery from prior litigation to show patterns of bulk acquisition. The filing is part of a broader wave of creator and publisher suits testing how AI builders source training data, and it could force licensing, provenance controls, or injunctive limits on dataset procurement.

OpenAI faces copyright and trademark suit from Encyclopaedia Britannica
Encyclopaedia Britannica sued OpenAI claiming the company ingested proprietary encyclopedia text during model training and that outputs sometimes repeat or misattribute that material; the complaint seeks injunctive relief and trademark remedies. The filing comes amid a broader wave of litigation—including multi‑billion‑dollar demands and a reported $1.5 billion authors’ settlement—that is forcing publishers, archivists and model builders to reassess data sourcing, provenance and licensing practices.

Anthropic Settlement and Landmark Rulings Force AI Labs to Rework Training Data
Anthropic agreed to a $1.5 billion settlement after courts scrutinized how large language models handle copyrighted material, and parallel lawsuits by music publishers and creators broaden the exposure—pushing AI firms to reassess training-data provenance, licensing and acquisition channels.

Court Papers Reveal Anthropic Bought, Scanned and Destroyed Millions of Books to Train Its AI — And Tried to Keep It Quiet
Newly unsealed court documents show Anthropic acquired and digitized vast numbers of used books to refine its Claude models, then destroyed the physical copies. The disclosures sit alongside separate, expanding litigation and publisher actions — including a multi‑billion music‑publishing complaint and publisher blocks on the Internet Archive — that together signal a widening backlash over how training data is sourced.

European publishers lodge antitrust complaint over Google's AI summaries
The European Publishers Council has filed an antitrust complaint with EU authorities alleging that Google's AI-generated summary feature uses publishers' content without consent or fair payment. The complaint broadens a regulatory review that now intersects with EU Digital Markets Act demands and with parallel publisher tactics such as opt-outs and archive blocking, increasing pressure on regulators to consider structural or conduct remedies that could force licensing, product redesigns, or technical opt-outs for publishers.

xAI Loses Bid to Block California Training-Data Disclosure Law
A federal judge denied xAI’s request to pause California’s AB 2013, forcing the firm to disclose model-training provenance while its lawsuit proceeds. The ruling arrives amid broader industry litigation and discovery (including multi‑billion‑dollar claims and recent disclosures about bulk acquisition channels) that help explain why legislators and regulators are pressing for auditable provenance.

Studios Move to Block Seedance as Hyper‑Real AI Clips Spread
Major U.S. studios have demanded that ByteDance halt public use of Seedance 2.0 after the tool produced photorealistic short videos that replicate recognizable performers and copyrighted scenes. The episode exposes wider platform and moderation strains as cheap generative tools flood feeds, intensifying calls for provenance, clearer disclosure, and cross-platform standards.

UK Government Delays AI Copyright Bill After Creators Push Back
The UK Government has postponed passage of proposed rules allowing models to be trained on copyrighted works, removing the bill from the King's Speech timetable. The pause elevates a licensing-first pathway, intensifies industry lobbying, and creates regulatory uncertainty for Google and OpenAI as creators demand transparency and remuneration.