
Major music publishers sue Anthropic, seek $3B+ over alleged mass copyright copying
Recommended for you
YouTubers Add Snap to Growing Wave of Copyright Suits Over AI Training
A coalition of YouTube creators has filed a proposed class action accusing Snap of using their videos to train AI features without permission, alleging the company relied on research-only video-language datasets and sidestepped platform restrictions. The case seeks statutory damages and an injunction, and joins a string of recent suits that collectively threaten how firms source audiovisual training material for commercial AI products.

Court Papers Reveal Anthropic Bought, Scanned and Destroyed Millions of Books to Train Its AI — And Tried to Keep It Quiet
Newly unsealed court documents show Anthropic acquired and digitized vast numbers of used books to refine its Claude models, then destroyed the physical copies. The disclosures sit alongside separate, expanding litigation and publisher actions — including a multi‑billion music‑publishing complaint and publisher blocks on the Internet Archive — that together signal a widening backlash over how training data is sourced.

Anthropic Settlement and Landmark Rulings Force AI Labs to Rework Training Data
Anthropic agreed to a $1.5 billion settlement after courts scrutinized how large language models are trained on copyrighted material, while parallel lawsuits by music publishers and creators are broadening its exposure—pushing AI firms to reassess training-data provenance, licensing and acquisition channels.

OpenAI faces copyright and trademark suit from Encyclopaedia Britannica
Encyclopaedia Britannica sued OpenAI claiming the company ingested proprietary encyclopedia text during model training and that outputs sometimes repeat or misattribute that material; the complaint seeks injunctive relief and trademark remedies. The filing comes amid a broader wave of litigation—including multi‑billion‑dollar demands and a reported $1.5 billion authors’ settlement—that is forcing publishers, archivists and model builders to reassess data sourcing, provenance and licensing practices.

European publishers lodge antitrust complaint over Google's AI summaries
The European Publishers Council has filed an antitrust complaint with EU authorities alleging that Google's AI-generated summary feature uses publishers' content without consent or fair payment, broadening a regulatory review that now intersects with EU Digital Markets Act demands and parallel publisher tactics like opt-outs and archive blocking. The move increases pressure on regulators to consider structural or conduct remedies that could force licensing, product redesigns, or technical opt-outs for publishers.

Anthropic clashes with Pentagon over Claude use as $200M contract teeters
Anthropic is resisting Defense Department demands to broaden operational access to its Claude models, putting a roughly $200 million award at risk. The standoff — rooted in concerns about autonomous weapons, mass‑surveillance use-cases, and provenance/auditability inside classified networks — could set procurement and governance precedents across major AI vendors.

Anthropic Accuses DeepSeek, MiniMax and Moonshot of Mining Claude for Distillation
Anthropic alleges that three mainland-China labs used more than 24,000 fake accounts to record roughly 16 million exchanges with its Claude model for large-scale distillation. OpenAI and other industry disclosures describe similar extraction tactics, though Anthropic’s full counts have not been independently verified—deepening policy and legal debates over export controls, telemetry, and model-protection measures.

Publishers Restrict Internet Archive Access as AI Scraping Risks Rise
Several major news organizations are blocking the Internet Archive’s crawlers amid worries that AI companies could use the Archive as a conduit to collect paywalled journalism. The change intensifies legal and commercial conflicts over training data and raises short-term risks to public access and long-term questions about how journalistic content will be governed for AI use.