
Australia Rebukes Major Tech Firms Over Failures to Curb Child Sexual Abuse Material
Recommended for you

Spain orders prosecutors to probe X, Meta and TikTok over AI-generated child sexual abuse material
Spain has instructed prosecutors to open criminal inquiries into X, Meta and TikTok over alleged AI-generated child sexual abuse material, part of a wider push that includes a proposed minimum social‑media age of 16. The step comes amid parallel EU and national scrutiny of generative‑AI features — notably a formal Brussels inquiry into X’s Grok and recent French judicial actions — signaling growing cross‑border legal pressure on platforms.

Australia's eSafety Regulator Moves to Force Age Checks on Chatbots
Australia's eSafety regulator is threatening app stores and search engines with enforcement unless AI chat services adopt robust age verification by March 9. The move comes amid a broader international trend, including pending measures in other jurisdictions to graft chatbot duties onto online-safety laws, and points to faster, distribution-level intervention that could raise compliance costs, fragmentation and privacy trade-offs, with penalties of up to A$49.5 million.

Australian minister challenges Roblox's PG rating amid child safety concerns
Australia's communications minister has formally asked Roblox to explain how it protects children and requested government testing of the platform's safeguards while urging a review of its PG classification. The move reflects a broader Australian push to convert public criticism of platforms into enforceable oversight and could lead to technical mandates or regulatory sanctions if protections are judged insufficient.

Public pressure is forcing tech platforms toward stronger protections for children
Public and political pressure across Europe, parts of the US, and other democracies is pushing social platforms to rethink how their products interact with minors, prompting proposals ranging from parental-consent frameworks to explicit age gates. Technical, legal and behavioural hurdles, from verification limits to circumvention and privacy risks, mean the result will be a fragmented set of rules, experiments and litigation rather than a single global solution.

Meta Faces High-Stakes Trials Over Alleged Failures to Protect Children
Meta is defending separate, high-profile proceedings in New Mexico and California that together probe whether product-design choices across Facebook and Instagram exposed minors to predation and addictive use patterns. Plaintiffs plan to rely on thousands of internal documents and behavioral-science experts, while a bipartisan group of U.S. senators is pressing Meta for records after filings suggested safety changes were discussed internally well before they were implemented.

Meta, Apple in Court Over Child‑Safety and Encryption Choices
In separate state suits and a bellwether Los Angeles trial, plaintiffs are using internal documents and executive testimony to challenge how product-design and encryption choices affect child safety; lawmakers and international regulators are watching, as outcomes could force technical remedies, new disclosure duties or national policy responses.

Ofcom Demands Tighter Age Verification from Major Social Platforms
UK regulators Ofcom and the ICO have pressed major social platforms to deploy robust age-verification measures to block under-13 registrations, citing the high prevalence of self-reported child accounts and large volumes of suspected-underage account removals; firms now face immediate choices between third-party or device-based attestations and deeper product redesigns that reshape onboarding and recommendation exposure. The push amplifies privacy, security and market-structure tensions, from vendor data retention and a recent identity-image breach to divergent regulatory tools and platform promises about biometric ephemerality.

Australia’s eSafety Commissioner steers a high-stakes social media experiment
Julie Inman Grant has become the public face and enforcement engine behind Australia’s controversial under-16 social media ban, balancing legal fights, platform resistance and intense personal abuse. Her office is implementing the law across major services while preparing for court challenges and shifting attention to regulation of AI and platform design.