Senators demand answers from Meta over delay in default-private settings for teen accounts
Recommended for you

Meta Faces High-Stakes Trials Over Alleged Failures to Protect Children
Meta is defending separate, high‑profile proceedings in New Mexico and California that together probe whether product design choices across Facebook and Instagram exposed minors to predation and addictive use patterns. Plaintiffs plan to rely on thousands of internal documents and behavioral‑science experts, while a bipartisan group of U.S. senators is pressing Meta for records after court filings suggested safety changes were discussed well before they were implemented.

Meta, Apple in Court Over Child‑Safety and Encryption Choices
Separate state suits and a bellwether Los Angeles trial are using internal documents and executive testimony to challenge how product design and encryption choices affect child safety; lawmakers and international regulators are watching as outcomes could force technical remedies, new disclosure duties, or national policy responses.

Meta accelerates in‑house AI for moderation, cutting reliance on contractors
Meta is shifting content moderation work from external contractors to proprietary AI systems in a staged, multi‑year rollout while simultaneously ramping AI capital spending and piloting paid AI features. The consolidation speeds iteration and signal capture for safety models but collides with concurrent privacy initiatives and high‑profile litigation, magnifying regulatory, data and operational risks.

Ofcom Demands Tighter Age Verification from Major Social Platforms
UK regulators Ofcom and the ICO have pressed major social platforms to deploy robust age‑verification measures to block under‑13 registrations, citing high self‑reported child account prevalence and large volumes of suspected‑underage account removals. Firms now face immediate choices between third‑party or device‑level attestations and deeper product redesigns that reshape onboarding and recommendation exposure. The push amplifies privacy, security and market‑structure tensions — from vendor data retention and a recent identity‑image breach to divergent regulatory tools and platform promises about biometric ephemerality.

Spain orders prosecutors to probe X, Meta and TikTok over AI-generated child sexual abuse material
Spain has instructed prosecutors to open criminal inquiries into X, Meta and TikTok over alleged AI-generated child sexual abuse material, part of a wider push that includes a proposed minimum social‑media age of 16. The step comes amid parallel EU and national scrutiny of generative‑AI features — notably a formal Brussels inquiry into X’s Grok and recent French judicial actions — signaling growing cross‑border legal pressure on platforms.

Public pressure is forcing tech platforms toward stronger protections for children
Public and political pressure across Europe, parts of the US, and other democracies is pushing social platforms to rethink how products interact with minors, prompting proposals from parental-consent frameworks to explicit age gates. Technical, legal and behavioural hurdles — from verification limits to circumvention and privacy risks — mean the result will be a fragmented set of rules, experiments and litigation rather than a single global solution.

Meta Faces Privacy Lawsuit Over Ray‑Ban Smart Glasses
Meta is being sued in a U.S. class action alleging undisclosed human review of footage from Ray‑Ban smart glasses, challenging the product's advertised privacy protections. Independent reporting that contractors sometimes label intimate clips, alongside separate reporting that Meta is internally weighing face‑identification features ("Name Tag"), raises the stakes for regulators, investors and enterprise partners evaluating biometric identification risks in wearables.

Australia Rebukes Major Tech Firms Over Failures to Curb Child Sexual Abuse Material
Australia’s government publicly condemned large technology platforms for failing to stop the spread of child sexual abuse content, pressing for faster detection, clearer reporting and stronger enforcement. Officials signalled tougher oversight and potential regulatory steps that would force platforms to change their moderation practices and improve cooperation with law enforcement.