
UK moves to force AI chatbots like ChatGPT and Grok to block illegal content under Online Safety Act
The government is extending legal obligations to generative chat interfaces and intends to make non-compliance a punishable offence within weeks, targeting providers such as ChatGPT and Grok. The amendment will graft chatbot duties onto the Online Safety Act via the Crime and Policing Bill, and create expedited powers so ministers and regulators can impose child‑protection measures faster than current law allows.
The legislative push follows mounting evidence and official inquiries into real‑world product behaviour. An independent evaluation found that xAI’s Grok delivered sexually explicit and otherwise harmful outputs which its age‑detection and safety modes failed to block reliably. That assessment, combined with an automated image‑generation feature that produced sexualised depictions and was subsequently removed, has already prompted regulatory scrutiny in multiple jurisdictions, including a formal inquiry in Brussels. Separate civil litigation over non‑consensual sexualised images has added legal pressure, and xAI has publicly narrowed some image‑generation capabilities while regulators weigh procedural and technical failures.
Under the amendment, enforcement bodies—most notably Ofcom—would be empowered to require technical mitigations, demand audit trails and pre‑deployment risk assessments, and levy fines or other remedies when models serve or facilitate illegal material. Proposed levers include a minimum social‑media age of 16, tighter controls on sharing explicit images, restrictions on attention‑maximising product features such as infinite scroll, and limits on minors’ access to AI agents and privacy‑preserving tools such as VPNs that complicate age assurance.
These rules will force platform and model operators to rework safety pipelines: more robust filtering, stricter prompt constraints, mandatory red‑teaming and adversarial testing, comprehensive logging and incident response, and demonstrable pre‑launch documentation. Findings such as Common Sense Media’s review of Grok highlight recurring technical gaps, notably weak age assurance, mode‑specific overrides and inadequate adversarial testing, that regulators are likely to treat as evidence of insufficient “reasonable steps.”
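To illustrate what an auditable output gate of this kind might involve, here is a minimal Python sketch. Everything in it is hypothetical: the category labels, the keyword “classifier” (a stand‑in for a real trained safety model), and the log format are assumptions for illustration, not anything the Act, Ofcom or any vendor specifies. The point is the shape regulators are pressing for: a blocking decision that is made before content is served and leaves a tamper‑evident record behind.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical prohibited categories; a real deployment would use
# trained classifiers and a legally reviewed taxonomy.
BLOCKED_CATEGORIES = {"sexual_minors", "illegal_weapons"}


@dataclass
class AuditLog:
    """Append-only entries, of the kind a regulator-facing audit trail might hold."""
    entries: list = field(default_factory=list)

    def record(self, prompt: str, decision: str, category: str) -> None:
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            # Hash rather than store the raw prompt, to limit retained personal data.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "decision": decision,
            "category": category,
        })


def classify(text: str) -> str:
    """Stand-in classifier: naive keyword match instead of a real safety model."""
    lowered = text.lower()
    if "minor" in lowered and "explicit" in lowered:
        return "sexual_minors"
    return "none"


def moderate(prompt: str, model_reply: str, log: AuditLog) -> str:
    """Gate the model's reply: block and log whenever a prohibited category fires."""
    category = classify(model_reply)
    if category in BLOCKED_CATEGORIES:
        log.record(prompt, "blocked", category)
        return "[content blocked]"
    log.record(prompt, "allowed", category)
    return model_reply
```

The design choice worth noting is that every decision, allowed or blocked, is logged: “demonstrable” compliance turns on being able to show what the system did, not only on what it refused.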
Practical challenges remain: scalable, privacy‑preserving age verification is technically and politically fraught; geofencing and account‑based approaches can be bypassed by shared credentials or VPNs; and per‑market safety settings risk fragmenting services and increasing operational costs. The UK measure sits in a broader, contested international landscape where the EU, member states and other countries are exploring age caps, product‑level constraints and faster enforcement, producing a patchwork of responses that will push multinational vendors toward regionally differentiated configurations or conservative global defaults.
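To make the privacy trade‑off concrete, here is a minimal sketch of token‑based age assurance, in which the platform verifies a signed “over‑16” claim without ever seeing identity documents. The scheme is entirely illustrative: the shared `PROVIDER_KEY`, HMAC construction and claim format are assumptions; real deployments would use asymmetric signatures or zero‑knowledge proofs, and nothing here prevents a token being shared between users, which is precisely the bypass problem noted above.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared secret between an age-assurance provider and a platform.
PROVIDER_KEY = b"demo-secret"


def issue_age_token(over_16: bool) -> str:
    """Provider side: sign a minimal claim that carries no identity data."""
    claim = base64.urlsafe_b64encode(json.dumps({"over_16": over_16}).encode())
    sig = hmac.new(PROVIDER_KEY, claim, hashlib.sha256).hexdigest().encode()
    return (claim + b"." + sig).decode()


def verify_age_token(token: str) -> bool:
    """Platform side: accept the claim only if the signature checks out."""
    claim_b64, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(PROVIDER_KEY, claim_b64, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claim = json.loads(base64.urlsafe_b64decode(claim_b64))
    return bool(claim.get("over_16"))
```

The privacy property is that the verifier learns a single bit (over 16 or not) rather than a name or date of birth; the weakness, as the paragraph above notes, is that the bit is bound to a token, not a person.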
For technology teams, the message is clear: invest in demonstrable governance—independent testing, logging, model documentation and explainability—because statutory duties will hinge on provable due diligence rather than post‑hoc takedowns. For policymakers, the amendment creates a faster legal pathway to address emergent harms from multimodal agents without repeatedly reworking primary statute. For civil society, the change promises stronger protections for minors but intensifies debates about innovation friction, data privacy in age‑verification, and how responsibility is allocated between model creators and hosting platforms. Overall, the amendment accelerates an international trend toward capability‑aware regulation and raises the stakes for vendors to show they can prevent foreseeable harms before deployment.