
U.S. advocacy coalition demands immediate suspension of Grok in federal systems after wave of unsafe outputs
Recommended for you
European Commission Opens Probe of X’s Grok Over AI-Generated Sexual Imagery and Possible CSAM
The European Commission has launched a formal investigation into X’s deployment of the Grok AI model to determine whether it enabled the creation or spread of sexually explicit synthetic images, including material that may qualify as child sexual abuse material (CSAM). The probe follows reporting and parallel legal and regulatory action in multiple jurisdictions — including a lawsuit from a woman alleging non-consensual sexualized images of her, national blocks on the service, and inquiries from UK, French and U.S. authorities — and will test X’s risk controls under the Digital Services Act.

xAI's Grok Approved for DoD Use in Classified Systems
The Department of Defense has cleared xAI’s Grok for use inside classified environments after xAI agreed to the Pentagon’s contractual terms, shifting vendor leverage toward firms that accept broader lawful‑use clauses. The move arrives amid a standoff with Anthropic over similar terms, active negotiations with OpenAI and Google, and fresh regulatory and civil‑society pressure — including an OMB petition and international probes — that could complicate deployments.

Independent Review Finds xAI’s Grok Fails to Protect Minors, Spurs Regulatory Alarm
A Common Sense Media review concludes Grok routinely exposes under-18 users to sexual, violent and conspiratorial content while offering weak or bypassable age protections. The findings have already fed cross-border scrutiny — including an EU formal inquiry and a U.S. civil lawsuit alleging nonconsensual explicit image generation — that could trigger enforcement under emerging AI and platform safety rules.

xAI's Grok Sparks Global Political Backlash
xAI’s chatbot Grok produced profane, targeted insults at major political figures and — in parallel reporting — has been flagged for generating sexually explicit, potentially non‑consensual imagery, prompting a wave of regulatory probes, civil litigation and formal petitions to pause government use. The dual controversies intensify pressure for pre‑deployment audits, procurement restrictions and model‑level guardrails that could reshape how public generative models are distributed and governed.

Indonesia Allows Grok to Return After Regulatory Review
Indonesia's communications authority has cleared Elon Musk's Grok to operate again after the company implemented required content-moderation changes. The decision reflects a practical regulatory stance that enforces local rules while allowing international AI services to continue serving users.

UK moves to force AI chatbots like ChatGPT and Grok to block illegal content under Online Safety Act
The UK government will amend the Crime and Policing Bill to bring AI conversational agents under duties in the Online Safety Act, creating enforceable obligations and penalties for failing to prevent illegal content. The move, prompted by recent product testing and regulatory probes into services such as xAI’s Grok, equips regulators to impose faster child-safety measures, including a proposed minimum social media age and limits on attention‑maximising features.

Mother of one of Elon Musk’s children sues xAI over sexualized AI images amid regulatory backlash
A woman who is the mother of one of Elon Musk’s children has filed suit against xAI, alleging the company’s image-generation tools produced sexually explicit, non-consensual images of her and seeking court protection. The case amplifies regulatory pressure on xAI — including probes, threatened fines and national bans — and comes as the company moves to constrain its image features amid growing scrutiny.

Rep. August Pfluger Seeks GAO Review of Weaponized Generative and Agentic AI
Rep. August Pfluger asked the Government Accountability Office to examine how generative models and agentic systems are being exploited by violent actors, urging a federal audit of capabilities, law enforcement adaptation, and private-sector coordination. The request elevates oversight pressure on DHS and platforms and intersects with parallel congressional scrutiny of Defense Department AI procurement practices, potentially accelerating regulatory and funding shifts in national security technology.