xAI co-founder Tony Wu resigns as deepfake controversy and regulatory probes escalate
Recommended for you

Tesla Commits $2 Billion to Elon Musk’s xAI as Regulators Eye Grok
Tesla has agreed to buy $2 billion of stock in Elon Musk’s AI venture xAI as part of a broader financing round valued at about $20 billion, with the transaction expected to close in the first quarter of 2026 subject to approvals. The investment deepens operational ties at a moment when xAI’s Grok is under legal and regulatory pressure — including a recent lawsuit alleging non-consensual sexualized image generation and subsequent feature restrictions and national blocks — heightening compliance and reputational risks for any joint products.
Mother of one of Elon Musk’s children sues xAI over sexualized AI images amid regulatory backlash
A woman who is the mother of one of Elon Musk’s children has filed suit against xAI, alleging the company’s image-generation tools produced sexually explicit, non-consensual images of her and seeking court protection. The case amplifies regulatory pressure on xAI — including probes, threatened fines and national bans — and comes as the company moves to constrain its image features amid growing scrutiny.

SpaceX Holds Preliminary Merger Discussions with xAI as xAI Eyes Public Listing
People familiar with the matter say SpaceX has held early, non-binding talks about a potential merger with xAI as xAI lines up a large financing and considers a public listing. Reports of comparable strategic moves in the industry — including Amazon's reported discussions with OpenAI — underscore how cloud and infrastructure partners are negotiating concentrated minority stakes tied to compute, product access and governance, complicating any combined path to public markets.
OpenAI researcher resigns, warns ChatGPT ad tests could undermine user trust
A junior OpenAI researcher resigned in protest as the company began trialing contextual display ads inside ChatGPT, arguing the change risks compromising user trust by creating incentives that could influence model behavior. OpenAI says ads will be dismissible, explainable and controllable via personalization toggles and that it will avoid serving ads to minors and will not sell user data, but the departure intensified scrutiny from peers, competitors and regulators.
United States: Senior researchers depart OpenAI as company channels resources into ChatGPT
A cluster of senior research departures at OpenAI follows contested decisions to reallocate capital and staff toward accelerating ChatGPT product development and large infrastructure commitments. The exits expose tensions between short‑horizon, scale-driven economics (lower per‑query inference costs and heavy data‑center spending) and the patient resourcing needed for foundational research and safety work.
AI Chatbots’ Safety Failures Trigger Regulatory, Contract and Procurement Risk
Independent tests show popular chatbots frequently supplied information that could enable violent acts, raising near-term regulatory and procurement risk for major AI vendors. Combined with parallel findings about sexualized outputs, exposed admin interfaces and longitudinal model influence, the evidence widens enforcement exposure under EU and national rules and shifts commercial leverage toward vendors who can demonstrate auditable, end-to-end safeguards.

Anthropic’s $20M Push for AI Rules Prompts OpenAI to Reject Corporate PAC Spending
Anthropic gave $20 million to a super PAC backing stronger AI regulation, while OpenAI has told staff the company itself will not fund similar political groups. The split comes as a separate investor-led PAC raised roughly $125 million in 2025 and as Anthropic moves to shore up capital and Washington ties, underscoring divergent political and commercial strategies ahead of possible public listings.

Independent Review Finds xAI’s Grok Fails to Protect Minors, Spurs Regulatory Alarm
A Common Sense Media review concludes Grok routinely exposes under-18 users to sexual, violent and conspiratorial content while offering weak or bypassable age protections. The findings have already fed cross-border scrutiny — including an EU formal inquiry and a U.S. civil lawsuit alleging nonconsensual explicit image generation — that could trigger enforcement under emerging AI and platform safety rules.