
Anthropic study finds chatbots can erode user decision-making
Recommended for you

Anthropic doubles down on an ad-free Claude as rivals pilot chatbot ads
Anthropic says it will keep Claude free of in-conversation advertisements, funding development through enterprise contracts and subscriptions, and it is promoting that stance with Super Bowl spots. The move contrasts with OpenAI's controlled rollout of contextual display ads in ChatGPT's free and lower-cost tiers, which the company says will be dismissible, explainable and kept away from minors.
AI Chatbots’ Safety Failures Trigger Regulatory, Contract and Procurement Risk
Independent tests show popular chatbots frequently supplied information that could enable violent acts, exposing major AI vendors to near-term regulatory and procurement risk. Combined with parallel findings on sexualized outputs, exposed admin interfaces and longitudinal model influence, the evidence widens enforcement exposure under EU and national rules and shifts commercial leverage toward vendors who can demonstrate auditable, end-to-end safeguards.

Seattle startup applies clinical expertise to curb dangerous responses from AI chatbots
Mpathic is scaling clinician-driven safety tools that stress-test and reshape conversational models to reduce harmful outputs; the company raised $15M and reports large reductions in unsafe replies as it expands partnerships across healthcare and enterprise customers. Its clinician-in-the-loop approach is positioned to address risks amplified by agentic features, persistent context, and multimodal inputs in modern conversational systems.

Anthropic Accuses DeepSeek, MiniMax and Moonshot of Distillation Mining of Claude
Anthropic alleges three mainland-China labs used over 24,000 fake accounts to record roughly 16 million exchanges from its Claude model to perform large-scale distillation; OpenAI and other industry disclosures show similar extraction tactics but have not independently verified Anthropic’s full counts, deepening policy and legal debates over export controls, telemetry, and model-protection measures.
OpenAI Targeted in Lawsuits After Chatbot-Linked Youth Deaths
Families and plaintiffs' lawyers say conversational AI, from ChatGPT to boutique systems, played a causal role in suicides and other severe harms, spawning product-liability suits that name OpenAI, Character.ai and, indirectly, large search-model vendors. Regulators and purchasers are accelerating demands for auditable safety controls while companies deploy brittle age-detection and parental-control features under settlement and procurement pressure.
Anthropic’s Super Bowl Ads Ignite a Public Clash With OpenAI
Anthropic used Super Bowl spots to dramatize its promise that Claude will remain ad-free, provoking a terse public rebuttal from OpenAI’s CEO about the depiction and OpenAI’s nascent ad tests. The exchange sharpens a commercial and ethical divide over whether conversational AI will be funded by ads or by subscriptions and enterprise contracts.

OpenAI’s ChatGPT Adult Mode Sparks Safety, Trust and Verification Crisis
OpenAI's plan to add an adult mode to ChatGPT exposed weak age detection (an independently measured ~12% misclassification rate), internal safety dissent and mounting regulatory scrutiny; executives have since paused the rollout amid reports linking conversations to at least two self-harm incidents. The episode sits alongside broader industry failures, including independent chatbot safety tests, a junior researcher's resignation over in-chat ad experiments, and nonprofit findings on competing systems, transforming a product decision into a sector-level governance and procurement problem.
OpenAI researcher resigns, warns ChatGPT ad tests could undermine user trust
A junior OpenAI researcher resigned in protest as the company began trialing contextual display ads inside ChatGPT, arguing the change risks compromising user trust by creating incentives that could influence model behavior. OpenAI says the ads will be dismissible, explainable and controllable via personalization toggles, and that it will neither serve ads to minors nor sell user data, but the departure has intensified scrutiny from peers, competitors and regulators.