
Investigation Finds App Stores Hosting Scores of AI ‘Nudify’ Tools, Exposing Policy Gaps
Recommended for you
TikTok Bans AI-Generated Sexualised Black Avatars After Investigation
TikTok removed 20 accounts using realistic AI avatars that funneled users to paid explicit sites; related Instagram networks remain under scrutiny. Parallel reporting finds similar market-scale harms in app stores and accelerating regulatory action in Europe, underscoring cross-platform and cross-jurisdictional risks.

Independent Review Finds xAI’s Grok Fails to Protect Minors, Spurs Regulatory Alarm
A Common Sense Media review concludes Grok routinely exposes under-18 users to sexual, violent and conspiratorial content while offering weak or bypassable age protections. The findings have already fed cross-border scrutiny — including an EU formal inquiry and a U.S. civil lawsuit alleging nonconsensual explicit image generation — that could trigger enforcement under emerging AI and platform safety rules.

Surveillance, security lapses and viral agents: a roundup of risks reshaping law enforcement and AI
Recent coverage links expanded government surveillance tooling to broader operational risks while detailing multiple consumer- and enterprise-facing AI failures: unsecured agent deployments exposing keys and chats, a child-toy cloud console leaking tens of thousands of transcripts, and a catalogue of apps and model flows that enable non-consensual sexualized imagery. Together these episodes highlight how rapid capability adoption, weak defaults, and inconsistent platform enforcement magnify privacy, legal and security exposure.

European Commission Opens Probe of X’s Grok Over AI-Generated Sexual Imagery and Possible CSAM
The European Commission has launched a formal investigation into X’s deployment of the Grok AI model to determine whether it allowed the creation or spread of sexually explicit synthetic images, including content that may meet the threshold for child sexual abuse material (CSAM). The probe follows reporting and parallel legal and regulatory action in multiple jurisdictions — including a lawsuit from a woman alleging non-consensual sexualized images, national blocks on the service, and inquiries from UK, French and U.S. authorities — and will test X’s risk controls under the Digital Services Act.

Apple Tightens App Store Access with Age Verification Measures
Apple has activated platform-level age checks and published a Declared Age Range API to help developers comply with new local laws; simultaneously, Brazil is preparing a federal decree that would extend mandatory certified age attestations across storefronts, content platforms and the ad ecosystem, forcing a design choice between identity-based checks and privacy-preserving attestations. The combined shift accelerates platform-centered enforcement, raises privacy and compliance-cost risks, and is likely to spur a market for cryptographic age‑attestation services.

Google Play tightens defenses — blocks 1.75M policy-violating apps in 2025
Google says improved automation and expanded post-publish checks stopped 1.75 million policy-violating app submissions in 2025 (down from 2.36 million in 2024). The company also rolled out stronger sideloading controls — verification gates, targeted warnings and a phased, region-by-region rollout that keeps an opt-out path for advanced users — to make outside-store installs riskier for typical users while feeding telemetry into detection systems.

OpenAI: ChatGPT records expose transnational suppression network
OpenAI released internal records showing a coordinated campaign that used ChatGPT to run harassment and takedown operations against overseas critics. The disclosure links a large actor network, involving hundreds of operators and thousands of fake accounts, to real-world misinformation and platform abuse, sharpening regulatory and security pressures.

EU advances ban on AI-created child sexual imagery
EU governments moved to insert an explicit ban on AI-generated child sexual imagery into the bloc’s AI Act, accelerating cross-border legal pressure on platforms and model developers. The move comes amid a patchwork of national criminal probes, a Brussels inquiry into X’s Grok, civil litigation over generated images, and domestic legislative pushes that together raise immediate compliance and reputational stakes for major platforms.