
Russia's synthetic-video campaigns accelerate disinformation reach
Context and chronology
A coordinated wave of lifelike synthetic clips has surged across social feeds in recent months, targeting Western institutions and partners supporting Kyiv. Researchers link much of the distribution to organised influence networks that repurpose authentic footage, layer on AI-generated audio and imagery, then amplify the content through recycled, compromised or purpose-built sockpuppet accounts to create the appearance of grassroots traction. Professor Alan Read, for example, discovered a manipulated clip that used his image and voice, a case analysts say fits a broader campaign architecture intended to erode trust in EU governments and Kyiv’s backers. Platforms have removed accounts and content, but takedowns routinely lag the speed at which these clips capture public attention.
Tactics, actors and technical levers
Operators combine commodity video-synthesis toolchains with cheaper, under‑regulated alternatives that omit provenance watermarks, enabling bespoke impersonations at low cost. Investigators and platform disclosures point to two complementary misuse pathways: direct in‑app generation and high-volume production built on harvested model outputs. OpenAI investigators, for example, reported internal chat logs and in‑app records connecting operational planning notes to live posts and fraudulent takedown claims; in parallel, disclosures from other labs describe large-scale model‑extraction activity that could supply the raw material for rapid content generation. Analysts have tied named clusters, including operations labelled Matryoshka and Storm-1516, to coordinated narratives aimed at elections and policy debates. The growing supply of low-cost generative apps that skip safety features expands the attack surface and accelerates misuse, delivering influence at scale on modest budgets.
Scale, forensic traces and observed tradecraft
Forensic trails differ by case: some findings rest on cross‑platform matches between chat transcripts and activity logs, which tie planning notes to specific postings, while others derive from aggregate telemetry and query signatures that suggest mass extraction of model outputs. Where disclosed, these investigations indicate hundreds of human operators and thousands of inauthentic accounts used to amplify messages, impersonate authorities, submit forged paperwork and file fraudulent takedowns. That mix of human direction and automation produces repeated, cross‑platform amplification designed to drown out genuine signals and hinder attribution.
Operational effects and governance pressure
Platforms face an acute trade-off between rapid content moderation and protecting legitimate speech, and existing legal frameworks were not written for generative media. Public agencies in affected states have launched probes and demanded platform accountability after episodes that attracted hundreds of thousands of views and prompted coordinated removals. The convergence of model extraction, direct in‑app misuse and proliferating low-cost clones lets adversaries increase production speed while reducing attribution certainty, raising the difficulty for forensic investigators and the burden on small moderation teams. Expect policy responses to prioritise provenance transparency, cross-company sharing of telemetry and forensic signals, faster cross‑platform takedowns and liability pressure on apps that disable safety features.