
API Attacks Surge as AI Expands the Blast Radius; Wallarm Flags MCP Risk
APIs now dominate attack campaigns, and AI-driven agents are widening the window for damage. Wallarm’s review of more than 60,000 disclosed vulnerabilities counted roughly 11,000 API-related issues (about 17%), while CISA’s Known Exploited Vulnerabilities entries attributed 43% of exploited flaws in 2025 to APIs.
Exploitability metrics are stark: the vast majority of API flaws are trivial to weaponize, often with a single HTTP call and without credentials. High-profile incidents affecting 700Credit, Qantas, and Salesloft illustrate how exposed interfaces translate to large-scale data theft and operational impact.
A fast-growing vector is the Model Context Protocol (MCP), an emerging control-plane standard that links language models to external tools and data sources. Wallarm logged 315 MCP-related flaws in 2025 and observed a 270% jump from Q2 to Q3, signaling rapid risk accumulation as adopters deploy bespoke MCP servers.
MCP weaknesses typically combine three conditions: agents granted broad privileges, APIs exposed without sufficient hardening, and an absence of runtime policy enforcement. Together, these failure modes let attackers hijack entire autonomous workflows rather than merely compromising isolated endpoints.
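The third failure mode, missing runtime policy enforcement, can be sketched as a deny-by-default gate in front of agent tool calls. Everything here is an assumption for illustration: the agent names, tool names, and scope table are hypothetical, and a real MCP server would enforce this server-side rather than in a wrapper.

```python
# Minimal sketch of runtime policy enforcement for agent tool calls:
# scope privileges per agent, deny by default, check every call at runtime.
# Agent/tool names and the scope table are hypothetical.

AGENT_SCOPES = {
    "support-bot": {"tickets.read", "kb.search"},
    "billing-bot": {"invoices.read"},
}

class PolicyViolation(Exception):
    """Raised when an agent attempts a tool call outside its granted scope."""

def invoke_tool(agent: str, tool: str, call):
    """Deny-by-default gate: the tool must appear in the agent's scope."""
    granted = AGENT_SCOPES.get(agent, set())   # unknown agents get no scope
    if tool not in granted:
        raise PolicyViolation(f"{agent} is not permitted to call {tool}")
    return call()

# The support agent may read tickets, but an attempt to read invoices
# raises PolicyViolation instead of silently succeeding.
result = invoke_tool("support-bot", "tickets.read", lambda: "ticket-42")
```

The design choice worth noting is deny-by-default: an agent absent from the scope table gets an empty scope, so a newly deployed or compromised agent cannot call anything until a policy explicitly grants it.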
Beyond raw vulnerability counts, improvements in generative models and coordinating agents are shrinking the gap between disclosure and practical exploitation. Programmatic reconnaissance and agentic toolchains let adversaries rapidly stitch together contextual information and craft multi-step attacks that abuse logic and trust rather than relying on subtle code defects.
The human attack surface is changing too: high-fidelity synthetic media and automated persona generation make credential theft and session hijacking more valuable, because forged artifacts and convincing lures can turn stolen access into persistent, high-quality footholds. Commodity AI toolkits lower the skill floor, enabling a larger pool of operators to execute sophisticated, adaptive playbooks.
This shift pushes defenders away from purely signature-based controls toward behavioral telemetry, cross-domain signal fusion (endpoint, identity, cloud, browser), and faster containment and validation workflows. Wallarm recommends prioritizing runtime enforcement, strict token and session governance, least-privilege for agents, and continuous API posture monitoring—controls that limit what an attacker can do even after initial access.
Operational controls gaining traction include agent identity attestation, human-in-the-loop gates for high-impact actions, and multi-party verification for sensitive data flows. Together these measures constrain autonomous workflows and reduce the value of weaponized synthetic content.
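A human-in-the-loop gate for high-impact actions can be sketched in a few lines. The risk tiers, action names, and approver callback below are hypothetical, chosen only to show the shape of the control: routine actions run autonomously, while high-impact ones block until a human approves.

```python
# Illustrative human-in-the-loop gate: high-impact actions require an
# explicit approval instead of executing autonomously. The action names
# and risk tiers are hypothetical, not from any specific product.

HIGH_IMPACT = {"delete_records", "export_customer_data", "rotate_all_keys"}

def run_action(action: str, execute, approver=None):
    """Run low-risk actions directly; gate high-impact ones on approval."""
    if action in HIGH_IMPACT:
        if approver is None or not approver(action):
            return ("blocked", action)       # no approver, or approver said no
        return ("approved", execute())       # human explicitly signed off
    return ("auto", execute())               # routine action, runs autonomously

# A routine lookup runs unattended; a bulk export needs a human yes.
auto = run_action("lookup_ticket", lambda: "ticket-7")
blocked = run_action("export_customer_data", lambda: "dump")
allowed = run_action("export_customer_data", lambda: "dump",
                     approver=lambda a: True)
```

Returning a status tuple rather than raising makes the gate auditable: blocked attempts become log entries, which is exactly the behavioral telemetry the preceding paragraph argues for.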
Regulatory pressure and incident disclosures are already changing incentives: faster breach disclosure and stiffer penalties increase the cost of lagging mitigation, while investment is flowing into resilient, verifiable automation and tools that enable deterministic recovery.
In short, APIs and emerging control-plane standards like MCP concentrate risk in configurable server implementations, and AI accelerates adversary operations—so organizations must couple behavioral detection with strict governance to blunt an expanding blast radius.
- Total disclosures analyzed: 60,000+
- API-related vulnerabilities: 11,000 (~17%)
- CISA KEV share attributed to APIs: 43%
- MCP-related vulnerabilities (2025): 315
- MCP Q2→Q3 growth: 270%
- Exploitability: 97% single-request, 98% easy/trivial, 99% remotely exploitable, 59% require no authentication