
Google DeepMind restricts Antigravity access, cutting OpenClaw integrations
Antigravity access restricted — ecosystem consequences
On February 23, Google DeepMind moved to disable third-party integrations that routed requests to its Antigravity runtime after detecting large-scale, abusive traffic patterns that degraded service for paying customers.
The cutoff affected developers who had linked the open-source agent OpenClaw to Antigravity and, in some cases, triggered temporary account lockouts for users who had connected those agents to their core Google accounts.
Google framed the intervention as capacity and misuse control — throttling a specific access pathway that multiplied requests for Gemini tokens through intermediary platforms — rather than a blanket ban on external integrations.
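The mechanism Google describes, throttling one identified access pathway rather than banning integrations outright, resembles a per-pathway token bucket. The sketch below is illustrative only: the pathway names, capacities, and refill rates are assumptions, not Google's actual limits or API.

```python
import time
from dataclasses import dataclass, field

@dataclass
class PathwayThrottle:
    """Token-bucket rate limiter for one access pathway (illustrative)."""
    capacity: float          # maximum burst size, in request tokens
    refill_rate: float       # tokens replenished per second
    tokens: float = field(init=False)
    last: float = field(init=False)

    def __post_init__(self):
        self.tokens = self.capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per pathway: the hypothetical third-party wrapper route gets a
# much tighter budget than first-party clients -- throttled, not banned.
throttles = {
    "first_party": PathwayThrottle(capacity=100, refill_rate=50),
    "third_party_wrapper": PathwayThrottle(capacity=5, refill_rate=1),
}

def admit(pathway: str) -> bool:
    return throttles[pathway].allow()
```

Under this scheme a wrapper pathway that multiplies requests exhausts its small bucket quickly, while first-party traffic is largely unaffected.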
The timing sharpens the competitive subtext: weeks after OpenClaw’s lead developer joined a rival lab, the enforcement severs a convenient bridge between an OpenAI-adjacent toolset and Google’s frontier models.
Developers and users reported disruption to workflows that relied on agentic orchestration, demonstrating how quickly a provider-side policy decision can ripple into identity and productivity friction.
Technically, the episode exposes tension between open-agent extensibility and a provider’s need to protect runtime quality for paid tiers, including high-cost subscriptions such as the $250/month Ultra level.
This is not isolated: other model vendors have introduced fingerprinting and rate controls to block third-party wrappers, signaling a broader industry move toward controlled access enclaves.
For enterprises, the incident highlights a core trade-off: the lower friction of managed model stacks versus the operational risk of third-party agent dependencies tied to core identity providers.
OpenClaw’s maintainers signaled a formal removal of Google-specific support, increasing the odds that users will migrate to either vendor-aligned agent platforms or fully self-hosted stacks.
Operationally, teams that baked Antigravity into business logic now face remediation choices: re-architect to run agents inside VPCs, negotiate direct API contracts, or accept intermittent access constraints.
Strategically, the event crystallizes a shift toward vertically integrated agent ecosystems where telemetry, subscription revenue, and guardrails are centrally controlled by model owners.
Short-term, expect tightened ToS enforcement, more restrictive token issuance, and a wave of orchestration vendors pitching governed OpenClaw-compatible gateways for enterprises.
Mid-term, organizations must weigh higher API and hosting costs against the fragility of relying on third-party agent wrappers connected to identity-critical services.
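A governed gateway of the kind those orchestration vendors might pitch can be sketched as a thin policy layer that vets each agent tool call before forwarding it upstream. The allow-list, blocked patterns, and call shape below are assumptions chosen for illustration, not any product's real API.

```python
from typing import Callable

ALLOWED_TOOLS = {"search", "summarize"}        # hypothetical per-tenant allow-list
BLOCKED_PATTERNS = ("api_key", "password")     # crude credential-exfiltration guard

def governed_call(tool: str, payload: str,
                  upstream: Callable[[str, str], str]) -> str:
    """Enforce gateway policy, then forward the agent's tool call upstream."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} not permitted by gateway policy")
    if any(p in payload.lower() for p in BLOCKED_PATTERNS):
        raise PermissionError("payload matches credential-exfiltration pattern")
    return upstream(tool, payload)
```

In practice the `upstream` callable would wrap a direct API contract or a VPC-hosted model endpoint, giving the enterprise one choke point for audit logs and policy instead of scattered wrapper integrations.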