
Google: Public GCP API Keys Became Gemini Credentials, Exposing Data
Context and Chronology
Researchers at Truffle Security scanned public web archives and discovered that certain Google Cloud keys, long used merely to meter services, were suddenly accepted by the Gemini API as authentication tokens. The change appears to have been introduced when Google rolled out its generative language endpoints, producing an unannounced escalation in key privileges that left some projects readable by anyone who could view site source. During a November scan, Truffle flagged 2,863 live keys across commercial and public-sector accounts, including internal projects belonging to cloud-first enterprises.
Immediate Risk Profile
An exposed key no longer merely allowed third-party embedding of maps or metered video lookups; it could be leveraged to query or extract uploaded documents, cached conversation context, and other assets stored via the generative endpoint. Attackers able to scrape client-side HTML could both retrieve confidential material and run API calls that consume quota, creating the dual risk of data disclosure and unexpected billing. Truffle reproduced an abuse scenario that mirrored a billing event of roughly $55,444, illustrating the tangible financial harm that can follow credential misuse.
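The privilege drift can be made concrete with a short sketch. The endpoint paths below reflect the public shapes of the Maps JavaScript and Generative Language APIs, and the key is a fake placeholder; the point is that a single browser-visible key string passes as a `key=` query parameter to both services.

```python
# Sketch only: the key below is a fake placeholder, and the endpoint paths
# are illustrative of the reported behavior rather than a working exploit.
leaked_key = "AIzaFAKEFAKEFAKEFAKEFAKEFAKEFAKEFAKE"  # scraped from page source

# Original, intended use: metering a public Maps embed.
maps_url = f"https://maps.googleapis.com/maps/api/js?key={leaked_key}"

# Unintended use after the privilege drift: the same key authenticates a
# Generative Language API (Gemini) request against the project's assets.
gemini_url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/gemini-pro:generateContent?key={leaked_key}"
)

print(maps_url)
print(gemini_url)
```

Because the key travels as a plain query parameter in both cases, nothing distinguishes a legitimate embed from an attacker replaying the scraped string against the generative endpoint.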
Broader Threat: Model Extraction and Competitive Abuse
Google has separately reported coordinated campaigns that repeatedly query Gemini at scale to collect outputs for training knockoff models. If an adversary combines mass-query tactics with leaked keys, the same exposed credentials can be used not only to harvest customer data but also to issue hundreds of thousands of prompts that produce material useful for cloning the model. Such combined abuse amplifies commercial and legal risk: incumbents face both customer data breaches and erosion of intellectual property value when model outputs are systematically harvested.
Response and Remediation Actions
After disclosure, Google blocked the flagged keys from accessing the generative endpoint and acknowledged the behavior as a bug, while continuing to work on a systemic fix beyond the immediate mitigations. Administrators are advised to enumerate any keys tied to the Generative Language API, rotate any public or unrestricted keys, and apply tighter API-target and HTTP-referrer restrictions. Defenders should also add telemetry to detect mass-query patterns, enforce stricter rate limits, and consider output watermarking or poison-injection strategies to reduce the utility of harvested responses. Google has indicated that future controls will default new AI-created keys to a Gemini-only scope and block detected leaked keys, but those measures do not retroactively secure legacy keys.
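The mass-query telemetry recommended above can be as simple as a per-key sliding window. The sketch below is a minimal illustration with made-up thresholds (the class name, `limit`, and `window` values are assumptions, not any Google mechanism): flag any key that exceeds a request budget inside a rolling time window.

```python
from collections import defaultdict, deque

# Minimal telemetry sketch (thresholds are illustrative assumptions):
# flag any API key that issues more than `limit` requests inside a
# `window`-second sliding window -- the mass-query pattern described above.
class MassQueryDetector:
    def __init__(self, limit=1000, window=60.0):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # key -> timestamps of recent requests

    def record(self, api_key, timestamp):
        """Record one request; return True if the key should be flagged."""
        q = self._hits[api_key]
        q.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit

# Tiny budget for demonstration: 5 requests per 60 seconds.
detector = MassQueryDetector(limit=5, window=60.0)
flags = [detector.record("AIzaFAKE", t) for t in range(10)]  # 10 hits in 10s
```

In production this logic would sit behind the API gateway and feed rate-limit enforcement, but the windowing idea is the same.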
Strategic Implications
This event underscores a recurring governance gap: cloud primitives designed for public embedding were repurposed without provider-to-developer notification, producing service-level privilege drift across accounts. Enterprises with mixed client-side integrations must now treat previously benign tokens as sensitive credentials in the same class as service account secrets and API tokens. Security teams should incorporate automated scans for public key leakage into their CI/CD pipelines, augment monitoring to spot mass-query extraction attempts, and extend threat models to include generative-model data exfiltration, IP theft, and quota-draining billing abuse.
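A CI/CD leak scan of the kind described above can key off the widely known shape of Google API keys (`AIza` followed by 35 URL-safe characters). The sketch below is illustrative (the function name, paths, and fake key are assumptions): walk the build tree and fail the pipeline if anything key-shaped appears.

```python
import re
from pathlib import Path

# The regex matches the well-known Google API key shape; the sample key
# below is fake and exists only to exercise the scanner.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan_tree(root):
    """Yield (path, key) for every apparent Google API key under root."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in GOOGLE_KEY_RE.finditer(text):
            yield path, match.group()

# Example: scanning one HTML snippet the way a pipeline step scans files.
sample = (
    '<script src="https://maps.googleapis.com/maps/api/js?key=AIza'
    + "B" * 35 + '"></script>'
)
leaked = GOOGLE_KEY_RE.findall(sample)
```

A pipeline step would call `scan_tree(".")` and exit nonzero on any hit, forcing rotation before deploy rather than after discovery in a public archive.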