
Anthropic-backed PAC injects cash behind Alex Bores after attack by pro-AI super PAC
A newly active safety‑aligned committee has stepped into the New York 12th district contest after Assembly member Alex Bores drew a series of outside attack ads. The group, Public First Action, is running targeted outreach for Bores on the strength of a multimillion‑dollar infusion from Anthropic, which disclosed a roughly $20 million payment into political vehicles backing federal guardrails on advanced AI.
The coordinated counterpush follows a concentrated ad blitz by Leading the Future, an industry‑aligned coalition whose benefactors include venture investors, AI founders and private‑equity backers. That coalition has marshaled large resources (industry reporting places pro‑industry fundraising in the low hundreds of millions) and has already bought more than $1 million in ads targeting Bores' candidacy.
Bores is the sponsor of proposed disclosure and incident‑reporting requirements that would compel major developers to reveal safety practices and report serious misuse. That bill and others like it have become focal points for electoral spending: outside committees are not just backing candidates; they are investing in one of two governance templates, tighter, enforceable transparency rules versus industry‑preferred, nationally coordinated frameworks that emphasize certification, interoperability and voluntary standards.
The New York contest is being read as a proxy fight. On one side, safety‑oriented donors such as Anthropic and allied advocates are pushing lawmakers toward statutory disclosure, reporting thresholds and enforcement tools. On the other, investor‑led coalitions and founders are financing a push for national preemption, infrastructure grants and certification regimes that would reduce the compliance burden for large providers.
Context: separate reporting shows a related investor‑led PAC raised roughly $125 million during 2025 and carried about $70 million into the new year, underscoring the scale of private political capital now focused on AI policy. Some high‑profile individuals tied to pro‑industry efforts have also made large personal donations, even where their companies (notably OpenAI) have chosen to limit or avoid corporate donations to super PACs.
Anthropic’s corporate posture extends beyond direct political spending. The company has used national marketing—most prominently a Super Bowl spot highlighting an ad‑free positioning for its Claude assistant—to frame trust and privacy as competitive advantages. That public stance makes its political donations more visible and links commercial messaging to legislative advocacy.
Tactically, Public First Action is concentrating its budget on targeted voter outreach in NY‑12 and on messaging that ties Bores’ record to concrete safety reforms. The industry coalition’s ads emphasize perceived vulnerabilities in Bores’ candidacy while warning against policies that could slow innovation or fragment the market with state‑by‑state rules.
Implications: expect more of these contests to attract dual funding streams—corporate donations to advocacy vehicles and large personal or investor contributions to pro‑industry committees—turning single races into test cases for national rule choices. For candidates, the calculus will become more about which governance model they implicitly endorse; for advocates, PAC spending is now an instrument for shaping regulatory design as much as electoral outcomes.
- How money moves: single large corporate gifts can rapidly underwrite focused ad buys and defensive outreach.
- Policy at stake: disclosure mandates and incident reporting versus industry‑oriented national frameworks.
- Broader theater: high fundraising totals from investor coalitions mean federal races will increasingly serve as laboratories for AI governance messaging.
The immediate effect of Anthropic’s backing is to blunt the reach of targeted attack ads and give Bores a stronger ground game; the longer arc is a deepening politicization of AI policy where outside spending will be deployed to shift the shape of federal rules, not just to elect or defeat individual candidates.