
Smack Technologies Raises $32M to Build Military-Focused Models
Context, Technical Approach, and Procurement Implications
Smack Technologies announced a $32 million financing round to accelerate development of models aimed at defense mission planning and decision support. Company leadership blends recent special‑operations experience with commercial product engineering, signaling a deliberate focus on tools that support commanders' workflows rather than autonomous weapons employment. Smack describes its models as intended for sketching courses of action, deconfliction, and ranking options — explicitly for human‑in‑the‑loop planning tasks rather than sensor fusion or direct weapons control.
Technically, Smack trains models inside simulated war games where expert feedback is encoded as reward signals; that regimen echoes the reinforcement-learning approaches used to train high-performance game-playing agents, but it is constrained by domain-specific adjudication and bespoke scenario datasets. The company says it is investing heavily in scenario-driven training data, provenance tracking, and explainability so outputs can be audited by commanders and acquisition officials during evaluations.
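At its simplest, the training regimen described above amounts to folding an expert adjudicator's scores into a planner's value estimates as scalar rewards. The sketch below is purely illustrative, a toy bandit-style loop under invented assumptions; the names (`adjudicate`, `Planner`) and scoring rules are hypothetical and are not drawn from Smack's actual system.

```python
import random

def adjudicate(option: str) -> float:
    """Stand-in for expert feedback: score a proposed course of action.
    Here, shorter routes that avoid the 'contested' zone score higher."""
    reward = 1.0
    if "contested" in option:
        reward -= 0.8                              # penalize risky routing
    reward -= 0.1 * option.count("waypoint")       # penalize longer routes
    return reward

class Planner:
    """Toy bandit-style planner: keeps a running value estimate per option
    and increasingly proposes the options that earned higher rewards."""
    def __init__(self, options):
        self.values = {o: 0.0 for o in options}
        self.counts = {o: 0 for o in options}

    def propose(self, epsilon=0.2):
        if random.random() < epsilon:                   # explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)    # exploit

    def update(self, option, reward):
        self.counts[option] += 1
        n = self.counts[option]
        # incremental mean: standard bandit value update
        self.values[option] += (reward - self.values[option]) / n

random.seed(0)
options = [
    "waypoint A -> waypoint B (contested)",
    "waypoint A -> waypoint C -> waypoint B",
]
planner = Planner(options)
for _ in range(200):                                    # simulated episodes
    choice = planner.propose()
    planner.update(choice, adjudicate(choice))

best = max(planner.values, key=planner.values.get)
```

A production system would replace the hand-written scoring rule with human adjudication inside a high-fidelity simulation and the bandit update with policy optimization over a large model, but the core loop is the same: propose, adjudicate, reward, update.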
This financing and product positioning arrive alongside a high-profile rupture between the Department of Defense and Anthropic over contract terms for a roughly $200 million award. Sources say the Pentagon asked four leading model providers to accept expanded operational rights (deeper runtime access inside secure enclaves, richer telemetry, and fewer vendor usage constraints) so models can operate in time-sensitive, classified workflows. Anthropic has publicly resisted some of those demands, citing non-negotiable safety commitments such as bans on autonomous weapons and safeguards against mass domestic surveillance.
The dispute crystallizes a core tension: defense buyers pressing for broader access, provenance, and forensic telemetry to make model outputs actionable and auditable inside classified systems, while vendors insist on human‑in‑the‑loop limits and contractual protections to avoid being complicit in downstream uses. Technical factors complicate the tradeoff: Anthropic's recent Claude Opus 4.6 and similar large models expand context windows and multi‑step agentic abilities, making them technically useful for extended operational workflows, yet those same capabilities raise vendor legal and reputational concerns when coupled with broad runtime access.
For acquisition, the immediate consequence is an enlarged supplier set and a potential shift in how program offices draft statements of work. Boutiques like Smack can credibly bid for low- to mid-value task orders by offering narrowly scoped, auditable mission-planning modules that accept defense-specified telemetry and hosting constraints; primes, by contrast, offer systems-level integration and heavier assurance pedigrees but may move more slowly. Contracting officers face a tradeoff between speed and the systems engineering, liability, and sustainment benefits that larger integrators provide.
Policy and governance effects are likely to follow quickly: lawyers and acquisition officials are said to be drafting standard terms around provenance, mandatory logging, third-party audits, red-teaming, and explicit human-authorization requirements for high-assurance deployments. If the Pentagon secures broader access by contracting with vendors willing to concede more rights, operational adoption could accelerate, but at the cost of placing heavier compliance, liability, and auditing burdens on suppliers. Conversely, a paused or rescinded award would give boutique vendors a longer runway as the DoD reworks its procurement templates.
Risk remains acute. Academic and internal experiments show that language models can amplify escalation dynamics in simulated crises and remain brittle under adversarial conditions; without rigorous verification, state estimation, and secure sensor inputs, mission‑planning agents could produce misleading or hazardous recommendations. That reality underscores the need for mandatory human‑in‑the‑loop controls, adversarial testing, and forensic telemetry when models inform high‑stakes decisions.
In sum, Smack's funding is simultaneously a market signal and a test case: it demonstrates buyer appetite for niche vendors able to accept defense constraints, while also surfacing the unresolved governance and assurance questions that will shape whether these boutique solutions scale in operational environments.
Recommended for you

Advanced Machine Intelligence raises $1B to commercialize world models
Advanced Machine Intelligence closed just over $1 billion at a roughly $3.5 billion valuation to commercialize physics‑grounded world models, with Yann LeCun leading scientific direction toward manufacturing, robotics and biomedical pilots. The deal arrives as multiple labs and startups—some anchored by hardware and cloud partners—secure large rounds, revealing a broader, heterogeneous venture wave into alternative model architectures and strategic compute partnerships.

OpenAI tapped to build voice-to-command interface for U.S. military drone swarms
OpenAI is collaborating with two defense contractors chosen by the Pentagon to build a spoken-language interface that converts commanders’ vocal orders into machine-readable commands for drone swarms, with OpenAI’s role confined to translation rather than flight, targeting, or weapons control. The effort comes as the Defense Department presses commercial AI vendors to make models usable inside more secure and even classified networks, intensifying procurement, supply-chain and vendor-lock concerns while raising demands for hardened hosting, provenance tracking and auditability.

Anthropic recruits weapons-policy expert to curb model misuse
Anthropic is hiring a specialist to harden model guardrails against chemical, radiological and explosives misuse while OpenAI has advertised a higher-paid, adjacent role. This signals a rising safety talent arms race that will reshape procurement, regulation, and vendor trust across the AI ecosystem.
Larus Technologies wins $8.3M DND IDEaS award to field MAABI
Larus Technologies secured an $8.3M IDEaS Test Drive award to advance MAABI for tactical decision support with the Canadian Army. The contract funds AI/ML integration into ISR-led planning and will move MAABI into allied training events.
Hark Rewires Consumer AI with Model–Hardware Stack
Hark, backed by $100M from founder Brett Adcock, is building tightly coupled multimodal models and custom interfaces to push consumer-grade persistent intelligence. The startup plans a GPU ramp in April and has hired design lead Abidur Chowdhury, signaling a bet on productized AI beyond apps, though that timetable is exposed to industry-wide memory (DRAM) supply and allocation constraints that could affect April capacity targets.

Overland AI raises $100M to accelerate off‑road autonomous vehicles for U.S. forces
Seattle startup Overland AI closed a $100 million financing round led by 8VC to expand production and delivery of its off‑road autonomous vehicles for U.S. military units and other agencies. The cash injection follows completion of a DARPA resilience program and underpins plans to move autonomy from testing into operational use in complex, GPS‑denied environments.
Runway Raises $315M Series E at $5.3B to Accelerate World-Model Ambitions
Runway closed a $315 million Series E that values the company at about $5.3 billion and bankrolls an aggressive push to build larger, more capable world models for applications beyond media. The round, led by General Atlantic with strategic participation from hardware and software partners, also underlines investor confidence in Runway’s advances in video generation and its infrastructure plans to scale compute-intensive research and products.

Shield AI pursues up to $1 billion financing, targeting roughly $12 billion valuation
Shield AI is negotiating a large private financing that could inject as much as $1 billion and lift the company's valuation to about $12 billion after the round. The discussions remain fluid and any final terms — including price and investor composition — could change before a deal closes.