
Google DeepMind's Demis Hassabis urges urgent research into AI risks
AI summit pushes safety research to the top of the agenda
At the international AI Impact Summit in New Delhi, Demis Hassabis made a concerted appeal for expanded scientific work on the highest‑consequence risks posed by advanced artificial intelligence.
The meeting brought together senior figures from more than a hundred countries alongside major industry executives and researchers, including Sundar Pichai, Sam Altman, Dario Amodei, Yann LeCun and Arthur Mensch, underscoring the commercial as well as diplomatic stakes.
Speakers identified two practical threat vectors that should guide immediate research priorities: malicious use by hostile actors exploiting improved generative capabilities, and autonomous or agentic systems operating beyond reliable human oversight.
New Delhi used the forum to press for concrete instruments — procurement conditions, compute‑scaling plans, data‑residency requirements and formal safety‑verification regimes — arguing that market access and buying power can steer vendor behaviour.
OpenAI representatives at the summit highlighted the platform's rapid adoption in India, estimated at roughly 100 million weekly ChatGPT users, along with recent pricing and access moves that give regulators leverage when negotiating conditional market terms.
A number of delegates, including Mistral’s chief executive Arthur Mensch, linked technical vulnerability to market structure, warning that concentrated control over core tooling and distribution creates gatekeeping incentives and systemic fragility.
Those commercial and security concerns were reinforced by a recent multinational assessment cited in side sessions, which documented rapid capability gains in coding, mathematics and task automation alongside brittle failures, operational security lapses and real incidents that lower the bar for large‑scale abuse.
The scale of global infrastructure investment — estimated around $1.5 trillion in 2025 and projected to grow substantially — was repeatedly referenced as a driver of concentration that could outpace regulatory responses if left unchecked.
Political divisions surfaced quickly: many delegations sought a coordinated communiqué or shared technical standards, but the US representatives signalled resistance to top‑down global governance, preferring lighter‑touch or national approaches to regulation.
Operational outcomes under discussion included mandatory pre‑release testing and adversarial red‑teaming, interoperable evaluation frameworks, mandatory provenance and audit trails for high‑risk systems, and procurement rules that favour auditable, non‑exclusive model access.
Industry responses presented at the summit ranged from proposals for local hosting partnerships and on‑device inference to cryptographic attestation, dataset lineage requirements and audit‑first deployment patterns intended to meet buyer and regulator expectations.
Education and workforce themes ran alongside regulation: speakers argued that technical training and human judgement will remain competitive advantages as automation reshapes routine work, and that public procurement in education should guard against pedagogical risks.
- Operational outcome expected: a shared communiqué is likely when the summit closes, though its binding force and enforcement mechanisms remain uncertain.
- Coordination gap: US resistance to centralized oversight raises the prospect of fragmented, regional rule sets rather than a single global pact.
- Research and market emphasis: more funding and collaborative projects for adversarial testing, control techniques, and interoperable safety tooling are now likely priorities, alongside procurement‑driven vendor concessions.