
G42 and Cerebras to deliver 8 exaflops of AI compute infrastructure in India
G42–Cerebras supercomputer lands in India
G42 has partnered with Cerebras to deploy an on‑shore installation delivering about 8 exaflops of AI processing capacity in India. The platform will be hosted and governed to meet Indian data‑residency and sovereignty requirements, with domestic governance and academic participation including Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and the Centre for Development of Advanced Computing (C‑DAC).
The system is being targeted at universities, government laboratories and small‑to‑medium enterprises, with prioritized access intended to let large‑model training and inference operate fully within India’s legal perimeter. Operational specifics — precise service tiers, pricing, scheduling and whether offerings will be purely managed or include on‑premises options — were not disclosed at the summit, leaving questions about how broad access will be and what booking or subsidization models will be used.
The announcement came amid a broader New Delhi summit push that set an ambitious national AI investment agenda and catalyzed a near‑term market reaction: listed firms tied to data‑center equipment, power, cooling and colocation recorded an estimated combined market‑capitalization lift of roughly $4 billion over the week. That market response reflects investor belief that political signaling, plus commercial commitments from anchor tenants, can accelerate procurement and capacity plans.
Complementary capacity programs were highlighted at the event — including Yotta’s large Nvidia‑backed campus plans, an OpenAI–Tata initial 100 MW project with a roadmap toward 1 GW, and AMD–TCS Helios reference‑rack collaborations targeting up to 200 MW — creating a clustered demand signal for accelerators, servers and high‑density racks.
Market participants and analysts cautioned that converting summit momentum into durable capex will require binding memoranda, land allotments, procurement or tariff adjustments and concrete supply agreements — not just announcements. Local power‑system upgrades, grid interconnects and cooling‑infrastructure suppliers are among the immediate beneficiaries if projects reach construction; they also represent near‑term bottlenecks if interconnection and permitting lag.
Global accelerator and HBM shortages, packaging and test constraints, and multi‑year allocation cycles remain primary execution risks. Those supply issues, combined with permitting and land timelines, make a phased, multi‑site rollout more likely and suggest meaningful portions of planned capacity will come online over 24–36 months.
Beyond throughput, the G42–Cerebras deployment is being positioned as a sovereignty instrument: keeping sensitive datasets and model weights inside India while leveraging foreign hardware and system integration. The cross‑jurisdictional arrangement — a UAE integrator and a U.S. silicon vendor operating under Indian governance — is increasingly the template for sovereign compute projects that balance capability with domestic control.
For researchers and smaller teams, on‑shore exascale capability shortens experiment cycles and reduces compliance friction for sensitive workloads. For markets and policymakers, the core challenge will be ensuring announced intent translates to signed contracts, supply commitments and usable capacity rather than transient sentiment that inflates valuations in the near term.
Recommended for you

AMD deepens India push with TCS to deploy Helios rack-scale AI infrastructure
AMD and Tata Consultancy Services will roll out AMD's Helios rack reference design across India in a partnership that packages AMD's hardware stack with TCS's local systems‑integration skills, targeting up to 200 megawatts of aggregated AI compute capacity. The program aims to shorten procurement-to-live timelines but faces the same execution risks seen in other large-scale AI builds — municipal permitting, transmission and substation upgrades, chip and packaging supply limits, and the potential for idle capacity if build‑out outpaces verified demand — which could stretch deliveries into a 24–36 month window.
Australian AI infrastructure firm wins $10B financing to accelerate data‑center buildout
Firmus Technologies closed a $10 billion private‑credit facility led by Blackstone‑backed vehicles and Coatue to underwrite a rapid roll‑out of AI‑optimized campuses in Australia. The debt package targets deployment of Nvidia accelerators and up to 1.6 gigawatts of aggregate IT power by 2028, embedding the project in a wider global wave of specialized, high‑power data‑center financing.

Meta breaks ground on $10 billion Indiana campus to expand AI compute
Meta has begun building a roughly $10 billion data‑center campus in Indiana to scale GPU‑dense compute for next‑generation AI models. The ground‑breaking fits into a broader push — backed by multi‑year supplier commitments and much larger capex plans — but raises familiar execution questions around power, permitting and hardware supply.
China unveils five-year push to place computing infrastructure in orbit
Beijing has announced a state-led five-year program, headed by its principal aerospace contractor CASC, to move portions of national cloud and edge computing into Earth orbit. The plan arrives as commercial actors (notably a recent SpaceX regulatory filing) and academic teams propose competing orbital compute architectures, intensifying technical, traffic-management, spectrum and governance challenges.

India Eyes $200B in AI Investment Over Two Years
India plans to attract roughly $200 billion of AI-related capital within the next two years, the government says. The strategy centers on a five-tier stack covering applications, models, compute, and data-center and network build-out, plus energy provisioning to support large-scale deployment.

OpenAI teams with Tata to build large-scale AI data centres in India
OpenAI has entered a strategic collaboration with the Tata Group and Tata Consultancy Services to develop major AI-focused data centre capacity in India, starting with a 100 MW facility with scope to scale to 1 GW. The project implies multi‑billion dollar infrastructure spending and strengthens onshore compute options for AI deployment across Tata’s customer base.

Bell partners with Hypertec to deepen Canada’s sovereign AI infrastructure
Bell and Hypertec announced a Canada-hosted sovereign AI offering that pairs domestic GPU systems with a nationwide data-centre platform. The deal aims to keep sensitive workloads and regulated data inside Canada, shifting procurement and risk models for public sector and enterprise AI deployments.

AI’s financialisation accelerates as tech giants commit $700bn to compute infrastructure
Five major US technology firms are planning roughly $700bn of capital expenditure this year, catalysing a market that treats compute capacity as collateral and spawning a wider set of financing vehicles — from bonds and CMBS to bespoke structured credit — while concentrated demand, permitting snarls and underutilisation risk sharpen credit and regulatory attention.