
OpenAI teams with Tata to build large-scale AI data centres in India
A new alliance between OpenAI and the Tata Group will fund and build sizeable compute facilities in India aimed at hosting high‑scale AI workloads. The initial build is planned at 100 MW, with a roadmap that could scale to 1 GW over time.
Tata Consultancy Services will lead the technical delivery and integration work, positioning its services to embed advanced models into enterprise customers’ operations. TCS’s emerging role in high‑density rack integration, visible in recent collaborations with hardware vendors that pair validated rack designs with on‑site systems integration, should speed installation and reduce deployment risk for the first phases.
Rather than a single monolithic campus, the broader 1 GW target is more plausibly delivered as a phased, multi‑site program sited across Tata’s existing campuses and partner locations. That delivery model mirrors other recent multi‑site approaches in India that aggregate capacity across several campuses to manage grid, permitting and land constraints while enabling incremental service roll‑outs to enterprise and third‑party clients.
Financial scale is substantial: industry benchmarks put a build at the 1 GW target in the high single‑digit to low double‑digit billions of dollars. Practical execution will hinge on constrained accelerator supply, high‑bandwidth memory availability, and the architecture choices for racks and interconnects — factors that determine how much of the build is optimised for training versus inference workloads.
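The headline figures are simple arithmetic over an assumed unit cost. A minimal sketch, assuming a per‑megawatt build cost of roughly $8M–$12M (an illustrative range consistent with the article's stated total, not a figure from the announcement):

```python
# Back-of-envelope cost estimate for the planned build-out.
# The per-MW unit costs below are illustrative assumptions,
# not figures disclosed by OpenAI or Tata.

def estimated_cost_usd_bn(capacity_mw: float, cost_per_mw_usd_mn: float) -> float:
    """Capacity (MW) x unit cost ($M per MW) -> total cost in $B."""
    return capacity_mw * cost_per_mw_usd_mn / 1_000

# Assumed range of ~$8M-$12M per MW of AI-ready capacity
# (facility, power, cooling, racks and integration).
low = estimated_cost_usd_bn(1_000, 8)    # 1 GW at $8M/MW
high = estimated_cost_usd_bn(1_000, 12)  # 1 GW at $12M/MW
print(f"1 GW target: ${low:.0f}B-${high:.0f}B")

# The announced 100 MW first phase on the same assumptions:
print(f"100 MW phase: ${estimated_cost_usd_bn(100, 8):.1f}B-"
      f"${estimated_cost_usd_bn(100, 12):.1f}B")
```

On those assumptions the full 1 GW build lands at roughly $8B–$12B, and the 100 MW first phase at around $1B — the scale that makes anchored commercial commitments and phased delivery essential.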
Energy provisioning, phased capacity expansion and integration with Tata’s software and systems services are core implementation themes. The partnership will need to coordinate OpenAI’s model and platform requirements with site‑level realities such as substations, transmission upgrades, cooling design and municipal permitting — issues that have extended timetables for similar projects in the region.
- Planned first phase: 100 MW build to host model training and inference.
- Expansion possibility: up to 1 GW, implying a multi‑billion dollar investment executed across phases and likely across multiple sites.
- TCS role: lead technical delivery, rack integration, and enterprise roll‑outs, leveraging systems‑integration experience.
The announcement also lands against a broader national push to attract heavy AI‑linked investment and follows other industry moves — including vendor‑led rack reference designs and operator commitments to GPU stacks — that collectively shape procurement, supply and timing. For customers, onshore compute promises lower latency and easier compliance; for suppliers, it creates obvious demand signals for chips, racks, power and cooling capacity.
However, the path from headline capacity to operational clusters faces familiar bottlenecks: multi‑year GPU allocation cycles, permitting and grid upgrades, and the need for anchored commercial commitments to derisk large builds. Those constraints suggest meaningful portions of planned capacity could take 24–36 months to come online depending on site selection and component availability.
Recommended for you

AMD deepens India push with TCS to deploy Helios rack-scale AI infrastructure
AMD and Tata Consultancy Services will roll out AMD’s Helios rack reference design across India in a partnership that packages AMD’s hardware stack with TCS’s local systems‑integration skills, targeting up to 200 megawatts of aggregated AI compute capacity. The program shortens procurement-to-live timelines but faces the same execution risks seen in other large-scale AI builds — municipal permitting, transmission and substation upgrades, chip and packaging supply limits, and the potential for idle capacity if build‑out outpaces verified demand — which could stretch deliveries into a 24–36 month window.

Nvidia deepens India push with VC ties, cloud partners and data‑center support
Nvidia has stepped up engagement in India by partnering with local venture funds, regional cloud and systems providers, and making model and developer tooling available to thousands of startups — moves meant to accelerate India‑specific AI products while anchoring demand for Nvidia hardware. Those commercial ties sit alongside New Delhi’s $200 billion AI investment push and large private data‑center commitments, sharpening near‑term demand for GPUs but raising vendor‑concentration and infrastructure risks.

NTT Global Data Centers to Scale Capacity to 4 GW, Targeting AI Demand
NTT Global Data Centers plans to deploy roughly 4 GW of nameplate IT power across 34 projects within about two years, accelerating a shift to GPU‑dense, high‑power facilities. The program sharpens near‑term pressure on interconnection, transformer and cooling supply chains and forces an energy‑strategy choice—embedded generation, contracted renewables, or hybrid solutions—that will determine usable capacity and local political risk.

OpenAI expands into Indian higher education with campus partnerships
OpenAI has formed collaborations with six Indian universities and education platforms to embed its classroom-focused tools and training into campus workflows, targeting more than 100,000 learners and staff within a year. The push comes as India emerges as one of the largest ChatGPT markets — with roughly 100 million weekly users — and raises fresh questions about pedagogy, governance and competitor responses.

G42 and Cerebras to deliver 8 exaflops of AI compute infrastructure in India
Abu Dhabi’s G42 and U.S. chipmaker Cerebras will install an on‑shore supercomputing system in India providing roughly 8 exaflops of AI processing capacity under Indian hosting and data‑sovereignty rules. The announcement, made at a high‑profile Delhi AI summit that also lifted related infrastructure stocks (an estimated ~$4 billion combined market‑cap gain for listed suppliers), signals strong political and commercial momentum — but delivery hinges on signed supply, land and power agreements, permitting and constrained accelerator allocations.

OpenAI to Scale London Into Major Research Hub
OpenAI is shifting substantial research capacity to London, intensifying competition for UK talent and increasing local compute and infrastructure demand. This move centers safety, reliability, and performance evaluation work for models including Codex and GPT-5.2, reshaping the regional research landscape.

OpenAI Internal Data Assistant Scales Analytics Across Teams
OpenAI built an internal, natural‑language data assistant that turns prompts into charts, dashboards and written analyses in minutes — a tool two engineers shipped in three months using roughly 70% Codex‑generated code — and which the company now uses broadly to compress analyst workflows. The project both exemplifies and benefits from emerging platform primitives (persistent state, hosted runtimes, Skills) that enable agentic workflows, but realizing the productivity gains at scale requires disciplined data governance, provenance, and runtime safety to avoid errors, leakage, or vendor‑lock‑in.

Nebius to build 240MW AI-focused data centre near Lille, France
Amsterdam-based Nebius will convert a former Bridgestone tyre site in Béthune into an approximately 240 MW AI-focused data centre campus, with phased capacity beginning from late summer and about half expected online by end-2026. The project both reflects and amplifies a market-wide push — by specialist operators and hyperscalers alike — that is heightening competition for GPUs, grid connections and contractor capacity across Europe.