
NVIDIA-backed trial shows AI data centers cut power on demand
Context and chronology
Over a five-day exercise in December 2025, a London facility ran coordinated responses to simulated grid events using control software provided by Emerald AI, in partnership with NVIDIA, National Grid, Nebius and the Electric Power Research Institute. Organizers logged each event, the control signal, and workload continuity to verify that core processing stayed online while power consumption shifted, and they agreed to share anonymized data with regulators and planners. The exercise was designed both to measure discrete, short-duration responses and to test sustained trimming across predictable peak windows, building an auditable dataset intended for industry use and policy review.
Operational performance
Measured outcomes included a top-end reduction of 40% in facility draw and a separate rapid-response test in which load shrank 30% within 30 seconds, demonstrating both deep and fast curtailment modes. The site also executed longer-duration trimming, sustaining roughly 10% lower consumption across multi-hour periods tied to predictable spikes such as sporting-event halftimes. Engineers confirmed that prioritized workloads continued without observable service disruption by shifting or pausing noncritical tasks, showing that flexibility can be concentrated on fungible compute rather than imposed through wholesale service suspension. These results frame a proof of concept that software orchestration can deliver measurable, monetizable grid services from GPU-dense facilities.
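The orchestration approach described above can be illustrated with a minimal sketch. This is not Emerald AI's actual software; it is a hypothetical greedy scheduler that pauses the lowest-priority deferrable jobs until the facility's estimated draw meets a grid-requested power cap, while protected workloads (such as latency-sensitive inference) keep running. All job names, power figures and priorities are invented for illustration.

```python
# Hypothetical sketch of priority-based curtailment: pause the lowest-priority
# deferrable jobs until total facility draw meets a grid-requested power cap.
# Names, power figures and priorities are illustrative only.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_kw: float   # estimated draw while running
    priority: int     # higher = more important
    deferrable: bool  # can be paused or shifted without service impact
    paused: bool = False

def curtail(jobs, target_kw):
    """Pause deferrable jobs, lowest priority first, until total draw <= target_kw.

    Returns the list of jobs that were paused."""
    paused = []
    # Only jobs flagged as deferrable may be paused; start with the least important.
    for job in sorted((j for j in jobs if j.deferrable), key=lambda j: j.priority):
        if sum(j.power_kw for j in jobs if not j.paused) <= target_kw:
            break
        job.paused = True
        paused.append(job)
    return paused

jobs = [
    Job("inference-serving", 400.0, priority=10, deferrable=False),
    Job("training-checkpointed", 300.0, priority=5, deferrable=True),
    Job("batch-embedding", 200.0, priority=3, deferrable=True),
    Job("data-prep", 100.0, priority=1, deferrable=True),
]

baseline_kw = sum(j.power_kw for j in jobs)            # 1000 kW baseline
paused = curtail(jobs, target_kw=0.7 * baseline_kw)    # emulate a 30% reduction request
remaining_kw = sum(j.power_kw for j in jobs if not j.paused)
```

In this sketch a 30% reduction request is met by pausing the two least important batch jobs, leaving the protected serving workload untouched, which mirrors the trial's reported pattern of concentrating flexibility on fungible compute.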
Market and regulatory implications
Partners described the exercise as a template for larger deployments and pointed to a planned 100 MW flexible AI facility in Virginia as the next application of these controls. NVIDIA argued such demand-side tools can relieve some immediate upgrade pressure on networks and accelerate connection approvals, while National Grid representatives said verified flexibility could shorten permitting timelines and lower upfront reinforcement costs. If adopted at scale, the model could create multiple revenue streams, from ancillary services to capacity markets, and alter how operators value site selection, procurement and financing.
Broader sector context and caveats
The trial's promise sits alongside larger industry frictions: rapid AI-capacity builds have produced a wave of new projects whose utilization profiles remain uneven, and some markets have already seen permitting and community resistance that delayed or reshaped developments. Industry monitors estimate roughly $64 billion of U.S. data center projects have experienced delays or cancellations tied to zoning and grid concerns, underscoring that operational flexibility alone does not remove planning, financing and supply-chain barriers. Hardware and supply constraints, in packaging, test and specialized accelerator production, plus concentrated financing exposure mean that expected efficiency gains and flexible fleets may arrive slower or less uniformly than vendors claim.
Siting, system-operation trade-offs and policy signals
System operators have stressed a complementary point: very large, mostly inflexible loads complicate balancing and can raise costs for all consumers, prompting suggestions to site the largest campuses where curtailed renewable output can be absorbed and to use locational signals or conditional connection agreements. In practice, that could put new commercial pressure on operators to accept stricter interconnection terms, invest in on-site storage, or contract time-aligned renewables to qualify for connection or favorable tariffs. The London trial shows what operators can offer in response, but realizing the full value requires regulatory acceptance of demand-side performance, clear measurement standards, and contractual pathways to monetize curtailments.
Implications for markets and timelines
If a meaningful share of GPU capacity becomes reliably dispatchable, intraday price volatility and some peak capacity procurement needs would likely fall, pressuring peaker-plant economics and shifting investment toward orchestration and software-defined flexibility. However, this transition depends on aligning hardware supply, verified operational performance, and public‑sector rules: without that alignment the financial benefits may be uneven and developers could still face long permitting and underwriting timelines. The London results are a necessary proof point, but not by themselves a guarantee of rapid, systemwide transformation.