
Brookfield forms Radiant through Ori acquisition to lease AI chips
Brookfield converts Ori assets into a leased-AI-accelerator business amid a multi-track surge in capital-for-compute
Brookfield Asset Management has acquired cloud-compute specialist Ori and reorganized the team into a new unit called Radiant, positioning it to lease access to specialized processors for large-scale AI training and inference rather than simply selling software or providing orchestration services.
The transaction folds Ori’s engineering talent, orchestration software and deployment know-how into a capital-intensive delivery model: Radiant will purchase silicon and associated systems at scale and supply customers on a lease or managed-capacity basis. Target customers include governments, hyperscalers and enterprises that want predictable, rapid access to accelerators.
Brookfield did not disclose financial terms. The company framed Radiant’s thesis around operational capability and recurring contractual revenue: pooled purchasing power, integrated deployment, and contractually predictable supply and service levels.
Radiant’s formation occurs against a broader market backdrop in which institutional capital is being paired with compute capacity in multiple ways. Reports of an Apollo-linked financing to acquire Nvidia accelerators for xAI illustrate sponsor‑and‑credit‑led fleet financings, while recent rounds backing inference‑focused hardware startups — notably a reported roughly $250 million raise for Dutch startup Axelera led by Innovation Industries with participation from BlackRock and Samsung Catalyst — show venture and asset managers also funding chip design and early manufacturing.
Those two approaches reflect different commercial bets and risk profiles. Fleet‑leasing vehicles concentrate counterparty and vendor risk around a small set of accelerator suppliers (often Nvidia) and hinge on collateral, resale and remarketing mechanics for lenders; by contrast, ASIC/design plays like Axelera expose backers to product‑validation, foundry and packaging risk but offer potential differentiation on energy and cost per inference if manufacturability and software integration succeed.
Across both tracks, non‑silicon elements — compilers, runtimes, orchestration, benchmarks and system integration — are decisive for customer adoption. The Axelera example reinforces that investors expect hardware startups to pair chips with robust software and repeatable manufacturing milestones; Radiant’s model shows investors can also underwrite access to commodity and specialty accelerators if utilization, supply agreements and remarketing paths are credible.
Key execution issues for Radiant include securing durable supply relationships with chip vendors, establishing lender‑friendly collateral and resale protections, sustaining high utilization across deployments and avoiding single‑tenant concentration that could imperil cashflows and asset recovery.
Supply‑chain bottlenecks — foundry node access, advanced packaging throughput and test/assembly capacity — will shape timing and cost for both custom ASICs and rack‑level accelerator fleets, often stretching programs across multiple quarters even when designs are production‑ready.
Regulatory and export‑control complexity is another constraint where sovereign or government customers are involved, raising legal and logistical frictions for cross‑border leasing or sales of accelerators and potentially requiring additional compliance and governance resources.
The net effect is a bifurcating market: some customers will prefer leased, asset‑backed capacity for predictable access and budgeted operating expense, while others will invest in differentiated silicon plus software stacks or remain with public‑cloud ecosystems for integration, data governance and additional managed services. Institutional capital is therefore moving both downstream into physical compute operations and upstream into chip design — and investors will evaluate deals on whether they solve procurement frictions, enable durable differentiation, or simply concentrate recoverability risk.