
YOFC Commercializes Hollow-Core Fibre to Accelerate AI Compute Networks
Summary: Context, Claims, and System-Level Reality
At MWC Barcelona 2026, YOFC introduced a commercial hollow‑core optical fibre and an accompanying all‑optical product stack aimed at latency‑sensitive AI compute networks. The company paired hollow‑core fibre with multi‑core cabling and high‑rate transceivers (400G/800G support) and positioned the technology as production‑ready rather than an experimental prototype.
YOFC publicized concrete physical‑layer numbers: roughly a 31% propagation‑delay reduction compared with conventional solid‑glass fibre, nonlinear interaction approximately 1,000x weaker (permitting higher launch powers and throughput), and attenuation below 0.1 dB/km. Those metrics map directly to longer links with less amplification and lower per‑bit energy in the optical transport segments used for distributed model training and inter‑data‑center fabrics.
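The quoted delay figure follows directly from refractive indices: light in an air‑filled core travels close to vacuum speed, while silica slows it by its index of refraction. A back‑of‑envelope check (the indices below are standard textbook values at 1550 nm, not figures from the announcement):

```python
# Sanity-check the quoted ~31% propagation-delay reduction.
# Assumed indices: fused silica core ~1.468; air-filled hollow core ~1.0003.
C = 299_792.458  # speed of light in vacuum, km/s

def delay_us_per_km(n: float) -> float:
    """One-way propagation delay in microseconds per kilometre of fibre."""
    return n / C * 1e6

solid = delay_us_per_km(1.468)    # ~4.90 us/km
hollow = delay_us_per_km(1.0003)  # ~3.34 us/km
reduction = 1 - hollow / solid    # ~0.319

print(f"solid-core:  {solid:.2f} us/km")
print(f"hollow-core: {hollow:.2f} us/km")
print(f"reduction:   {reduction:.1%}")
```

Under these assumptions the arithmetic lands at roughly 31.9%, consistent with YOFC's headline claim.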
At the same event, other vendors highlighted complementary — and in some cases overlapping — pathways to AI‑ready networks. Huawei showcased an AI‑aware hardware and software suite emphasizing analytics, automation and cross‑layer orchestration, citing fault localization to within 10 m, modeled reach improvements of roughly 20%, and operational energy savings of about 40% through controls and hibernation. ZTE, Cisco and an NVIDIA‑led consortium stressed rack density, switching silicon, telemetry and orchestration as necessary enablers of low‑latency AI services.
The juxtaposition highlights an important point: YOFC's hollow‑core innovation directly attacks propagation delay and nonlinear limits at the physical layer, but end‑to‑end latency and energy outcomes claimed by operators depend on coordinated upgrades across access, transport, switching and compute placement. Huawei's public SLA contours (1–5 ms targets across metro and national zones) show how system design, control planes and edge placement can produce similar user‑facing latency goals without changing every segment of the fibre plant.
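One way to see how such SLA targets constrain geography is to compute the maximum one‑way fibre reach that fits inside a propagation‑delay budget. The sketch below ignores switching and queuing delay entirely, so it is an idealised upper bound rather than an SLA model, and it reuses the same assumed refractive indices as above:

```python
# Idealised fibre reach for a given one-way propagation-delay budget.
# Ignores switching/queuing delay; indices are assumed textbook values.
C = 299_792.458  # speed of light in vacuum, km/s

def max_reach_km(budget_ms: float, n: float) -> float:
    """Longest one-way fibre run whose propagation delay fits the budget."""
    return budget_ms / 1000 * C / n

# Solid-core silica (n ~ 1.468) vs air-filled hollow core (n ~ 1.0003)
for budget_ms in (1.0, 5.0):
    print(f"{budget_ms} ms budget: "
          f"solid ~{max_reach_km(budget_ms, 1.468):.0f} km, "
          f"hollow ~{max_reach_km(budget_ms, 1.0003):.0f} km")
```

Under these assumptions a 1 ms budget spans roughly 204 km of solid‑core fibre versus roughly 300 km of hollow‑core — which illustrates why, inside a metro radius, edge placement and control‑plane design can deliver comparable user‑facing latency without retrofitting the fibre plant.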
Practical adoption will therefore be gated by interoperability and field validation. Hollow‑core brings measurable optical benefits, but connector/splice losses, bend sensitivity and deployment complexity mean that site‑by‑site retrofits will be slower and more selective than vendor demos imply. Independent end‑to‑end trials in multi‑vendor topologies will be the decisive tests for carriers and hyperscalers evaluating capex tradeoffs.
YOFC framed its solutions around multiple corridors — from submarine‑grade undersea links to terrestrial metro and rack‑to‑rack use cases — which shortens buyer evaluation cycles by offering an integrated procurement narrative. Yet submarine permitting, cross‑border regulation and cable‑supply logistics remain material hurdles for rapid intercontinental rollouts.
For cloud providers and telcos, the announcement reframes procurement priorities: physical‑layer upgrades that unlock measurable latency and energy savings now compete more directly with packet‑layer and orchestration investments. For startups and investors, this shifts where value accrues in the AI stack toward fibre manufacturers, optical‑component integrators and systems integrators able to coordinate undersea, metro and campus upgrades.
In sum, YOFC's commercial hollow‑core fibre is a significant material advance with clear quantitative claims, but its real‑world impact depends on coordinated system upgrades, supply security, and rigorous multi‑vendor field trials that validate end‑to‑end latency and energy improvements.