Cisco launches Silicon One G300 and liquid-cooled N9000/8000 systems to accelerate AI data centers

IBM expands NVIDIA collaboration to accelerate GPU-native enterprise AI
At GTC 2026 IBM and NVIDIA broadened a partnership to push GPU-native analytics, faster multi‑modal document ingestion and validated, residency-aware on‑prem/cloud stacks for regulated customers. IBM published PoC gains with Nestlé (15→3 minute refresh; ~83% cost cut; ~30× price‑performance) and said Blackwell Ultra GPUs will be offered on IBM Cloud in early Q2 2026 — a practical route to production, albeit one that sits alongside alternative vendor approaches (e.g., Cisco’s DPU/network-focused stacks) and industry timing risks tied to supply and staged shipments.

Cisco pushes AgenticOps deeper into networking, security and observability
Cisco announced an expanded set of agent-driven features under its AgenticOps umbrella, extending autonomous troubleshooting, policy recommendations, and monitoring across campus, data center, service provider and firewall domains. The vendor also described an internal initiative, Outshift, that aims to add semantic layers for agent intent and shared context so multi-agent automations can coordinate reliably; staged rollouts and Splunk integrations are scheduled through 2026.
Gimlet Labs Raises $80M to Orchestrate Multi‑Silicon Inference
Gimlet Labs closed an $80M Series A led by Menlo Ventures to commercialize a multi‑silicon inference cloud that shards agentic workloads across heterogeneous hardware. The raise and product launch sit inside a broader wave of infrastructure bets — from edge runtimes to stateful AI platforms — that collectively signal software orchestration is becoming the primary lever for lowering inference cost and shaping procurement.
NVIDIA Leans on Groq to Expand AI-Accelerator Capacity
NVIDIA has struck a commercial pact with Groq to relieve near-term inference accelerator capacity constraints and diversify silicon sourcing; reporting around the arrangement varies (some outlets cite a large multibillion-dollar licensing/priority package while others stress non‑binding frameworks). The deal buys time for NVIDIA’s roadmap but also accelerates a structural shift toward blended, multi‑vendor accelerator fleets that raise integration, validation and regulatory questions for hyperscalers and enterprises.

NTT Global Data Centers to Scale Capacity to 4 GW, Targeting AI Demand
NTT Global Data Centers plans to deploy roughly 4 GW of nameplate IT power across 34 projects within about two years, accelerating a shift to GPU‑dense, high‑power facilities. The program sharpens near‑term pressure on interconnection, transformer and cooling supply chains and forces an energy‑strategy choice—embedded generation, contracted renewables, or hybrid solutions—that will determine usable capacity and local political risk.
Meta accelerates custom silicon push with four MTIA accelerators
Meta detailed a multi‑generation MTIA accelerator program—announcing four new chips (MTIA 300 in production; MTIA 450 with ~2x HBM) and partnerships with Broadcom and TSMC—while simultaneously locking large third‑party procurements that create a staged, hybrid deployment path. The combination compresses hardware iteration cadence, hedges foundry and packaging risks, and reshapes vendor leverage across hyperscaler AI infrastructure.

Akash Systems Debuts Diamond-Cooled AI Servers with AMD Instinct MI350X
Akash Systems launched production Diamond Cooled AI servers built with AMD Instinct MI350X GPUs and manufactured by MiTAC, backed by a reported $300M initial order. The systems claim multi‑percent efficiency and throughput gains that could shift data center density economics, but delivery timing and realized ROI will hinge on component supply, packaging capacity and site‑level integration.

Cisco CEO warns AI surge will create major winners and widespread disruption
Cisco’s CEO says the current rush into AI will produce a small set of dominant firms and significant market casualties, and urged companies and governments to prepare for workforce shifts and rising cyber risk. He pointed to tangible infrastructure demand — more than a billion pounds of orders in a quarter — while calling for collaborative policy work and stronger defensive architectures.