GSMA Launches Open Telco AI to Build Telco-Grade Models and Tooling
Context and Chronology
The GSMA has created a collaborative platform called Open Telco AI to coordinate model development, datasets and compute targeted at telecom operators and vendors. The portal centralizes open telco models, curated telecom datasets and access to training and inference resources, and links to developer engagement programmes, including a troubleshooting challenge that drew over 1,000 registrations. GSMA positions the project as a response to a measurable shortfall in operator-grade solutions: only 16% of GenAI deployments have been applied to network operations, a gap driving vendor and operator interest in specialized tooling. For registration and resources, the initiative points users to GSMA.com/open-telco-ai, which will host the model library and leaderboard.
What was released
Founding contributors named in the launch include AT&T and AMD, with compute support routed through AMD hardware and partner TensorWave. The portal will publish multiple open-weight telco models from contributors, plus a library of knowledge graphs, embeddings and fine‑tuning datasets submitted by universities and industry groups. A public leaderboard will report performance across seven telecom-specific benchmarks, enabling repeatable evaluation and submission from local environments. Community activities — competitions, challenges and shared pipelines for synthetic data generation — are integral to the release plan and to seeding reproducible baselines.
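The local-evaluation workflow implied above (run the telecom benchmarks against a model on your own hardware, then package the scores for submission to the leaderboard) can be sketched roughly as follows. Every name in this sketch is a hypothetical placeholder: the real benchmark identifiers, scoring metrics and submission schema are defined by the GSMA portal and are not detailed in this article.

```python
import json
import statistics

# Hypothetical benchmark identifiers: seven names standing in for the
# portal's seven telecom-specific benchmarks (illustrative only).
BENCHMARKS = [
    "network-troubleshooting",
    "config-generation",
    "alarm-correlation",
    "spec-qa",
    "log-summarization",
    "capacity-forecasting",
    "field-ops-dialogue",
]

def toy_model(prompt: str) -> str:
    """Stand-in for a locally hosted open-weight telco model."""
    return "ack:" + prompt

def score_task(benchmark: str, model) -> float:
    """Placeholder metric: fraction of canned prompts the model
    answers as expected. Real benchmarks would load task data
    and apply their own scoring."""
    prompts = [f"{benchmark}-case-{i}" for i in range(5)]
    hits = sum(model(p) == "ack:" + p for p in prompts)
    return hits / len(prompts)

def build_submission(model, model_name: str) -> dict:
    """Run all benchmarks locally and package results for upload."""
    scores = {b: score_task(b, model) for b in BENCHMARKS}
    return {
        "model": model_name,
        "scores": scores,
        "mean": statistics.mean(scores.values()),
    }

submission = build_submission(toy_model, "example-telco-7b")
print(json.dumps(submission, indent=2))
```

The design point the sketch illustrates is separation of concerns: scoring runs entirely in the contributor's environment, and only a small, structured results object crosses the boundary to the public leaderboard, which is what makes evaluation repeatable and submission lightweight.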
Broader industry context and parallel efforts
Separately, an industry group anchored by NVIDIA has convened operators and vendors around a complementary agenda that places programmable edge compute, low-latency inference and orchestration primitives at the centre of next‑generation radio architectures. Participants publicly associated with that effort include Nokia, SoftBank, and T‑Mobile US, and its emphasis is on reference implementations that embed accelerators and telemetry pipelines into radio and edge stacks rather than on an open model library and benchmarks alone.
Strategic implications and synthesis
GSMA’s benchmark-led, model-and-data-centric approach and the NVIDIA‑anchored architecture consortium represent two industry responses to the same operational pressures: the need for reproducible model evaluation, faster operator trials, and cloud‑to‑edge inference. They can be complementary — GSMA’s leaderboards could supply the objective metrics that architecture initiatives use to select accelerators and orchestration patterns — but they also risk divergence: if benchmark definitions or datasets favour particular hardware profiles or telemetry interfaces, one track may lock in different de facto standards than the other. Technical and regulatory constraints — deterministic latency, spectrum safety, and auditability — will require staged field trials and joint lab validations before many operator deployments proceed.
For operators, the immediate choice is whether to participate in one or both tracks to influence benchmarks and reference implementations. For semiconductor and cloud partners, the split creates multiple avenues to capture value: by contributing validated models and compute (GSMA) or by supplying integrated accelerator-and-orchestration stacks (architecture consortia). Vendors that rely on opaque, closed stacks face growing pressure from both sides to demonstrate reproducible real-world performance against operator-focused metrics.