
Tune Talk completes cloud-native core with Mavenir
Tune Talk shifts its mobile backbone to cloud-native software
Tune Talk announced a transition of its core network and business support functions onto a software-first stack developed with Mavenir, moving away from hardware-tethered systems toward containerised, orchestrated network functions. The operator says the redesign replaces tightly coupled physical appliances with modular software components that can be updated and scaled independently, enabling faster product iteration and more automated operations.

A fully cloud-centric core also opens practical paths to features such as zero-touch provisioning, continuous delivery of service software, and analytics-driven network controls that rely on elastic compute. For smaller or digitally native carriers, the advantage lies in trimming capital cycles and converting heavy upfront infrastructure spending into more flexible operational expenses. The shift reduces dependence on bilateral agreements with legacy vendors and changes procurement timelines: software releases, not hardware shipments, become the gating factor for new capabilities.

The technical trade-offs are material, however. Cloud stacks must meet telecom-grade latency and resilience requirements, and the choice between public, private and hybrid clouds affects regulatory fit. Regulators and enterprises will scrutinise where user data and signalling workloads run, so deployment geography and contractual controls matter as much as the software itself.

In practice, cloud-native cores enable advanced automation and AI tooling, but they do not deliver insights or monetisation automatically; robust data pipelines and engineering investment are prerequisites. The announcement is less a one-off migration than a datapoint in a months-long trend of ASEAN operators experimenting with software-led architectures and open interfaces.
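To make the "elastic compute" point concrete, the kind of analytics-driven scaling the article alludes to can be sketched as a proportional scale-out rule, similar in spirit to a horizontal autoscaler sizing replicas of a containerised network function. This is a minimal illustrative sketch; the function name, thresholds, and replica bounds are assumptions for illustration, not anything Tune Talk or Mavenir has published.

```python
# Toy proportional scale-out rule for a containerised network function (CNF).
# Hypothetical parameters; a real deployment would drive this from telemetry
# pipelines and an orchestrator, not a standalone script.

def desired_replicas(current_replicas: int,
                     cpu_utilisation: float,
                     target_utilisation: float = 0.6,
                     min_replicas: int = 2,
                     max_replicas: int = 20) -> int:
    """Size the replica pool so average CPU load approaches the target
    utilisation, clamped to configured bounds."""
    if cpu_utilisation <= 0:
        return min_replicas
    raw = current_replicas * (cpu_utilisation / target_utilisation)
    return max(min_replicas, min(max_replicas, round(raw)))

# Example: 4 replicas running hot at 90% CPU scale out to 6.
print(desired_replicas(4, 0.90))
```

Clamping to minimum and maximum replica counts is what keeps elastic scaling telecom-safe: the floor preserves redundancy for resilience requirements, while the ceiling caps spend on elastic compute.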
Competing incumbents with legacy estates face longer, riskier transition paths and may adopt hybrid approaches that preserve critical on-prem equipment while federating new functions to cloud partners. In the near term, expect faster pilot cycles from challengers, an uptick in demand for network-software talent, and renewed commercial leverage for vendors that can supply validated cloud-native network functions. The strategic question for buyers and regulators is how to balance agility gains against systemic risk to national communications infrastructure and data governance.
Recommended for you
Samsung tests AI-native vRAN with NVIDIA compute at MWC
Samsung demonstrated an AI-integrated virtual RAN using NVIDIA accelerated processors at MWC 2026, validating AI workloads running alongside radio functions. The showcase tightens the link between cloud patterns and telecom stacks and signals cloud-style compute moving closer to live networks.

NVIDIA-led consortium targets AI-native 6G architecture
A consortium led by NVIDIA and several carriers aims to bake intelligence into 6G network design, shifting radio control toward software and specialized accelerators. This move accelerates demand for telco-grade AI silicon, cloud-edge orchestration, and standards influence that could reshape procurement and vendor leverage.

SoftBank Corp. Pursues Telco AI Cloud to Become AI Infrastructure Provider
SoftBank introduced Telco AI Cloud, pairing a GPU cloud, AI‑RAN MEC and the Infrinia AI Cloud OS with the AITRAS orchestrator to push distributed, low‑latency inference across operator infrastructure. The initiative arrives alongside industry efforts — GSMA’s Open Telco AI (model/benchmark‑led) and an NVIDIA‑anchored accelerator/telemetry track — creating a live contest over whether operator AI stacks will be model‑centric or hardware/telemetry‑centric.
Native secures $42M to build cloud policy enforcement platform
Cloud-security startup Native closed $42M to accelerate a provider-native policy enforcement layer across AWS, Azure, GCP, and OCI. The round includes a $31M Series A lead and a board addition, positioning the company to scale engineering headcount and push enterprises from detection to preventive control. The raise arrives as the market bifurcates: some vendors double down on provider-native orchestration while others (e.g., Cylake) pursue on‑prem telemetry retention, highlighting a split in enterprise procurement demands.

Tencent Moves to Deepen Middle East Cloud Presence with New Data‑centre Zones
Tencent will broaden its cloud infrastructure across the Middle East and other regions, aiming to add multiple availability zones within the next 12–18 months as it pushes international growth beyond gaming. The move positions Tencent against major Western cloud providers amid rising regional IT investment and a growing AI infrastructure build‑out.
Commotion launches AI OS with NVIDIA Nemotron to operationalize enterprise AI
Commotion unveiled an AI OS built with NVIDIA Nemotron and backed by Tata Communications, aiming to turn copilots into governed, autonomous "AI Workers". Early deployments report 30–40% autonomous resolution, faster interactions, and enterprise-grade governance.

uCloudlink outlines connectivity, IoT and pet-tech revenue play
uCloudlink unveiled a terminal-side roadmap at MWC that packages CloudSIM and connection orchestration to target legacy handsets, IoT fleets, and the pet-tech subscription market. The plan pressures MVNO economics, expands managed-device addressability, and creates immediate TAM optics for pet subscription services.

Coinerella Rebuilds Platform on European Cloud Providers
Coinerella migrated its stack to European providers to regain data locality and lower infrastructure spend while accepting more operational responsibility. The shift exposes integration gaps, tooling shortfalls, and a clear path for platform engineering and sovereign-cloud economics to scale.