MWC 2026: AMD Advances AI for Telco Networks

Mar 02, 2026


As telco operators move from AI experimentation to production and from traditional radio access networks (RAN) to open, virtualized architectures, they face the challenge of making innovation work across their networks at real-world scale. Success requires more than a model or a single layer of infrastructure: it takes an open ecosystem to develop telco-grade AI, software to operationalize it reliably, and efficient compute designed for distributed edge deployments.

At MWC 2026 in Barcelona, Spain, AMD is showcasing how it supports open, collaborative industry initiatives like Open Telco AI. With its product portfolio, AMD is also demonstrating right-sized, end-to-end solutions that can carry AI ambitions into production – from enterprise AI software to leadership CPUs, GPUs, networking technologies and adaptive computing.

Accelerating Telco-Grade AI with Open Telco AI

Telco networks are among the world’s most complex, mission-critical systems. Their operators depend on AI to dynamically optimize networks, automate operations and improve resiliency without disrupting the live services that billions of people rely on. Yet even as AI models rapidly improve, those gains haven’t consistently translated into telco-specific performance, as illustrated by GSMA’s report that only 16% of generative AI deployments have been on networks.

That’s why AMD, along with AT&T, TensorWave and other telco industry leaders, will participate in Open Telco AI, a new global initiative led by GSMA and announced at MWC to accelerate telco-grade AI models and systems through open collaboration. The initiative is anchored by the launch of open-telco.ai, a new portal designed to bring operators, vendors, researchers, and developers together around shared resources, datasets, tools, and benchmarking. The initiative delivers new open-telco models from AT&T, compute from AMD, and hosting from TensorWave to help scale deployment. In the collaboration, AMD Instinct™ GPUs train the Open Telco AI models, helping turn shared datasets and benchmarks into practical telco-focused models the ecosystem can build on. Paired with the open and rapidly evolving AMD ROCm™ software platform, AMD Instinct GPUs provide an open foundation for training and inference that helps teams iterate quickly as they progress from experimentation to validation.

From Training to Production: Operationalizing Telco AI with the AMD Enterprise AI Suite

Training telco-grade models is a critical milestone, but delivering real impact in live networks depends on a software layer that turns models into repeatable, governed, and scalable services.

The AMD Enterprise AI Suite is designed to help organizations move from AI experimentation to large-scale production by connecting key open-source AI frameworks and generative AI models with an enterprise-ready platform on AMD compute.

The suite brings together production-oriented components designed for teams operating GPU infrastructure at scale, spanning model serving, validated workflows, governance, and developer environments. It’s built with a Kubernetes-native, container-based approach intended to fit into enterprise DevOps/MLOps practices while supporting security and multi-team governance.
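To make the Kubernetes-native, container-based approach concrete, here is a minimal illustrative sketch of what a GPU-backed model-serving Deployment can look like on a cluster with the AMD GPU device plugin installed. The service name, labels, and container image below are hypothetical, not part of the AMD Enterprise AI Suite:

```yaml
# Hypothetical sketch of a Kubernetes-native model-serving Deployment.
# The name, labels, and image are placeholders; "amd.com/gpu" is the
# resource exposed by the AMD GPU device plugin for Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-serving            # hypothetical service name
  labels:
    app: llm-serving
spec:
  replicas: 2                  # scale horizontally like any other service
  selector:
    matchLabels:
      app: llm-serving
  template:
    metadata:
      labels:
        app: llm-serving
    spec:
      containers:
        - name: inference
          image: registry.example.com/telco-ai/inference:latest  # placeholder image
          resources:
            limits:
              amd.com/gpu: 1   # request one AMD GPU per replica
          ports:
            - containerPort: 8080
```

Because the workload is expressed as a standard Deployment, it inherits the rollout, scaling, and governance mechanisms (RBAC, namespaces, quotas) that enterprise DevOps/MLOps teams already operate.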

For telco operators, that combination creates a practical path from domain-trained models to production AI services that include network automation and operational intelligence, while keeping the platform open and aligned with enterprise requirements.

Building a Sustainable Edge Foundation with AMD EPYC™ 8005 Server CPUs

As open and virtualized networks scale, challenges include energy costs, resource utilization in constrained edge environments, automation, and long-term scalability. In that context, the model and software layers benefit from an efficient CPU foundation designed for distributed deployments where power, space, and deterministic behavior matter.

The recently announced AMD EPYC™ 8005 Server CPUs are designed for challenging edge environments. They are optimized for telco, with high compute density to support virtual RAN (vRAN) workloads, including compute-intensive Layer 1 processing. The processors target real deployment conditions with support for wide thermal operating ranges, enabling original equipment manufacturers to certify NEBS-compliant platforms for rugged and outdoor telco deployments, as well as small-form-factor systems. All of this helps operators align infrastructure choices with business priorities as they scale commercial vRAN.
