AI Everywhere, for Everyone
The possibilities that AI enables are no longer confined to a single environment or domain. Today’s AI workloads have moved beyond the cloud into end-user devices and on-site systems. For partners, this shift creates a wealth of opportunity: customers now seek AI configurations that scale, integrate with their existing systems, and deliver real outcomes wherever work is done – at the desk, at the edge, or in the data center.
With a host of new products announced at CES 2026, AMD continues its mission of delivering an open ecosystem, enabling AI everywhere, for everyone. From rack-scale infrastructure in the data center to embedded systems to on-device AI, the AMD portfolio of AI products is growing and offering customers more flexibility and performance than ever.
Cloud and the Data Center: The Foundation for AI Scalability
Large-scale AI begins in the data center. Training frontier models, serving inference, and orchestrating agentic workflows demand infrastructure at scale, and that requires more than raw compute performance. Customers need platforms that combine high-performance acceleration, powerful CPUs, and networking that links everything together for an optimized outcome.
The cornerstones of the AMD data center portfolio, including AMD EPYC™ server CPUs, AMD Instinct™ GPUs, and AMD Pensando™ DPUs, enable partners to design and deploy AI infrastructure that supports even the most intensive enterprise AI workloads and hyperscale deployments.
Through “Helios,” an all-in-one rack-scale AI reference platform built on next-generation AMD Instinct™ MI400 Series GPUs, AMD EPYC server CPUs, and AMD Pensando NICs, partners can chart a path toward breakthrough performance, bandwidth, and energy efficiency while reducing integration friction and cost and accelerating time to market for customers.
“Helios” is a clear path to next-generation AI infrastructure that’s built using open standards and proven product portfolios.
Physical AI: Bringing Intelligence Into the Real World
Where the cloud provides scale and coordination, certain AI workloads must operate where data is generated and decisions are made. Physical AI introduces unique challenges for customers, including latency, reliability, and power efficiency in constrained environments where systems operate continuously and long-term stability is paramount.
Workloads in these environments span use cases ranging from industrial automation and robotics to smart infrastructure and autonomous systems. AMD addresses customer requirements in these areas with its Embedded portfolio, including new AMD Ryzen™ AI Embedded processors designed specifically for physical AI deployments.
Combining efficient AI acceleration with x86 compatibility, these processors let AMD partners build intelligent configurations for their customers that integrate seamlessly with existing systems and meet the real-world requirements and constraints of embedded environments. Models can be trained or refined centrally, then deployed on embedded platforms on site, helping to unlock new realms of productivity and real-time compute.
On-Device AI: Where Work Happens
AI is becoming a core part of the end-user experience, from productivity tasks and analysis to content creation. On-device AI enables faster responses, greater privacy, and greater efficiency by bringing inference closer to the end user.
At CES 2026, AMD expanded its AI PC portfolio with a host of new AMD Ryzen™ AI platforms including AMD Ryzen™ AI 400 Series processors, AMD Ryzen™ AI PRO 400 Series processors, and new AMD Ryzen™ AI Max processor SKUs. These platforms deliver up to a 60 TOPS NPU1 and full AMD ROCm™ software platform support for seamless cloud-to-client AI scaling, letting users run AI workloads locally while maintaining a connection to the cloud when additional resources are needed.
Help customers modernize their fleet with smarter, more responsive systems that are AI-ready and bring new realms of performance, security, and manageability to their operations.
One Open Ecosystem, Limitless Opportunities
Across cloud infrastructure, physical AI systems, and end-user devices, the AMD approach to AI is open. The strategy behind both existing and upcoming AMD products is to give partners choice: choice in how to design systems, integrate software, and deliver solutions and value to customers.
By pairing hardware innovation with open software standards, AMD enables partners to address an increasingly diverse and demanding range of AI use cases without locking customers into a single architecture.
Whether you’re supporting large-scale AI deployments, helping customers to embed intelligence into physical systems, or enabling a fleet refresh to prepare a workforce for on-device AI workflows, leverage the portfolio of AMD products to help customers embrace the power and possibilities of AI everywhere.
To learn more about AMD and how you can support aspiring AI customers, visit amd.com or speak to your AMD representative.
AMD Arena
Enhance your AMD product knowledge with training on AMD Ryzen™ PRO, AMD EPYC™, AMD Instinct™, and more.
Subscribe
Get monthly updates on AMD’s latest products, training resources, and Meet the Experts webinars.
Footnotes
- Trillions of Operations per Second (TOPS) for an AMD Ryzen processor is the maximum number of operations per second that can be executed in an optimal scenario and may not be typical. TOPS may vary based on several factors, including the specific system configuration, AI model, and software version. GD-243.