Match AMD Hardware to Your AI Needs
Get an overview of the AMD portfolio for AI, starting with AMD EPYC server CPUs, and explore how to use different types of hardware throughout your AI workflow.
No matter the size or scale of your AI deployments, AMD EPYC server CPUs give you a high-performance, energy-efficient foundation for enterprise AI and general-purpose workloads.
5th Gen AMD EPYC server CPUs stand up to the demands of AI with options that include high core counts or high frequency, plenty of memory and I/O bandwidth, and support for AVX-512 instructions. Built-in security technologies from AMD Infinity Guard help you keep data protected all the way to the silicon layer.7
To build an AI-ready data center, you’ll need a foundation of general-purpose compute designed for security, augmented with GPUs as needed to fit your performance and workload requirements. Here’s how to optimize your next data center as an AI-capable multi-tasking powerhouse.
There’s a limit to the space and power in your data center. By replacing aging servers with new servers built on high-density CPUs, you can consolidate to fewer machines, reduce related energy consumption, and free up space and power for AI.
Move from 2020-era Intel® “Cascade Lake” servers to 5th Gen AMD EPYC CPU-powered servers.
Fourteen AMD EPYC 9965 CPU-based servers can deliver the same integer performance as 100 old servers running Intel Xeon Platinum 8280 CPUs.
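The consolidation claim above reduces to simple arithmetic. This illustrative sketch (using only the server counts stated above, not independently verified figures) shows how to translate it into a consolidation ratio and footprint reduction:

```python
# Server-consolidation estimate based on the claim above:
# 14 AMD EPYC 9965 servers matching the integer performance
# of 100 Intel Xeon Platinum 8280 servers.
old_servers = 100
new_servers = 14

# How many legacy servers each new server replaces.
consolidation_ratio = old_servers / new_servers

# Fraction of the server footprint freed up for AI capacity.
footprint_freed_pct = (1 - new_servers / old_servers) * 100

print(f"Consolidation ratio: {consolidation_ratio:.1f}:1")   # 7.1:1
print(f"Server footprint reduced by {footprint_freed_pct:.0f}%")  # 86%
```

The same calculation can be rerun with your own server counts to estimate rack space and power reclaimed in a refresh.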
5th Generation AMD EPYC 9965 CPUs outperform the latest Intel Xeon 6 6980P CPUs with “performance cores.”
Many inference workloads run on CPUs and don’t need special accelerator hardware. If you plan to run small or medium models or have occasional AI tasks, high-core-count 5th Gen AMD EPYC server CPUs may meet your performance requirements.
AMD EPYC 9965 CPUs outperform Intel Xeon 6 6980P CPUs on the TPCxAI benchmark.
You may need dedicated AI acceleration for training, inference on large models, large-scale deployments, or low-latency use cases. Start with high-frequency AMD EPYC 9005 server CPUs as the host to take advantage of high core frequency and large memory capacity. Add GPUs like AMD Instinct™ accelerators, available in a PCIe form factor.
Data protection must be a consideration in every AI deployment. AMD EPYC server CPUs are designed with security in mind to be resistant to many sophisticated attacks. Built-in at the silicon level, AMD Infinity Guard7 helps defend against internal and external threats to keep your data safe.
Make sure you can quickly scale with a flexible AI infrastructure that has the right combination of on-premises and cloud resources. You can find AMD EPYC server CPUs across hundreds of hardware options and more than a thousand public cloud instances.
Before investing in AI hardware, data center architects should assess their AI workloads and performance requirements. In some cases, general-purpose AMD EPYC server CPUs may provide enough performance for inference, avoiding the need to purchase GPUs.
In general, AMD EPYC server CPUs deliver enough performance for models up to 20 billion parameters. This includes many popular large language models (LLMs) and other generative AI applications.
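As a rough sizing check against the 20-billion-parameter guideline above, you can estimate whether a model’s weights fit in server memory from its parameter count and numeric precision. This is a back-of-the-envelope sketch, not AMD sizing guidance:

```python
def model_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed to hold model weights.

    bytes_per_param: 2 for FP16/BF16, 1 for INT8, 4 for FP32.
    Ignores activations and KV-cache overhead, so treat it as a lower bound.
    """
    return params_billions * 1e9 * bytes_per_param / 1e9  # decimal GB

# A 20B-parameter model in BF16 needs roughly 40 GB for weights alone,
# well within the memory capacity of a modern dual-socket server.
print(f"{model_memory_gb(20):.0f} GB")        # 40 GB
print(f"{model_memory_gb(20, 1):.0f} GB")     # 20 GB with INT8 quantization
```

Quantizing to INT8 halves the footprint, which is one reason mid-sized models are practical targets for CPU-only inference.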
AMD EPYC server CPUs are a great fit for many inference use cases. These include classic machine learning, computer vision, memory-intensive graph analysis, recommendation systems, natural language processing, and small to medium generative AI models, like LLMs. They’re also well suited to expertly tuned AI agents and collaborative prompt-based pre-processing, which are common in retrieval-augmented generation (RAG) pipelines.
5th Gen AMD EPYC server CPUs deliver 70% better end-to-end AI performance than Intel Xeon 6.4 They also offer up to 89% better chatbot performance on DeepSeek with AMD EPYC 9965 vs. Intel Xeon 6980P8 and impressive performance for LLMs.
If you need to comply with data locality or privacy requirements, or have strict requirements for low latency, consider running AI on premises. If you need the flexibility to scale up or down quickly, the cloud is a great choice for on-demand resources.
With AMD EPYC server CPUs, you can choose from a range of core, frequency, memory, and power options. You’ll get the best results by matching the CPU to the AI workloads you expect to run the most.
For real-world AI and machine learning applications, AMD EPYC 9965 outperforms the Intel Xeon 6980P.
Confidently deploy chatbots, intelligent search agents, and other generative AI applications, with the performance to handle multi-billion-parameter LLMs. The AMD EPYC 9965 outperforms the Intel Xeon 6980P.
Match your infrastructure needs to your AI ambitions. AMD offers the broadest AI portfolio, open standards-based platforms, and a powerful ecosystem—all backed by performance leadership.
With AMD ZenDNN and AMD ROCm™ software, developers can optimize their application performance while using their choice of frameworks.