Discover the world’s leading CPU for AI at SC24

Nov 13, 2024

Artificial intelligence (AI) is fueling business growth and innovation on a global scale. AI technologies bring broad industry impact, transforming how we work and driving efficiency, insights, and competitiveness. Organizations of nearly every size can harness the power of AI to uncover new opportunities, optimize their operations, and deliver value to customers in ways that were previously unimaginable. We are only beginning to see the potential of AI in applications that automate processes in manufacturing and automotive, curb financial fraud, and create breakthroughs in medical research. The possibilities are truly limitless, yet organizations often lack the infrastructure to keep pace with the demands of AI.

AI requires exceptional performance, flexibility, and capacity to process massive datasets and convert the data into real-time insights. Many data centers are already running at or near capacity in terms of available space, power, or both. To overcome these challenges, organizations must free up space and energy to accommodate AI in their data centers.

AMD helps organizations transform their environments with purposefully engineered solutions built for highly complex workloads. AMD EPYC™ processors are the world’s leading CPU for AI.i AMD EPYC processor-based servers deliver leadership performance and efficiency that enable significant workload consolidation, freeing up space and energy in existing data centers for new AI workloads. As a result, organizations can accelerate the full range of data center workloads — including general purpose AI, model development, testing, training, and inference.

5th Generation AMD EPYC processors are the latest addition to this robust family. These processors offer key advantages for organizations looking to accomplish more with AI:

  • Maximizing per-server performance. 5th Generation AMD EPYC processors can match the integer performance of legacy hardware using up to 86% fewer racks, dramatically reducing physical footprint, power consumption, and the number of software licenses needed. This frees up space for new or expanded AI workloads.ii
  • Delivering leadership AI inference performance. Many AI workloads can run efficiently on CPU-only servers featuring 5th Generation AMD EPYC processors, including language models with up to 13 billion parameters, image and fraud analysis, and recommendation systems. Servers running two of the latest CPUs deliver up to 2x the inference throughput of previous-generation offerings.iii
  • Increasing GPU acceleration. For larger and more demanding workloads, GPUs may be the right choice for AI processing. The AMD EPYC family includes options optimized to serve as host CPUs for GPU-enabled systems, helping increase performance on select AI workloads and improve the ROI of advanced GPU AI engines. For example, a server powered by the high-frequency AMD EPYC 9575F processor with 8x GPUs delivers up to 20% greater system performance running Llama3.1-70B than a server with the same 8x GPUs using Intel Xeon 8592+ processors as the host CPU.iv
  • Offering a broad ecosystem of support. AMD collaborates with an extensive network of solution providers whose offerings feature the latest AMD EPYC processors. Companies and government organizations around the globe trust AMD to enhance their most important workloads. 5th Generation AMD EPYC processors are available today, with support from industry leaders in supercomputing and AI as well as all major ODMs and cloud service providers.

Want to learn more about the world’s leading CPU for AI? Join us at SC24, November 17–22 in Atlanta. Visit the AMD booth in Hardware Zone 4 and Zone 2 (#8 and #9) to meet with our experts and watch technology demos showcasing AMD EPYC processors.

Let’s accelerate your path to AI leadership.
