AMD Cements Data Center Leadership at Financial Analyst Day 2025
Nov 11, 2025
News Snapshot
- AMD reiterated its annual accelerator cadence and previewed the “Helios” rack-scale AI solution, targeted for Q3 2026 availability
- AMD ROCm adoption is accelerating, with downloads up ~10x year over year, a strong signal of developer momentum
- AMD EPYC processors continue their strong adoption rate, with a ~40% revenue market share1
AI is transforming every layer of the data center, creating a new industrial revolution of IT. The modern AI factory requires harmonized tools engineered to work together: accelerated clusters for LLMs, general‑purpose compute and storage to serve them, and advanced networking to secure and accelerate data movement.
At our Financial Analyst Day, we laid out how AMD is building the end‑to‑end platform with EPYC™ CPUs for compute, AMD Instinct™ GPUs for training and inference, Pensando™ networking to offload and protect the data path, and ROCm™ to unify the software stack. From chiplets to clusters, we’re engineering the performance and efficiency that will allow customers to run AI at a global scale.
Connecting Billions of Users Every Day
Data-intensive companies rely on AMD EPYC processors to run mission-critical workloads. From hyperscalers and cloud-native innovators like Netflix and Uber to enterprises modernizing their data centers, AMD continues to gain momentum.
- 60 percent of the Fortune® 100 companies have deployed AMD EPYC processors2.
- 5th Gen AMD EPYC™ processors can accelerate time to results by 1.8x for industry workloads3 and can enable approximately 80% lower TCO for core IT work4.
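The headline figures above follow directly from the footnote data; a quick back-of-the-envelope sketch (using the GROMACS uplifts from footnote 3 and the SPECrate® scores from footnote 4, illustrative only):

```python
import math

# Footnote 3: per-benchmark GROMACS uplifts vs. the Intel Xeon 6980P baseline
uplifts = {"gmx_water1536K_PME": 1.76, "benchPEP": 1.83}
avg_uplift = sum(uplifts.values()) / len(uplifts)
print(f"average uplift ~{avg_uplift:.1f}x")  # ~1.8x

# Footnote 4: servers needed to deliver 391,000 units of SPECrate2017_int_base
target = 391_000
legacy = math.ceil(target / 391)    # 2P Xeon Platinum 8280 server, score 391
epyc = math.ceil(target / 3_100)    # 2P EPYC 9965 server, score 3100
print(f"{legacy} legacy servers vs. {epyc} EPYC servers")
```

The roughly 8x reduction in server count is what drives the consolidation-based TCO estimate; actual savings depend on power, rack space, licensing, and the other assumptions listed in footnote 4.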
"Venice," the next generation of EPYC CPUs, is expected to extend leadership in performance, density and energy efficiency for AI and general-purpose workloads.
Advancing AI Leadership Across AMD Instinct GPUs and ROCm
We are accelerating AI innovation with an open-source software stack that scales from AI inference to training. Seven of the world’s top ten AI companies now deploy AMD Instinct accelerators at scale, underscoring growing industry trust in AMD hardware, software, and systems.
At Financial Analyst Day, we also revealed more about the next generation AMD Instinct™ MI400 portfolio, which is designed to advance AI and scientific computing with leadership performance and efficiency. Building on this momentum, AMD detailed the following products within the MI400 family:
- The AMD Instinct MI455X GPU. AMD's most advanced AI accelerator, co-designed for next-generation AI workloads, it will use the AMD leadership chiplet architecture, 3.5D packaging and 12 stacks of HBM4 memory to drive leadership performance and scale for AI training and inference workloads. AMD Instinct MI455X GPUs power the AMD “Helios” rack-scale platform, providing the foundation for the open, scalable infrastructure that will power the world’s growing AI demands.
- The AMD Instinct MI430X GPU. Optimized for a blend of sovereign AI and HPC compute, these GPUs will deliver hardware-based FP64 performance for HPC workloads and high-capacity HBM4 memory for AI capabilities. The AMD Instinct MI430X GPUs are at the heart of the newly announced Discovery supercomputer at Oak Ridge National Laboratory.
At the foundation of this innovation at scale is the AMD ROCm open software stack. Delivering a unified software stack and consistent performance for everything from a single GPU to entire clusters, ROCm puts open source and developers first and its adoption is rapidly accelerating, with a 10x year-over-year increase in downloads and 2 million models supported on Hugging Face.
AMD also highlighted its continued progress and developer focus with ROCm, including:
- Building on leading open-source AI projects, with support for frameworks and tools such as PyTorch, TensorFlow, JAX, Triton, Hugging Face Transformers, vLLM, SGLang, Ollama, ComfyUI and Unsloth.
- Building out higher levels of abstraction to make code portable across platforms.
- Expanding AI capabilities to help developers optimize code generation and automate GPU programming.
With ROCm, AMD continues to shorten the path from idea to deployment. Through faster time to releases, deeper framework integration, and a developer-first approach, AMD is giving teams the tools to push performance boundaries and bring new AI innovations to market faster.
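As an illustrative sketch of that developer experience (assuming a ROCm-enabled PyTorch build, where PyTorch reuses its CUDA-style device API on AMD GPUs via HIP), existing framework code runs without changes:

```python
import torch

# On ROCm builds of PyTorch, torch.cuda.* maps to AMD GPUs through HIP,
# so framework-level code is unchanged. This sketch falls back to CPU
# when no supported GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(1024, 1024, device=device)
y = x @ x.T  # matmul dispatched to the GPU BLAS library when available
print(device, tuple(y.shape))
```

On a ROCm build, `torch.version.hip` is set rather than `torch.version.cuda`; this API reuse is what lets the frameworks listed above target AMD Instinct GPUs with minimal porting effort.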
Networking that Scales with AI
AMD advanced networking technologies provide the fabric that stitches AI solutions together, creating an open, scalable and adaptive platform for front-end, scale-up, scale-out and scale-across networking - accelerating intensive workloads and enabling innovation at the speed of AI.
- Front-End connects users, storage, and applications by accelerating AI and cloud workloads, offloading networking and security tasks from CPUs and GPUs, and helping secure data in motion with line-rate encryption and stateful firewall protection. AMD programmable P4-based engines power our Pensando™ NICs and allow incredible flexibility in offloading concurrent services – SDN, storage, security – while keeping full line-rate performance.
- Scale up tightly couples hundreds of GPUs in a cluster. AMD Instinct™ MI450 Series GPUs can deliver up to 3.6 TB/s of bandwidth per GPU, and using the UALink™ protocol, AMD enables coherent GPU-to-GPU communication across the scale-up fabric.
- Scale out links hundreds of thousands of GPUs to work as one. AMD Pensando AI NICs are Ultra Ethernet Consortium (UEC) ready, delivering the performance and reliability needed for large-scale AI workloads. These NICs feature programmable architectures that evolve rapidly to reduce switch costs, provide advanced reliability mechanisms, and deliver accelerator offloads that enhance workload performance.
- Scale across federates data centers for giga-scale performance, with intelligent traffic management, dynamic load balancing, and an open systems approach that maintains line-rate communication between distributed environments.
From Exascale to Enterprise AI: A Unified Compute Platform
Stitching it all together, we showcased the “Helios” rack-scale platform that brings together AMD EPYC CPUs, AMD Instinct GPUs, AMD Pensando advanced networking and AMD ROCm software to deliver performance, efficiency and scalability for large-scale AI infrastructure. “Helios” is expected to be available in Q3 2026. We are delivering solutions on an annual cadence, with our next generation AI rack-scale platform scheduled to launch in 2027, powered by AMD EPYC “Verano” CPUs, AMD Instinct MI500 Series GPUs and AMD Pensando “Vulcano” networking.
Advancing the AI Factory of the Future
AMD continues to drive the technologies that make the world’s most advanced computing possible. By unifying compute, acceleration, networking, and software into a cohesive platform, AMD is redefining performance and efficiency for the data centers powering the AI revolution.
With proven execution, deep ecosystem support and a clear product roadmap, AMD is positioned to lead the next era of the data center and AI infrastructure.
Cautionary Statement
This blog contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products, including “Helios” rack-scale platform; AMD Instinct™ MI400 series accelerators and MI500 series accelerators; and AMD being positioned to lead the next era of data center and AI infrastructure, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this presentation are based on current beliefs, assumptions and expectations, speak only as of the date of this presentation and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q.
AMD does not assume, and hereby disclaims, any obligation to update forward-looking statements made in this presentation, except as may be required by law.
Footnotes
- EPYC-055A - Mercury Research Sell-In Revenue Shipment Estimates, Q2 2025. Revenue share of 41.0%, unit share of 27.3%.
- EPYC-058 - Top 100 U.S. companies by revenue according to 2025 Fortune 500 list as of June 2, 2025. https://fortune.com/ranking/fortune500/. “Fortune 100” refers to the top 20% ranked companies in the 2025 Fortune 500 list, published in June 2025. From Fortune Magazine. ©2025 Fortune Media IP Limited. All rights reserved. Used under license. Fortune and Fortune Media IP Limited are not affiliated with, and do not endorse products or services of, Advanced Micro Devices, Inc.
- 9xx5-142: AMD testing as of 04/02/2025. The detailed results show the average uplift of the performance metric (ns/day) for a 2P 192-core AMD EPYC™ 9965 powered reference system compared to a 2P Intel® Xeon® 6980P powered production system running select tests on open-source GROMACS 2023.1. Uplifts normalized to the Intel® Xeon® 6980P for each benchmark: gmx_water1536K_PME ~1.76x; benchPEP ~1.83x.
  Intel system configuration — CPU: 2P Intel® Xeon® 6980P (256 total cores); Memory: 24x 96 GB DDR5-6400; Storage: SOLIDIGM SBFPF2BU076T 7.68 TB NVMe; BIOS options: SMT=OFF, SNC=3, HPC Workload Profile; OS: RHEL 9.4 (kernel 5.14.0-427.16.1.el9_4.x86_64); Kernel options: BOOT_IMAGE=(hd1,gpt2)/vmlinuz-5.14.0-427.16.1.el9_4.x86_64 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M rhgb mitigations=off tsc=nowatchdog nmi_watchdog=0 intel_pstate=disable processor.max_cstate=1 intel_idle.max_cstate=0 iommu=pt; Runtime options: cpupower idle-set -d 2; cpupower frequency-set -g performance; echo 3 > /proc/sys/vm/drop_caches; echo 0 > /proc/sys/kernel/nmi_watchdog; echo 0 > /proc/sys/kernel/numa_balancing; echo 0 > /proc/sys/kernel/randomize_va_space; echo 'always' > /sys/kernel/mm/transparent_hugepage/enabled; echo 'always' > /sys/kernel/mm/transparent_hugepage/defrag.
  AMD system configuration — CPU: 2P 192-core AMD EPYC™ 9965 (384 total cores); Memory: 24x 64 GB DDR5-6400; Storage: SAMSUNG MZWLO3T8HCLS-00A07 3.84 TB NVMe; Platform and BIOS: RVOT1000C; BIOS options: SMT=Off, NPS=4, Power Determinism Mode; OS: RHEL 9.4 (kernel 5.14.0-427.16.1.el9_4.x86_64); Kernel options: amd_iommu=on iommu=pt mitigations=off tsc=nowatchdog nmi_watchdog=0; Runtime options: same as the Intel system.
  Results may vary based on system configurations, software versions, and BIOS settings.
- 9xx5TCO-005: This scenario contains many assumptions and estimates and, while based on AMD internal research and best approximations, should be considered an example for information purposes only, and not used as a basis for decision making over actual testing. The AMD Server & Greenhouse Gas Emissions TCO (total cost of ownership) Estimator Tool - version 1.3, compares the selected AMD EPYC™ and Intel® Xeon® CPU based server solutions required to deliver a TOTAL_PERFORMANCE of 391000 units of SPECrate®2017_int_base performance as of November 21, 2024. This estimation compares upgrading from a legacy 2P Intel Xeon 28 core Platinum_8280 based server with a score of 391 (https://spec.org/cpu2017/results/res2020q3/cpu2017-20200915-23984.pdf ) versus 2P EPYC 9965 (192C) powered server with a score of 3100 (https://spec.org/cpu2017/results/res2024q4/cpu2017-20241004-44979.pdf) and compared to a 2P Intel Xeon 64 core Platinum_8592+ (64C) based server with a SPECrate2017_int_base score of 1130, https://spec.org/cpu2017/results/res2023q4/cpu2017-20231127-40064.pdf. For additional details, see https://www.amd.com/en/claims/epyc.html#q=9xx5TCO-005. SPEC®, SPEC CPU®, and SPECpower® are registered trademarks of the Standard Performance Evaluation Corporation. See www.spec.org for more information. Intel CPU specifications at https://ark.intel.com/. FOR USE ONLY WHEN ENVIRONMENTAL DATA IS CITED: Environmental impact estimates made leveraging this data, using the Country / Region specific electricity factors from Country Specific Electricity Factors - 2024, and the United States Environmental Protection Agency Greenhouse Gas Equivalencies Calculator.