A Note from Darren Grasby
Partners,
As I reflect on 2025, what stands out the most is not just the scale of activity in the market, but the way our teams have come together to turn that momentum into real outcomes for you, our customers. Thank you for the partnership, trust, and commitment that you bring every day.
This has been a year where AI, high-performance computing, and modern client platforms moved from promising concepts to essential tools for how organizations operate and compete. We see these technologies creating tangible impact across many industries, and that impact is being driven by how you bring AMD solutions to life for our customers.
None of this progress happens alone. Your expertise, execution, and shared belief in what’s possible continue to push us forward. We’re grateful for the collaboration and are excited about the opportunities ahead as we help customers innovate with speed, confidence, and purpose.
A Look at Pivotal 2025 Moments
We remain deeply committed to partnering closely with you, delivering more for our customers, innovating with speed, and moving forward together toward shared success.
A few moments this year truly stood out as examples of how we win together:
- We announced a massive multi-year partnership with OpenAI, delivering 6 gigawatts of AMD Instinct™ GPUs beginning next year.
- At our Advancing AI event, we unveiled that Oracle Cloud Infrastructure (OCI) would be the first hyperscaler to adopt and deploy the new AMD Instinct™ MI355X GPUs.
- Also powered by AMD Instinct™ MI355X GPUs and announced in partnership with OCI, HPE, and ORNL, we introduced the country’s first dedicated AI Factory supercomputer that will expand the Department of Energy’s AI leadership and accelerate progress across areas including AI, energy research, materials, medicine, and advanced manufacturing.
- We announced that HPE will be among the first OEMs to adopt the AMD Instinct™ GPU “Helios” AI rack solution—pairing it with purpose-built HPE Juniper Networking switches, developed with Broadcom, to deliver high-bandwidth, low-latency performance for large-scale AI clusters. We also introduced “Herder,” a new supercomputer built with HPE and powered by AMD Instinct™ MI430X GPUs and next-generation AMD EPYC™ server CPUs to support advanced HPC and sovereign AI research across Europe.
- We continue to deepen collaboration with key partners such as Microsoft and Meta, optimizing AMD technologies for new Microsoft Copilot+ AI experiences like Cocreator, while advancing infrastructure commitments and AI development initiatives together.
- We introduced the AMD Ryzen™ 9000X3D processors and AMD Ryzen™ Threadripper™ 9000 Series desktop processors, the AMD Ryzen™ 9000HX processors and Ryzen™ Z2 Series mobile processors, with a focus on giving people more capability to work, play, and create on their terms.
- We expanded our graphics portfolio with the AMD Radeon™ RX 9000 Series GPUs, AMD Instinct™ MI350 Series GPUs, and AMD Radeon™ AI PRO R9700 GPUs. These platforms enable customers across a wide range of segments to take meaningful steps forward in performance, efficiency, and AI readiness.
- In the data center, the launch of the AMD EPYC™ 4005 Series server CPUs gave mid-market customers a new option that balances performance with efficiency and affordability. With these configurations, customers can reduce their data center footprint significantly compared to competitive offerings, helping them lower cost and energy use while scaling their environments.1
- We unveiled our most extensive commercial notebook lineup to date, offering unprecedented choice and versatility for businesses worldwide.
- We introduced the next generation of Copilot+ PCs powered by AMD Ryzen™ AI 300 Series processors in over 150 designs.
- The AMD Radeon™ RX 9070 XT GPUs delivered one of our strongest graphics launches to date, reflecting clear customer demand and tight execution across the team.
- The AMD Ryzen™ AI Max 300 Series processors earned positive recognition from press and industry evaluators for advancing AI and edge computing capabilities.
Advancing Customer Experiences Through Integrated Solutions
Our hardware only reaches its full potential because of the work our software teams do alongside it. Their efforts continue to open new possibilities for customers and help ensure every processor, graphics card, and accelerator delivers meaningful performance where it matters.
In the consumer space, we introduced AMD FidelityFX™ Super Resolution 4 and, more recently, AMD FSR™ “Redstone” technology. These machine-learning-powered technologies bring stronger performance and improved visual fidelity to AMD RDNA™ 4 graphics, giving gamers more flexibility and helping them get the most from the systems they already have.
For developers, we expanded our AI software capabilities with the introduction of AMD ROCm™ 7 software. This release provides a significant performance lift over previous generations and supports the latest models, algorithms, and hardware.2 Our goal with ROCm 7 is simple: make it easier and faster for developers to build, scale, and innovate with AI.
A Bright Year Ahead
As we look ahead to 2026, strengthening our partnerships remains a priority for me and for everyone at AMD. Our focus is simple: to support your growth and help you move quickly as new opportunities emerge.
To do that, we’re introducing a set of initiatives throughout next year—expanded training and certifications, updated rewards, and enhanced channel tools for our commercial partners. We’re also putting additional attention on areas where mid-market customers are seeing the most demand, so you have what you need to scale with confidence. These investments are the start of a broader effort, and we’ll continue to build on them as we listen and learn together.
Wishing you, your teams, and your families a wonderful holiday season. I’m grateful for the collaboration this year and look forward to what we will accomplish together in 2026.
Regards,
Darren Grasby
Executive Vice President, Chief Sales Officer, and President, AMD EMEA
AMD Arena
Enhance your AMD product knowledge with training on AMD Ryzen™ PRO, AMD EPYC™, AMD Instinct™, and more.
Subscribe
Get monthly updates on AMD’s latest products, training resources, and Meet the Experts webinars.
Related Articles
Related Training Courses
Related Webinars
Footnotes
- 9xx5TCO-005: This scenario contains many assumptions and estimates and, while based on AMD internal research and best approximations, should be considered an example for informational purposes only, and not used as a basis for decision making in place of actual testing. The AMD Server & Greenhouse Gas Emissions TCO (total cost of ownership) Estimator Tool - version 1.3, compares the selected AMD EPYC™ and Intel® Xeon® CPU-based server solutions required to deliver a TOTAL_PERFORMANCE of 391000 units of SPECrate®2017_int_base performance as of November 21, 2024. This estimation compares upgrading from a legacy 2P Intel Xeon 28-core Platinum 8280 based server with a score of 391 (https://spec.org/cpu2017/results/res2020q3/cpu2017-20200915-23984.pdf) versus a 2P EPYC 9965 (192C) powered server with a score of 3100 (https://spec.org/cpu2017/results/res2024q4/cpu2017-20241004-44979.pdf). Environmental impact estimates were made using this data, the country/region-specific electricity factors from Country Specific Electricity Factors - 2024, and the United States Environmental Protection Agency Greenhouse Gas Equivalencies Calculator.
- MI300-080: Testing by AMD Performance Labs as of May 15, 2025, measuring the inference performance in tokens per second (TPS) of AMD ROCm 6.x software, vLLM 0.3.3 vs. AMD ROCm 7.0 preview version software, vLLM 0.8.5 on a system with (8) AMD Instinct MI300X GPUs running Llama 3.1-70B (TP2), Qwen 72B (TP2), and Deepseek-R1 (FP16) models with batch sizes of 1-256 and sequence lengths of 128-204. Stated performance uplift is expressed as the average TPS over the (3) LLMs tested.
Hardware Configuration
1P AMD EPYC™ 9534 CPU server with 8x AMD Instinct™ MI300X (192GB, 750W) GPUs, Supermicro AS-8125GS-TNMR2, NPS1 (1 NUMA per socket), 1.5 TiB (24 DIMMs, 4800 MT/s memory, 64 GiB/DIMM), 4x 3.49TB Micron 7450 storage, BIOS version: 1.8
Software Configuration(s)
Ubuntu 22.04 LTS with Linux kernel 5.15.0-119-generic
Qwen 72B and Llama 3.1-70B: ROCm™ 7.0 preview version software, PyTorch 2.7.0
Deepseek R-1: ROCm 7.0 preview version software, SGLang 0.4.6, PyTorch 2.6.0
vs.
Qwen 72B and Llama 3.1-70B: ROCm 6.x GA software, PyTorch 2.7.0 and 2.1.1, respectively
Deepseek R-1: ROCm 6.x GA software, SGLang 0.4.1, PyTorch 2.5.0
Server manufacturers may vary configurations, yielding different results. Performance may vary based on configuration, software, vLLM version, and the use of the latest drivers and optimizations.
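The server consolidation claim in footnote 9xx5TCO-005 follows from simple arithmetic on the published SPECrate®2017_int_base scores. The full TCO and emissions methodology is internal to the AMD estimator tool, so this is only a minimal sketch of the implied server-count math, not the tool itself:

```python
import math

# Figures taken directly from footnote 9xx5TCO-005; everything beyond the
# stated SPECrate(R)2017_int_base scores (power, cost, emissions) is handled
# by AMD's internal estimator tool and is not reproduced here.
TOTAL_PERFORMANCE = 391_000  # target units of SPECrate2017_int_base
LEGACY_SCORE = 391           # 2P Intel Xeon Platinum 8280 (28C) server
EPYC_SCORE = 3_100           # 2P AMD EPYC 9965 (192C) server

# Number of servers of each type needed to reach the performance target.
legacy_servers = math.ceil(TOTAL_PERFORMANCE / LEGACY_SCORE)
epyc_servers = math.ceil(TOTAL_PERFORMANCE / EPYC_SCORE)

reduction = 1 - epyc_servers / legacy_servers
print(f"{legacy_servers} legacy servers vs {epyc_servers} EPYC servers "
      f"({reduction:.0%} fewer)")
# → 1000 legacy servers vs 127 EPYC servers (87% fewer)
```

This is the arithmetic behind the footprint-reduction language in the letter: roughly an 8x consolidation at the stated scores, before any of the tool's cost or emissions adjustments.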