Innovating Since the Beginning

Since its founding in 1969, AMD has defined a legacy of innovation by delivering powerful new technologies while remaining focused on sustainability.

Fast forward to now, and the teams here at AMD are still hard at work delivering cutting-edge innovations that don’t compromise on performance or on the broader impact those innovations have on the world. This year marks 30 years of corporate responsibility reporting—underscoring a track record of innovation committed to sustainable practices. Read on to discover the sustainability work AMD has focused on in recent years, and how partners like you help pave the way for a better future.

The Revolutionary Shift in Computing Demand

The rise of data centers and the unprecedented demand for AI compute both mean that customers are using more power and hardware than ever before to drive modern computing. This is driving a need for more efficient, sustainable solutions that can meet today’s challenges—and tomorrow’s—in the digital environment, without negatively impacting our own. Through energy-efficient innovation, transparent practices, and dedication to corporate responsibility, AMD supports a more resilient future for both society and the planet. Here’s how.

How AMD is Helping the Planet

AMD has long focused on improving its overall impact on the planet, and in recent years has achieved major milestones while setting bold new targets for 2030:

  • AMD set a goal to reduce its operational (Scope 1 & 2) greenhouse gas (GHG) emissions by 50% from 2020 to 2030. The company has achieved a 28% reduction in GHG emissions, despite a 33% increase in global electricity usage (2020-2024).1
  • In 2024, AMD sourced 118 GWh of renewable electricity, representing half of its global electricity usage. For example, the AMD campus in San Jose generates onsite solar power with a 1.4 MW solar system and a further 600 kW rooftop installation.
  • AMD launched new initiatives aiming to increase recycled content in its products, thereby advancing its circular economy and decarbonization strategies.
  • In 2024, 87% of AMD Manufacturing Suppliers2 had public GHG emissions goals and 74% sourced renewable energy.3,4 
  • The total amount of water used for manufacturing AMD wafers decreased by 12% between 2023 and 2024, a reduction of 1.5 billion liters of water. 

Beyond resources, AMD is designing its products to deliver major gains in energy efficiency; as of mid-2025, AMD has exceeded its 30x25 goal,5 achieving a 38x increase in energy efficiency over the base system using a current configuration of four AMD Instinct™ MI355X GPUs and one 5th Gen AMD EPYC™ CPU.6 That’s a 97% reduction in energy usage for the same performance.
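
To see how a 38x efficiency gain translates into that 97% figure, here is a minimal sketch of the arithmetic, assuming only that energy use for a fixed workload scales inversely with the efficiency multiplier (the 38x and 30x figures themselves come from the AMD methodology described in footnotes 5 and 6):

```python
# Minimal sketch: energy for a fixed workload scales as 1 / efficiency gain.
# The 38x multiplier is the AMD 30x25 result cited above; everything else here
# is simple arithmetic, not AMD's measurement methodology.

def energy_reduction(efficiency_gain: float) -> float:
    """Fractional reduction in energy for the same amount of work."""
    return 1.0 - 1.0 / efficiency_gain

print(f"{energy_reduction(38.0):.1%}")  # ~97.4%, reported as a 97% reduction
print(f"{energy_reduction(30.0):.1%}")  # the original 30x goal implies ~96.7%
```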

With more customers looking to virtualization to maximize how they invest their resources, AMD is paving the way for even greater efficiencies there too; 11 servers powered by dual AMD EPYC™ 9654 CPUs can support up to 2,000 virtual machines (VMs), a load that would require 17 equivalent competitor servers. That’s 35% fewer servers, 29% energy savings over a three-year period, and carbon savings equal to 38 acres of U.S. forests each year.7
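
As a quick check on the consolidation math, the sketch below derives the 35% server reduction from the cited server counts; the energy and carbon figures come from the AMD TCO estimation tool referenced in footnote 7 and are not derivable from this snippet.

```python
# Sketch of the consolidation arithmetic only. The server counts (11 vs. 17) are
# the figures cited above; the 29% energy savings and forest-acre equivalence are
# outputs of the AMD TCO tool (footnote 7), not computed here.

amd_servers = 11         # dual AMD EPYC 9654 servers supporting ~2,000 VMs
competitor_servers = 17  # competitor servers cited for the same VM load

reduction = 1.0 - amd_servers / competitor_servers
print(f"Server reduction: {reduction:.0%}")  # ~35%
```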

As AI continues to scale, and as we move toward true end-to-end design of full AI systems, it’s more important than ever for us to continue our leadership in energy-efficient design work. That’s why we set a bold new target: a 20x improvement in rack-scale energy efficiency for AI training and inference by 2030, from a 2024 base year.8 Our new 2030 rack-level energy efficiency goal has major implications for equipment consolidation. Using training of a typical AI model in 2025 as a benchmark, the gains could enable the following (see the sketch after this list):9

  • Rack consolidation from more than 275 racks to <1 fully utilized rack
  • More than a 95% reduction in operational electricity use
  • Carbon emissions for model training reduced from approximately 3,000 to approximately 100 metric tons of CO2
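
These projections follow from the methodology in footnote 9. The short sketch below is a rough reproduction, not AMD’s actual model; it reruns the arithmetic stated there using the published inputs (10^25 training FLOPs, a one-month window, 60% model FLOPs utilization, a 22.656 PFLOPS MI300X-based rack, and IEA grid carbon intensities):

```python
# Rough reproduction of the footnote 9 arithmetic. All inputs are figures stated
# in that footnote; the small gap versus the published ~276-rack figure reflects
# rounding in the source data.

TOTAL_TRAIN_FLOP = 1e25        # typical notable 2025 model (EPOCH AI median)
SECONDS_PER_MONTH = 2.6298e6   # one-month training window
MFU = 0.6                      # model FLOPs utilization
RACK_PFLOPS_2024 = 22.656      # MI300X-based rack

sustained_flops = TOTAL_TRAIN_FLOP / SECONDS_PER_MONTH / MFU
racks_2024 = sustained_flops / (RACK_PFLOPS_2024 * 1e15)
print(f"2024 racks needed: ~{racks_2024:.0f}")  # footnote cites ~276

# Emissions: scenario energy (from the footnote) x IEA grid carbon intensity.
for label, kwh, g_per_kwh in [("2024 baseline", 7e6, 434), ("2030 projection", 3.5e5, 312)]:
    tonnes = kwh * g_per_kwh / 1e6
    print(f"{label}: ~{tonnes:,.0f} tCO2")      # ~3,038 t vs. ~109 t
```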

How AMD is Helping People

Improving the world goes beyond energy savings. It’s also about empowering people, from AMD employees to communities worldwide, and respecting the human rights of the workers who make AMD products. 

Here’s how AMD is making an impact on the people it works with and supports every day:

  • AMD has a goal of benefiting 100 million people through AMD and AMD Foundation philanthropy and partnerships that enable STEM education, scientific research, and the future workforce; between 2021 and 2024, 84.1 million people benefited from the AMD University Program and STEM initiatives.10
  • AMD donated technology to more than 800 universities, research institutions, and non-profit organizations in 2024.
  • AMD awarded additional STEM-related grants to nearly 40 non-profit organizations and schools in 2024.
  • The AMD AI and HPC Fund enables high-impact research and education by providing academic researchers and educators with access to AMD computing technology through donations of on-premises equipment and remote access to AMD clusters. From 2020 to 2024, AMD donated over 30 petaflops of computing capacity with a total market value of more than US$35.3 million.
  • More than 8,100 AMD employees volunteered more than 33,000 hours in 2024, a 43% increase compared to the previous year.
  • AMD launched five new employee mentoring programs, supporting professional growth and workplace culture.
  • In 2024, 61% of employees participated in AMD employee resource groups and/or other AMD inclusion initiatives, up 10% from 2023; the aim is to reach 70% by the end of 2025.
  • AMD is ranked in the top 15% of ICT companies in the KnowTheChain benchmark, a tool that evaluates company efforts to address forced labor and human trafficking in supply chains.

Committed to a Better Future

As demand for computing power continues to rise, AMD has shown that progress doesn’t need to come at the planet’s expense. From surpassing the ambitious 30x25 energy efficiency goal to setting a bold new target of 20x rack-scale energy efficiency by 2030, AMD is proving that innovation and sustainability go hand in hand. AMD technology already powers 60% of the world’s top 20 most energy-efficient supercomputers, accelerates climate and medical research, and helps customers achieve greater performance while reducing their environmental footprint. Recent benchmark data reinforces this leadership: systems powered by AMD EPYC CPUs, such as Frontier and Adastra, deliver record-breaking FLOPS-per-watt performance, and AMD EPYC processors have contributed to more than 500 world-record server benchmarks.

If you’d like to learn more about how AMD and partners like you help to empower people, protect the planet, and accelerate solutions to global challenges, you can find the AMD 2024-25 Corporate Responsibility Report here.

AMD Arena

Enhance your AMD product knowledge with training on AMD Ryzen™ PRO, AMD EPYC™, AMD Instinct™, and more.

Footnotes
  1. Reported data includes Scope 1 and 2 GHG emissions (base year 2020). Based on AMD calculations that are third-party verified (limited level assurance).
  2. “Manufacturing Suppliers” are defined as suppliers that AMD buys from directly and that provide direct materials and/or manufacturing services to AMD.
  3. AMD defines renewable energy as energy from a source that is not depleted when used, such as wind or solar power. AMD does not require a minimum amount of renewable energy to be sourced by Manufacturing Suppliers to be included in the goal. Data is provided by AMD suppliers and has not been independently verified by AMD.
  4. AMD calculations are third-party verified (limited level assurance) based on data supplied by our Manufacturing Suppliers, which is not independently verified by AMD.
  5. Includes AMD high-performance CPU and GPU accelerators used for AI-training and high-performance computing in a 4-Accelerator, CPU-hosted configuration. Goal calculations are based on performance scores as measured by standard performance metrics (HPC: Linpack DGEMM kernel FLOPS with 4k matrix size; AI-training: lower precision training-focused floating-point math GEMM kernels such as FP16 or BF16 FLOPS operating on 4k matrices) divided by the rated power consumption of a representative accelerated compute node, including the CPU host + memory and 4 GPU accelerators.
  6. EPYC-030a: Calculation includes 1) base case kWhr use projections in 2025 conducted with Koomey Analytics based on available research and data that includes segment-specific projected 2025 deployment volumes and data center power utilization effectiveness (PUE) including GPU HPC and machine learning (ML) installations and 2) AMD CPU and GPU node power consumptions incorporating segment-specific utilization (active vs. idle) percentages and multiplied by PUE to determine actual total energy use for calculation of the performance per Watt. 38x is calculated using the following formula: (base case HPC node kWhr use projection in 2025 * AMD 2025 perf/Watt improvement using DGEMM and TEC + base case ML node kWhr use projection in 2025 * AMD 2025 perf/Watt improvement using ML math and TEC) / (base case projected kWhr usage in 2025). For more information, see https://www.amd.com/en/corporate/corporate-responsibility/data-center-sustainability.html.
  7. SP5TCO-036A: As of 05/19/2023 based on AMD Internal analysis using the AMD EPYC™ Server Virtualization & Greenhouse Gas Emission TCO Estimation Tool - version 12.15 estimating the cost and quantity of 2P AMD 96 core EPYC™ 9654 powered server versus 2P Intel® Xeon® 60 core Platinum 8490H based server solutions required to deliver 2000 total virtual machines (VM), requiring 1 core and 8GB of memory per VM for a 3-year period. This includes VMware software license cost of $6,558.32 per socket + one additional software for every 32 CPU core increment in that socket. This scenario contains many assumptions and estimates and, while based on AMD internal research and best approximations, should be considered an example for information purposes only, and not used as a basis for decision making over actual testing. For additional details, see https://www.amd.com/en/legal/claims/epyc.html#q=sp5tco-036&sortCriteria=%40title%20ascending
  8. AMD based advanced racks for AI training/inference in each year (2024 to 2030) based on AMD roadmaps, also examining historical trends to inform rack design choices and technology improvements to align projected goals and historical trends. The 2024 rack is based on the MI300X node, which is comparable to the Nvidia H100 and reflects current common practice in AI deployments in 2024/2025 timeframe. The 2030 rack is based on an AMD system and silicon design expectations for that time frame. In each case, AMD specified components like GPUs, CPUs, DRAM, storage, cooling, and communications, tracking component and defined rack characteristics for power and performance. Calculations do not include power used for cooling air or water supply outside the racks but do include power for fans and pumps internal to the racks. Performance improvements are estimated based on progress in compute output (delivered, sustained, not peak FLOPS), memory (HBM) bandwidth, and network (scale-up) bandwidth, expressed as indices and weighted by the following factors for training and inference.
    Performance breakdown (weighting factors):
                  FLOPS     HBM BW    Scale-up BW
      Training    70.0%     10.0%     20.0%
      Inference   45.0%     32.5%     22.5%
    Performance and power use per rack together imply trends in performance per watt over time for training and inference; indices for progress in training and inference are then weighted 50:50 to get the final estimate of AMD projected progress by 2030 (20x). The performance number assumes continued AI model progress in exploiting lower precision math formats for both training and inference, which results in both an increase in effective FLOPS and a reduction in required bandwidth per FLOP. See https://www.amd.com/en/newsroom/press-releases/2020-6-25-amd-exceeds-six-year-goal-to-deliver-unprecedented.html
  9. AMD estimated the number of racks to train a typical notable AI model based on EPOCH AI data (https://epoch.ai). For this calculation we assume, based on these data, that a typical model takes 10^25 floating point operations to train (based on the median of 2025 data), and that this training takes place over 1 month. FLOPs needed = 10^25 FLOPs/(seconds/month)/Model FLOPs utilization (MFU) = 10^25/(2.6298*10^6)/0.6. Racks = FLOPs needed/(FLOPS/rack in 2024 and 2030). The compute performance estimates from the AMD roadmap suggest that 276 racks would be needed in 2025 to train a typical model over one month using the 2024 MI300X product (assuming 22.656 PFLOPS/rack with 60% MFU), and a 276-fold reduction in the number of racks to train the same model over this six-year period. Electricity use for a MI300X system to completely train a typical 2025 AI model using a 2024 rack is calculated at ~7 GWh, whereas the future 2030 AMD system could train the same model using ~350 MWh, a 95% reduction. AMD then applied carbon intensities per kWh from the International Energy Agency World Energy Outlook 2024 [https://www.iea.org/reports/world-energy-outlook-2024]. IEA’s stated policies case gives carbon intensities for 2023 and 2030. We determined the average annual change in intensity from 2023 to 2030 and applied that to the 2023 intensity to get the 2024 intensity (434 CO2 g/kWh) versus the 2030 intensity (312 CO2 g/kWh). Emissions for the 2024 baseline scenario of 7 GWh x 434 CO2 g/kWh ~= 3000 tonnes of CO2, versus the future 2030 scenario of 350 MWh x 312 CO2 g/kWh ~= 109 tonnes of CO2.
  10. The time period for the Digital Impact goal includes donations made after January 1, 2020 and initiated by December 31, 2025. “Initiated” is defined as AMD and the recipient organization reaching an agreement on an AMD donation, which must be delivered by July 30, 2026. Reported data includes: direct beneficiaries defined as students, faculty or researchers with direct access to AMD-donated technology, funding or volunteers; and indirect beneficiaries defined as individuals with a reasonable likelihood of receiving research data formulated through AMD-donated technology and potentially gaining useful insights or knowledge. AMD conducts annual surveys with recipient organizations to estimate direct beneficiaries, and in the case of the AI & HPC Fund, indirect beneficiaries as well. Based on 3 years of responses (2021-2023), AMD created an economic-based impact assumption to estimate the total number of indirect beneficiaries (not applied to direct beneficiaries) by dividing the total market-value of donations in a given year by the total reported indirect beneficiary values from recipients’ surveys for the same year. The data shows the ratio is 1.08 on average for the 3 years of data used in the model. Therefore, AMD assumes for every US$1m of market-value donated, approximately 1.08 million people will indirectly benefit. AMD also assumes that the annual estimated indirect beneficiaries in year 1 continues to reach additional individuals in year 2 and year 3, but at a reduced rate. The impact depreciation rate assumes year 2 beneficiaries amount to 50% of year 1 estimates, and year 3 beneficiaries amount to 25% of year 1 estimates. AMD goal calculations are third-party verified (limited level assurance) based on data supplied by recipient organizations, which is not independently verified by AMD, and AMD economic-based impact models based on data supplied by recipient organizations. The model mentioned above was extended to data from the AMD University Program (which now includes AI & HPC Fund) for 2023-2024.
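
To make the economic-based impact model above concrete, here is a minimal illustrative sketch; the donation amount used is hypothetical, while the 1.08 ratio and the 50%/25% year-2/year-3 depreciation rates are the figures stated in the footnote:

```python
# Illustrative sketch of the footnote 10 indirect-beneficiary model. The US$10M
# donation value is a hypothetical example; the 1.08 ratio and the 50%/25%
# depreciation assumptions are taken from the footnote text.

BENEFICIARIES_PER_USD_M = 1.08e6   # estimated indirect beneficiaries per US$1M donated
YEAR_WEIGHTS = [1.0, 0.5, 0.25]    # year 1 reach, then 50% and 25% of year 1

def estimated_indirect_beneficiaries(market_value_usd_m: float) -> float:
    """Three-year indirect-beneficiary estimate for a donation of the given market value."""
    year1 = market_value_usd_m * BENEFICIARIES_PER_USD_M
    return sum(year1 * weight for weight in YEAR_WEIGHTS)

print(f"{estimated_indirect_beneficiaries(10.0):,.0f}")  # ~18.9 million over three years
```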

© 2025, Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, EPYC, Instinct, Ryzen, and combinations thereof are trademarks of Advanced Micro Devices, Inc.