Engineering the Future of AI: How AMD Interconnects, Infinity Fabric, and Advanced Packaging Drive Scalable Compute
Nov 11, 2025
News Snapshot:
- AMD’s architecture-first strategy combines high-speed SerDes interconnects, AMD Infinity Fabric™ system-wide interconnect, and advanced chiplet packaging to deliver scalable AI performance from single nodes to full rack-scale systems.
- Introduction of fifth-generation Infinity Fabric advances AMD system design leadership, connecting CPUs, GPUs, and accelerators from node to rack scale and forming the backbone of next-generation “Helios” rack deployments.
- Ongoing R&D investments in silicon photonics and heterogeneous packaging are paving the way for future optical connectivity, higher compute density, and improved energy efficiency across the expanding AMD AI portfolio.
At AMD Financial Analyst Day 2025, we shared how our long-term strategy, rooted in sustained R&D investment and breakthrough engineering, has positioned AMD for leadership across the most demanding AI workloads.
Central to our progress is the convergence of three critical elements that serve as the connective tissue allowing us to integrate, scale, and optimize across every layer of the AI stack: high-speed SerDes interconnects, our scalable Infinity Fabric architecture, and advanced chiplet packaging. Together, these elements form the high-performance backbone that enables our accelerated computing platforms to scale efficiently from single nodes to full rack-scale AI systems.
Building the Foundations of Scalable AI Compute
To drive meaningful AI impact, systems need to scale efficiently across entire data centers. Our holistic design approach makes this possible by co-optimizing silicon, packaging, and interconnect technologies to move data faster and smarter.
- High-Speed SerDes: AMD has extensive experience in high-speed interconnects that move data efficiently across our chips and systems. Our roadmap includes advanced SerDes for EPYC™ CPUs and Instinct™ GPUs, supporting PCIe® 6.0 and, in the future, PCIe® 7.0, along with next-generation data fabrics that extend today’s copper interconnects and pave the way for future optical connectivity. These capabilities make large-scale AI and HPC data movement possible while improving efficiency and total cost of ownership.
Electrical I/O, however, is approaching a power-performance wall as the physical limits of copper interconnects constrain both bandwidth density and energy efficiency. To break through these barriers, AMD has been investing in silicon photonics R&D since 2017, developing optical I/O technology that has the potential to deliver orders-of-magnitude improvements in bandwidth and energy efficiency. We’re working across the ecosystem to enable this technology and believe it can unlock new levels of scalability for future AMD AI and HPC platforms.
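To make the power-performance wall concrete, here is a back-of-envelope sketch. The per-lane transfer rates are the public PCI-SIG figures for PCIe 6.0 and 7.0; the energy-per-bit values are hypothetical ballpark assumptions for long-reach electrical versus optical I/O, not AMD specifications.

```python
# Illustrative numbers only: raw link bandwidth and the power cost of
# moving bits. Encoding/protocol overhead is ignored, so real delivered
# throughput is lower than the raw figures below.

def x16_bandwidth_gbps(gtps_per_lane: float) -> float:
    """Raw unidirectional GB/s for an x16 link (1 bit per transfer per lane)."""
    return gtps_per_lane * 16 / 8

def io_power_watts(bandwidth_tbps: float, energy_pj_per_bit: float) -> float:
    """Power spent purely on moving bits at a given aggregate bandwidth."""
    return bandwidth_tbps * 1e12 * energy_pj_per_bit * 1e-12

print(f"PCIe 6.0 x16: ~{x16_bandwidth_gbps(64):.0f} GB/s per direction")
print(f"PCIe 7.0 x16: ~{x16_bandwidth_gbps(128):.0f} GB/s per direction")

# At a hypothetical 100 Tb/s of aggregate scale-up bandwidth per node,
# energy per bit dominates the I/O power budget:
print(f"copper  @ 5.0 pJ/bit (assumed): {io_power_watts(100, 5.0):.0f} W")
print(f"optical @ 0.5 pJ/bit (assumed): {io_power_watts(100, 0.5):.0f} W")
```

The point of the sketch: at rack-scale aggregate bandwidths, even a single-digit pJ/bit difference translates into hundreds of watts per node spent just on data movement, which is why sub-pJ/bit optical I/O is a research focus.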
- Infinity Fabric: What began as a CPU interconnect has evolved into a cohesive system fabric that unifies compute across heterogeneous and distributed computing systems at scale. Infinity Fabric has advanced from Gen3, which powered the first exascale supercomputer, Frontier, to Gen4, which enabled the world’s fastest supercomputer, El Capitan, and introduced true heterogeneous, chiplet-based design. Now, with Gen5, Infinity Fabric underpins a leadership product portfolio across scale-in, scale-up, and scale-out systems.
Each generation has delivered significantly higher bandwidth and greater in-package and system capabilities, enabling AI and HPC workloads to scale seamlessly while maintaining an open, standards-driven approach. As we introduce our fifth generation, Infinity Fabric serves as the backbone of upcoming AMD Instinct™ MI450 series systems with the “Helios” rack, extending our leadership from node to rack scale.
- Advanced Packaging: AMD pioneered multi-die architecture with early chiplet and 2.5D packaging innovations. While others stayed with monolithic design, AMD bet on modular scalability as the path forward, and that decision has powered our execution through to today.
We’ve built on that leadership through continuous innovation in 2.5D, 3D, and hybrid bonding. Our investments in heterogeneous packaging have enabled us to outpace Moore’s Law in scaling transistor counts across our next-generation Instinct accelerators, which lead the industry in compute density and bandwidth. Our modular designs allow performance and efficiency to be tailored per workload, delivering scalability that’s as elegant as it is powerful.
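The economics behind the chiplet bet can be sketched with the classic Poisson die-yield model: yield falls exponentially with die area, so several small dies are far cheaper to manufacture than one large one. The defect density and die sizes below are hypothetical assumptions for illustration, not figures for any actual AMD product.

```python
import math

def die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson model: probability a die of the given area has zero defects."""
    return math.exp(-area_cm2 * defects_per_cm2)

D0 = 0.2  # assumed defect density, defects per cm^2

monolithic = die_yield(8.0, D0)    # one large 800 mm^2 die
per_chiplet = die_yield(1.0, D0)   # one 100 mm^2 chiplet

print(f"800 mm^2 monolithic die yield: {monolithic:.2f}")   # ~0.20
print(f"100 mm^2 chiplet yield:        {per_chiplet:.2f}")  # ~0.82

# Eight chiplets cover the same silicon area, but because each die is
# tested before packaging ("known good die"), defective chiplets are
# discarded cheaply and cost per working product tracks per-chiplet yield.
```

Under these assumed numbers, four out of five large monolithic dies would be scrapped, while four out of five chiplets are good, which is the core manufacturing advantage that modular design trades against added packaging and interconnect cost.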
Investing for the Future
The interconnects between our design teams are just as important as those between our chips. Over the last decade, we’ve redefined how we engineer, accelerating time-to-market while maximizing IP leverage across products and platforms. Our consistent focus on R&D, co-optimized design cycles, and shared IP across product lines has built a durable model for a competitive roadmap, rapid innovation, and consistent execution.
AMD now has the broadest compute portfolio in the industry, spanning CPUs, GPUs, NPUs, DPUs, and FPGAs. This breadth allows us to meet customer needs across the full spectrum, from cloud and data center to edge and endpoint, and gives our partners confidence that they can scale with AMD no matter the workload.
AI is about efficiency and time-to-value, not just peak performance. Our leadership in chiplet design, advanced packaging, and high-speed interconnects enables faster inferencing and stronger energy economics across our platforms.
The race to scale generative and agentic AI workloads is only accelerating. By staying laser-focused on execution, co-design, and customer success, we’ve rapidly grown our data center GPU revenue, and we’re swiftly moving toward rack-scale deployments with “Helios” expected in 2026.
Compute density and interconnect efficiency are critical to the next era of AI infrastructure. As we advance our roadmaps and expand open initiatives across the ecosystem, AMD’s architecture-first approach will continue to deliver leadership performance, efficiency, and scalability to power the next era of AI.
CAUTIONARY STATEMENT:
This blog contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products, including “Helios” rack-scale platform; AMD’s long-term strategy positioning AMD for leadership across the most demanding AI workloads; and AMD’s ability to power the next era of AI, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this presentation are based on current beliefs, assumptions and expectations, speak only as of the date of this presentation and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q.
AMD does not assume, and hereby disclaims, any obligation to update forward-looking statements made in this presentation, except as may be required by law.