AMD “Helios”: Advancing Openness in AI Infrastructure Built on Meta’s 2025 OCP Open Rack for AI Design
Oct 14, 2025
Introduction
Today at the Open Compute Project (OCP) Global Summit in San Jose, California, Meta introduced specifications for a new Open Rack for AI featuring an Open Rack Wide (ORW) form factor — marking a major leap forward in open infrastructure innovation.
Designed to meet the realities of AI-scale data centers, the ORW specification defines an open, double-wide rack optimized for the power, cooling, and serviceability demands of next-generation AI systems. It represents a foundational shift toward standardized, interoperable, and scalable data center design across the industry.
AMD is proud to align with Meta and the Open Compute Project community in advancing this vision through “Helios” — the most advanced rack-scale reference system from AMD, built fully on the ORW open standards. “Helios” extends the AMD philosophy of openness from silicon to system to rack to large-scale clusters, bringing to life the open hardware principles that underpin the ORW specification.
“Helios”: Turning Open Standards into Rack-Scale Reality
The AMD “Helios” AI rack is built on the blueprint of open design submitted by Meta at OCP 2025 to enable optimized, deployable performance across AI data centers. Built around the next-generation AMD Instinct™ MI450 Series GPUs, “Helios” redefines what open, rack-scale AI infrastructure can achieve.
Powered by the AMD CDNA™ architecture, each MI450 Series GPU delivers up to 432 GB of HBM4 memory and 19.6 TB/s of memory bandwidth, providing industry-leading capacity and bandwidth for data-hungry AI models. At rack scale, a “Helios” system with 72 MI450 Series GPUs delivers up to 1.4 exaFLOPS of FP8 and 2.9 exaFLOPS of FP4 performance, with 31 TB of total HBM4 memory and 1.4 PB/s of aggregate bandwidth — a generational leap that enables trillion-parameter training and large-scale AI inference.
“Helios” also features up to 260 TB/s of scale-up interconnect bandwidth and 43 TB/s of Ethernet-based scale-out bandwidth, helping ensure seamless communication across GPUs, nodes, and racks. “Helios” delivers up to 36× higher performance compared to previous generations¹, while offering 50% more memory capacity than NVIDIA’s Vera Rubin system.
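The rack-level figures above follow directly from the per-GPU specifications. A quick sketch (all inputs are figures quoted in this post; treating the fabric bandwidth as evenly split across GPUs is an illustrative assumption, not a stated spec):

```python
# Sanity-check: how the per-GPU MI450 Series figures quoted above aggregate
# to the 72-GPU "Helios" rack totals. All inputs are from this post; the
# even per-GPU split of fabric bandwidth is an illustrative assumption.

gpus = 72
hbm_per_gpu_gb = 432         # up to 432 GB HBM4 per GPU
hbm_bw_per_gpu_tbs = 19.6    # up to 19.6 TB/s memory bandwidth per GPU

total_hbm_tb = gpus * hbm_per_gpu_gb / 1000       # GB -> TB (decimal units)
total_bw_pbs = gpus * hbm_bw_per_gpu_tbs / 1000   # TB/s -> PB/s

fp4_rack_ef = 2.9                                 # quoted rack-level FP4 peak
fp4_per_gpu_pf = fp4_rack_ef * 1000 / gpus        # implied per-GPU FP4 peak

scale_up_per_gpu_tbs = 260 / gpus                 # illustrative even split
scale_out_per_gpu_tbs = 43 / gpus                 # illustrative even split

print(f"HBM4 total:          {total_hbm_tb:.1f} TB")      # ~31 TB
print(f"HBM bandwidth total: {total_bw_pbs:.2f} PB/s")    # ~1.4 PB/s
print(f"FP4 per GPU:         {fp4_per_gpu_pf:.1f} PFLOPS")
print(f"Scale-up per GPU:    {scale_up_per_gpu_tbs:.1f} TB/s")
```

The computed totals (about 31.1 TB of HBM4 and roughly 1.41 PB/s of aggregate bandwidth) match the rounded 31 TB and 1.4 PB/s figures quoted above.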
This is the first rack-scale design from AMD engineered specifically for frontier AI workloads, providing hyperscalers and enterprises a future-proof, open-standards-based platform that unites power, flexibility, and interoperability.
Rack-scale systems like “Helios” are essential for the next generation of AI, where performance depends on efficient communication across thousands of accelerators. AMD leadership in open standards such as the Open Compute Project (OCP), Ultra Accelerator Link (UALink™), and Ultra Ethernet Consortium (UEC) helps ensure that this scaling happens through industry collaboration — enabling open, high-performance fabrics for both scale-up and scale-out AI clusters. Together, these efforts define the path toward interoperable, energy-efficient infrastructure built for the AI era.
Driving Open Innovation Across the Ecosystem
The “Helios” rack is more than a hardware reference — it’s a collaboration blueprint for the AI ecosystem.
Built on the ORW specification submitted by Meta to OCP, “Helios” enables OEM and ODM partners to:
- Adopt and extend the “Helios” reference design, accelerating time-to-market for new AI systems.
- Integrate AMD Instinct™ GPUs, EPYC™ CPUs, and Pensando™ DPUs with their own differentiated solutions.
- Participate in an open, standards-based ecosystem that drives interoperability, scalability, and long-term innovation.
By aligning around the ORW specification, the industry gains a shared, open foundation for rack-scale AI deployments — reducing fragmentation and removing the inefficiencies of proprietary, one-off designs.
Purpose-Built for Modern Data Center Realities
AI data centers are evolving rapidly, demanding architectures that deliver greater performance, efficiency, and serviceability at scale. “Helios” is purpose-built to meet these needs with innovations that simplify deployment, improve manageability, and sustain performance in dense AI environments.
- Higher scale-out throughput and HBM bandwidth compared to previous generations enable faster model training and inference.
- Double-wide layout reduces weight density and improves serviceability.
- Standards-based Ethernet scale-out ensures multipath resiliency and seamless interoperability.
- Backside quick-disconnect liquid cooling provides sustained, efficient thermal performance at high density.
Together, these features make the AMD “Helios” Rack a deployable, production-ready system for customers scaling to exascale AI — delivering breakthrough performance with operational efficiency and sustainability.
Enabling Openness in the AI Infrastructure Revolution
With “Helios”, AMD extends its open hardware and software leadership to the rack level — uniting silicon innovation with open, industry-driven design principles.
For OEMs and ODMs, “Helios” provides a ready-made, OCP-aligned system to build differentiated AI infrastructure.
For customers, it means faster deployment, lower risk, and more flexibility in how they scale compute for AI, HPC, and sovereign initiatives.
On Track for 2026
“Helios” is currently being released as a reference design to OEM and ODM partners, with volume deployment expected in 2026. As an open, OCP-aligned design, “Helios” creates new opportunities for the ecosystem to collaborate on the future of AI infrastructure — one built on openness, interoperability, and shared innovation.
Built on the ORW specifications submitted by Meta to the Open Compute Project, “Helios” embodies AMD’s commitment to open, collaborative innovation — shaping the next phase of AI infrastructure and proving that when the industry builds together, everyone accelerates.
Footnotes
1. Based on engineering projections by AMD Performance Labs in September 2025, to estimate the peak theoretical precision performance of seventy-two (72) AMD Instinct™ MI450 Series GPUs (Rack) using the FP4 dense Matrix datatype vs. an 8x GPU AMD Instinct MI355X platform using the FP4 dense Matrix datatype. Results subject to change when products are released in market. MI350-047A