Powering Scale-out AI Infrastructure

The AMD Pensando™ Pollara 400 AI NIC is engineered to accelerate applications running across back-end networks, delivering Ethernet speeds of up to 400 Gigabits per second (Gbps).

Built on the proven third-generation, fully hardware-programmable Pensando P4 engine, the AMD Pensando Pollara 400 AI NIC delivers industry-leading performance with the flexibility to be reprogrammed to meet future requirements, helping to maximize infrastructure investments for enterprises, cloud service providers, and researchers.


Industry’s First AI NIC Offering Ultra Ethernet Consortium (UEC) Features

The AMD Pensando™ Pollara 400 AI NIC is the industry's first Ultra Ethernet Consortium (UEC) compatible AI NIC. With its programmability, the AMD AI NIC™ enables customers to select UEC features that bring intelligence to network monitoring and performance tuning. Through the fully programmable P4 engine, customers can upgrade any Pensando Pollara 400 AI NIC to meet new industry standards, including those established by the UEC.

Accelerate AI Performance at Scale

AI Workload Performance

With 400 Gbps GPU-to-GPU communication speeds, the AMD Pensando™ Pollara 400 AI NIC can reduce job completion times when training the largest AI models, deploying the next Gen AI model, or researching cutting-edge advancements, with networking designed to accelerate AI workloads.

Lower Capex

Designed to meet the needs of AI workloads today and tomorrow, the AMD Pensando™ Pollara 400 AI NIC is compatible with an open ecosystem, allowing customers to lower capex while remaining flexible for future infrastructure scaling.

Intelligent Network Monitoring

Save time on traditional network monitoring and performance-tuning tasks. The AMD Pensando™ Pollara 400 AI NIC load balances traffic while monitoring network metrics, allowing teams to proactively identify and address potential network issues before they escalate into critical disruptions.

Intelligent Network Monitoring and Load Balancing

Intelligent Packet Spray

Intelligent packet spray enables teams to seamlessly optimize network performance by enhancing load balancing and boosting overall efficiency and scalability. Improved network performance can significantly reduce GPU-to-GPU communication times, leading to faster job completion and greater operational efficiency.
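As a rough illustration, here is a minimal Python sketch of per-packet spraying across equal-cost paths. The path names, weights, and congestion signal are hypothetical; the production feature runs in the NIC's programmable P4 hardware pipeline, not in software.

```python
import random

# Hypothetical sketch: spray the packets of one flow over several paths,
# weighting path choice by recent congestion feedback.

class PacketSprayer:
    def __init__(self, paths):
        # Start with equal weights; a path's weight shrinks as it congests.
        self.weights = {path: 1.0 for path in paths}

    def on_congestion_feedback(self, path, congestion):
        # congestion in [0, 1]: higher means busier, so deprioritize the path.
        self.weights[path] = max(0.05, 1.0 - congestion)

    def pick_path(self):
        # Weighted random choice spreads packets across many paths instead
        # of pinning the whole flow to a single link.
        paths = list(self.weights)
        return random.choices(paths, weights=[self.weights[p] for p in paths])[0]

sprayer = PacketSprayer(["path-0", "path-1", "path-2", "path-3"])
sprayer.on_congestion_feedback("path-2", 0.8)   # path-2 reports congestion
print([sprayer.pick_path() for _ in range(8)])  # packets mostly avoid path-2
```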

Out-of-order Packet Handling and In-order Message Delivery

Ensure messages are delivered in the correct order, even when employing multipathing and packet spraying techniques. The advanced out-of-order packet handling feature efficiently processes data packets that arrive out of sequence, placing them directly into GPU memory without the need for buffering.
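The sketch below illustrates the direct-placement idea under simplifying assumptions (each packet carries its byte offset within the message and arrives exactly once); a bytearray stands in for GPU memory.

```python
# Illustrative sketch of direct data placement: each packet is written
# straight to its final offset in the destination buffer, in any arrival
# order, so no reorder buffer is needed. The message is "delivered" only
# once every byte has landed, preserving in-order message semantics.

class MessageAssembler:
    def __init__(self, message_len):
        self.buffer = bytearray(message_len)  # stand-in for GPU memory
        self.remaining = message_len

    def on_packet(self, offset, payload):
        # Assumes no duplicates: every payload byte arrives exactly once.
        self.buffer[offset:offset + len(payload)] = payload
        self.remaining -= len(payload)
        return self.remaining == 0  # True once the full message is placed

asm = MessageAssembler(12)
print(asm.on_packet(8, b"wxyz"))  # last chunk arrives first -> False
print(asm.on_packet(0, b"abcd"))  # False
print(asm.on_packet(4, b"efgh"))  # True: message complete
print(bytes(asm.buffer))          # b'abcdefghwxyz'
```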

Selective Retransmission

Boost network performance with selective acknowledgment (SACK) retransmission, which ensures only dropped or corrupted packets are retransmitted. By detecting and resending only lost or damaged packets, SACK optimizes bandwidth utilization, helps reduce latency during packet-loss recovery, and minimizes redundant data transmission.
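A toy Python sketch of the idea, with made-up packet numbering: the receiver's acknowledgments let the sender compute exactly which sequence numbers are missing (computed directly here for brevity), so only those are resent.

```python
# Selective retransmission in miniature: resend only the gaps, where a
# simple go-back-N scheme would resend everything after the first loss.

def missing_packets(received, highest_sent):
    # Derive the retransmission set from what the receiver acknowledged.
    return [seq for seq in range(highest_sent + 1) if seq not in received]

sent = list(range(10))               # packets 0..9 were sent
received = {0, 1, 2, 4, 5, 7, 8, 9}  # packets 3 and 6 were lost
print(missing_packets(received, max(sent)))  # [3, 6]: only the losses
# Go-back-N would have resent 3..9, wasting bandwidth on 4, 5, 7, 8, 9.
```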

Path-Aware Congestion Control

Focus on workloads, not network monitoring, with real-time telemetry and network-aware algorithms. The path-aware congestion control feature simplifies network performance management, enabling teams to quickly detect and address critical issues while mitigating the impact of incast scenarios. 
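As a conceptual sketch only, the snippet below shows one way path-aware rate control can react to per-path RTT telemetry. The target RTT, gains, and class names are assumptions for illustration; the actual algorithms run in the NIC's programmable pipeline with hardware telemetry.

```python
# Hypothetical per-path congestion control: back off multiplicatively on a
# path whose RTT telemetry shows queue buildup, probe additively otherwise.

class PathRateController:
    def __init__(self, paths, line_rate_gbps=400.0):
        # Split the line rate evenly across paths to start.
        self.rate = {p: line_rate_gbps / len(paths) for p in paths}

    def on_rtt_sample(self, path, rtt_us, target_us=12.0):
        if rtt_us > target_us:
            self.rate[path] *= 0.8   # congested path: reduce its share
        else:
            self.rate[path] += 5.0   # healthy path: probe for more bandwidth

ctl = PathRateController(["path-0", "path-1"])
ctl.on_rtt_sample("path-0", rtt_us=30.0)  # congested: rate drops
ctl.on_rtt_sample("path-1", rtt_us=8.0)   # healthy: rate grows
print({p: round(r, 1) for p, r in ctl.rate.items()})
```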

Rapid Fault Detection 

With rapid fault detection, teams can pinpoint issues within milliseconds, enabling near-instantaneous failover recovery and significantly reducing GPU downtime. Tap into elevated network observability with near real-time latency metrics and congestion and drop statistics.
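One simple way to picture millisecond-scale detection is a per-path probe deadline: a path that misses its probe window is declared unhealthy and traffic fails over. The sketch below assumes a 2 ms window; the value and mechanism are illustrative, not the NIC's actual implementation.

```python
import time

# Hypothetical fault detector: track the last probe reply per path and
# treat any path silent for longer than the timeout as failed.

class FaultDetector:
    def __init__(self, paths, timeout_s=0.002):  # assumed 2 ms window
        self.timeout_s = timeout_s
        now = time.monotonic()
        self.last_seen = {p: now for p in paths}

    def on_probe_reply(self, path):
        self.last_seen[path] = time.monotonic()

    def healthy_paths(self):
        now = time.monotonic()
        return [p for p, t in self.last_seen.items() if now - t < self.timeout_s]

det = FaultDetector(["path-0", "path-1"])
time.sleep(0.003)             # path-1's probes go silent past the window
det.on_probe_reply("path-0")  # path-0 keeps answering
print(det.healthy_paths())    # ['path-0']: fail over away from path-1
```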


Boost AI Performance and Network Reliability

Up to 15% Faster AI Job Performance¹

Enhance runtime performance by approximately 15% for certain applications. With features including intelligent network load balancing, fast failover, and loss recovery, the AMD Pensando Pollara 400 AI NIC helps accelerate workloads while maximizing AI investments.

Up to 10% Improved Network Reliability²

Gain up to 10% improved network uptime. With the AMD Pensando Pollara 400 AI NIC, minimize cluster downtime while increasing network resilience and availability with state-of-the-art RAS (reliability, availability, and serviceability) features and fast failure recovery.

AMD Pensando™ Pollara 400 AI NIC Specifications

- Maximum Bandwidth: 400 Gbps
- Form Factor: Half-height, half-length
- Host Interface: PCIe® Gen 5.0 x16
- Ethernet Speeds: 25/50/100/200/400 Gbps
- Ethernet Configurations: Supports up to 4 ports
  - 1 x 400G
  - 2 x 200G
  - 4 x 100G
  - 4 x 50G
  - 4 x 25G
- Management: MCTP over SMBus

Explore the full suite of AMD networking solutions designed for high-performance modern data centers.

Resources

Unlock the Future of AI Networking

Learn how the AMD Pensando Pollara 400 AI NIC can transform your scale-out AI infrastructure.

Footnotes
  1. Dong, Jianbo, et al. (2024). Boosting Large-scale Parallel Training Efficiency with C4: A Communication-Driven Approach. arXiv:2406.04594. https://arxiv.org/pdf/2406.04594. Claim reflects technology used in AMD Pensando Pollara 400 NICs; testing and data are not specific to the Pollara 400. Results may vary.
  2. Dubey, Abhimanyu, et al. (2024). The Llama 3 Herd of Models. arXiv:2407.21783. Meta research paper, Table 5. Claim reflects technology used in AMD Pensando Pollara 400 NICs; testing and data are not specific to the Pollara 400. Results may vary.