Together, we’re shaping the future of AI.

Explore insights from our partners.

“Astera Labs and AMD are advancing the future of AI infrastructure through our continued collaboration on open connectivity standards. AMD’s leadership in the UALink Consortium, combined with Astera’s complete connectivity solutions, is paving the way for the industry to embrace open, rack-scale system architectures.”

Jitendra Mohan, CEO, Astera Labs

“Alphawave Semi is a provider of connectivity solutions for subsystems, I/O chiplets, and both custom and standard silicon products. Alphawave Semi offers products compliant with UALink 1.0 and future releases. As a contributing member of the UALink Consortium, Alphawave Semi is helping to create a robust specification and drive the development of an ecosystem through its investment in subsystems and UALink silicon products.”

Larrie Carr, VP Solutions Engineering, CPG, Alphawave Semi

“As AI workloads continue to evolve in complexity and scale, ASUS is committed to delivering infrastructure solutions that help our customers move faster. With support for AMD Instinct MI350X GPUs, our ESC A8A-E12U server provides a robust, future-ready platform for next-era AI and HPC deployments.”

Paul Ju, General Manager, ASUS Server Business Unit

“Auralinks AI, a division of Auradine, is developing open-standard silicon, software, and system solutions for high-performance AI networking. As part of our scale-up switch portfolio, we are fully committed to UALink, the optimal open interconnect standard designed from the ground up to meet the demands of AI infrastructure. With the UALink 1.0 specification already released and momentum accelerating, we are actively collaborating with major customers and partners. We encourage all GPU and AI accelerator companies to join in building an open, interoperable AI ecosystem that will benefit the entire industry.”

Barun Kar, Founder and COO, Auradine

“We’re excited to launch Clarifai AI Compute Orchestration for AMD, delivering exceptional AI acceleration on AMD Instinct GPUs, accelerated by ROCm, while simplifying infrastructure complexity for the AI industry. Developers and enterprises can now experience powerful, optimized compute performance combined with the flexibility to manage AI workloads with any accelerator and across any cloud or hardware environment.”

Matthew Zeiler, CEO, Clarifai

“Adding the AMD Instinct MI350 Series GPUs and platforms alongside the AMD Instinct MI325X offerings to our AI Innovation Cloud is another significant milestone that addresses some of our customers’ most pressing desires: flexibility and choice. With the Instinct MI350’s industry-leading memory capacity and bandwidth, expanded datatype support, and robust security, innovators can confidently accelerate the development and deployment of next-gen AI models much more efficiently.”

Mike LaPan, Vice President of Marketing, Cirrascale Cloud Services

“Our ongoing collaboration with AMD reinforces ClearML’s commitment to delivering unparalleled support and a seamless user experience across AMD’s leading-edge hardware, including future support for the Instinct MI350 accelerators and the ROCm software stack. Together, we continue empowering organizations to unlock the full potential of AI innovation, experimentation, and production at scale.”

Moses Guttmann, CEO & Co-Founder, ClearML

“Cohere is helping enterprises meaningfully improve their operations by deploying AI models and applications at scale, while protecting their valuable data. Our collaboration with AMD, and early work with AMD Instinct GPUs, strengthens our ability to deliver secure and efficient agentic systems that power critical everyday needs of companies.”

Aidan Gomez, Co-Founder and CEO, Cohere

“Building on nearly two decades of collaboration, Dell Technologies and AMD are helping organizations leverage the full potential of AI—while reimagining data centers to be more agile, sustainable and future-ready. Joint innovations like high-performance, dense rack solutions for AMD Instinct™ MI350 Series GPUs and optimized AI Scale-Out networking drive real-world breakthroughs for smarter, more efficient AI environments.”

Ihab Tarazi, SVP and CTO for ISG, Dell Technologies 

“The future of AI and HPC is not just about speed, it’s about intelligent integration and sustainable deployment. Each server we build aims to address real-world technical and operational challenges, not just push hardware specs. The SG720-2A is a true collaboration with AMD that empowers customers with a stable, high-performance, and scalable compute foundation.”

Alan Chang, Vice President of the Infrastructure Solutions Business Group at Compal

“We are seeing best-in-class per-device performance, and the linear scaling characteristics are extremely exciting as we scale our large training workloads.”

Ashish Vaswani, CEO, Essential AI 

“Hewlett Packard Enterprise delivers some of the world’s largest and highest-performing AI clusters with the HPE ProLiant Compute XD servers, and we look forward to delivering even greater performance with the new AMD Instinct MI355X GPUs. Our latest collaboration with AMD expands our decades-long joint engineering efforts, from the edge to exascale, and continues to advance AI innovation.”

Trish Damkroger, Senior Vice President and General Manager, HPC & AI Infrastructure Solutions, Hewlett Packard Enterprise

“HUMAIN’s partnership with AMD is not just another future infrastructure play; our teams are working together today to democratize AI at the compute level. Establishing Saudi Arabia as a global AI zone will provide low-latency compute to another four billion people while offering the lowest-cost inference.”

Tareq Amin, CEO of HUMAIN

“At Infobell, we don’t just build AI—we engineer it for performance, reliability, and enterprise scale. Leveraging AMD Instinct™ MI Series GPUs and the ROCm™ software stack, we fine-tune large language models, deliver seamless multimodal AI experiences, and optimize performance across the entire stack. Our collaboration with AMD enables us to bring transformative AI solutions to enterprise customers with exceptional speed, cost-efficiency, and sustainability.”

Ramana Bandili, CEO, Infobell IT Solutions

“Marvell is committed to advancing an open, standards-based AI ecosystem, and our collaboration with AMD underscores that shared vision. As a leader in networking innovation, we’re driving the performance, scalability and efficiency essential for next-generation AI system architecture.”

Nick Kucharewski, Senior Vice President and General Manager, Cloud Platform Business Unit at Marvell 

“Our partnership with AMD has been a key component in advancing Meta’s compute infrastructure. Between AMD’s CPUs and MI300X, they’re helping drive our AI experiences forward for the billions of people that use our products, and we look forward to continuing to accelerate our AI progress together.”

Yee Jiun Song, VP of Engineering, Meta

“Micron is pleased to have our industry-leading HBM3E memory designed into AMD’s Instinct MI350 Series GPUs and platforms. AI training and inference tasks require dramatic data throughput, and Micron’s advanced memory helps ensure breakthrough performance and exceptional power efficiency for AI platforms.”

Praveen Vaidyanathan, Vice President and General Manager of Cloud Memory Products at Micron

“This collaboration marks a turning point. We’re not retrofitting legacy AI. We’re giving developers and enterprises a frictionless way to adopt and scale agentic AI, where compute is fluid, intelligence is choreographed, and AI execution is embedded into every layer of infrastructure. With this integration, AMD hardware becomes an execution-ready environment for real-time AI, optimized for business-critical workloads across diverse environments.”

Fay Arjomandi, Founder and CEO of mimik

“At Microsoft, we’ve taken a systems-level approach to AI infrastructure—optimizing every layer from silicon to software to serve the most demanding AI workloads. Our deep collaboration with AMD, especially around MI300X, has enabled us to deliver great performance and efficiency for both proprietary and open-source models in production. With Azure AI Foundry, we’re applying this infrastructure to support a new wave of innovation—from large-scale training to inference for agentic workloads—giving customers the flexibility to move fast and build responsibly.”

Eric Boyd, CVP, Azure AI Platform, Microsoft

"At MiTAC Computing, we are proud to extend our partnership with AMD as we continue developing advanced, scalable, and energy-efficient server platforms. By integrating AMD's latest EPYC™ 9005 and 4005 series processors, AMD Instinct MI325X GPUs, and the upcoming MI350 series GPUs and platforms, we're enabling our global customers to unlock new capabilities in AI infrastructure and high-performance computing."

Rick Hwang, President of MiTAC Computing Technology Corp

“Oracle Cloud Infrastructure continues to benefit from its strategic collaboration with AMD. We will be one of the first to provide the MI355X rack-scale infrastructure using the combined power of EPYC, Instinct, and Pensando. We've seen impressive customer adoption for AMD-powered bare metal instances, underscoring how easily customers can adopt and scale their AI workloads with OCI AI infrastructure. In addition, Oracle relies extensively on AMD technology, both internally for its own workloads and externally for customer-facing applications. We plan to continue to have deep engagement across multiple AMD product generations, and we maintain strong confidence in the AMD roadmap and their consistent ability to deliver to expectations.” 

Mahesh Thiagarajan, Executive Vice President, Oracle Cloud Infrastructure

“PEGATRON is honored to collaborate with AMD to deliver infrastructure solutions that are purpose-built for the future of AI. With our new ultra high-density liquid-cooled MI355X AI GPU server, we are enabling customers to scale AI capabilities with efficiency, density, and speed.”

Dr. James Shue, SVP & CTO, PEGATRON 

“Our collaboration with AMD marks a pivotal step in advancing AI and HPC performance. By aligning Rapt.ai’s intelligent infrastructure management platform with the next-generation AMD Instinct MI350 accelerators, we’re unlocking unprecedented levels of efficiency, scalability, and compute power. This joint innovation positions us to support the growing demands of AI workloads—both now and in the future—while driving industry-wide acceleration in AI development, deployment, and research breakthroughs.”

Charles Leeming, CEO, Rapt.ai

“From the early days of x86 64-bit architecture to the new frontier of enterprise AI, Red Hat’s collaboration with AMD is about more than just technology; it’s about empowering customer choice. By delivering open, efficient and scalable AI through our work together with vLLM and llm-d, Red Hat and AMD are making production-level inference a reality, equipping enterprises with the necessary freedom and performance to unlock the immense value of generative AI on any model and any cloud.”

Chris Wright, Chief Technology Officer and Senior Vice President, Global Engineering, Red Hat

“Samsung’s HBM3E delivers the capacity and bandwidth the most advanced GPUs demand – 288GB and up to 8TB/s to fuel data-intensive AI and HPC workloads. We’re proud to support AMD Instinct MI350X and MI355X GPUs and the continued innovation made possible through our long-standing partnership.”

Paul Cho, President of Samsung Semiconductor, US

“Supermicro continues to lead the industry with the most experience in delivering high-performance systems designed for AI and HPC applications. Our Data Center Building Block Solutions® enable us to quickly deploy end-to-end data center solutions to market, bringing the latest technologies for the most demanding applications. The addition of the new AMD Instinct MI350 series GPUs to our GPU server lineup strengthens and expands our industry-leading AI solutions and gives customers greater choice and better performance as they design and build the next generation of data centers.”

Charles Liang, President and CEO of Supermicro

“TensorWave’s deep specialization in AMD technology makes us the most optimized environment for next-gen AI workloads. With the Instinct MI325X now deployed on our cloud and Instinct MI355X coming soon, we’re enabling startups and enterprises alike to achieve up to 25% efficiency gains and 40% cost reductions, results we’ve already seen with customers using our AMD-powered infrastructure.”

Piotr Tomasik, Co-Founder and President, TensorWave

“AMD Instinct MI355X GPUs are designed to meet the diverse and complex demands of today’s AI workloads, delivering exceptional value and flexibility. As AI development continues to accelerate, the scalability, security, and efficiency these GPUs deliver are more essential than ever. We are proud to be among the first cloud providers worldwide to offer AMD Instinct MI355X GPUs, empowering our customers with next-generation AI infrastructure.”

J.J. Kardwell, CEO, Vultr

“XConn Technologies is excited to partner with AMD to support the UALink ecosystem by expanding its existing large-lane PCIe and CXL switch portfolio to include a high-radix UALink switch for scale-up connectivity. With 200G per lane, this high-radix switch will support an open and vibrant ecosystem of up to 1,024 GPUs and accelerators with the lowest latency and highest bandwidth. We are excited about the innovations that lie ahead in this open UALink ecosystem.”

Gerry Fan, CEO, XConn Technologies