The Road to ROCm on Radeon for Windows and Linux

Sep 24, 2025

AMD ROCm

At Computex this year, I shared our commitment to making ROCm a true cross-platform, developer-first stack. I said we’d bring ROCm to Radeon on Windows and Linux in the second half of 2025. I’m proud to say that today, we’re delivering on that promise.

PyTorch is now available as a public preview on Windows and Linux. This release builds out the ROCm stack by bringing native PyTorch support to Windows and Linux users running AMD Radeon 7000 Series and 9000 Series GPUs, as well as select Ryzen AI 300 and Ryzen AI Max APUs. For developers, this means you can now run AI inference workloads directly on AMD hardware in the Windows environment - no workarounds, no dual-boot setups, no compromises.

We’ve heard developers loud and clear: you want flexibility to work across operating systems, consistency in framework support, and the confidence that new hardware will be supported from day one. With this release, we’re showing that we’re listening - and executing.

This is more than a port. It’s a proof point of the shift AMD is making when it comes to ROCm. We’re not just building for the data center anymore. We’re accelerating our efforts to support client developers, creators, and innovators across the full software stack.

Windows and Linux users can now take advantage of ROCm-enabled PyTorch wheels for AI model development. Whether you’re using ComfyUI to create visuals, prototyping a new LLM-based app, or pushing the limits of local large-parameter models - ROCm is now in your hands.
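Once a ROCm-enabled PyTorch wheel is installed, a quick sanity check confirms that PyTorch can see the Radeon GPU. This is a minimal sketch, not an official verification script; note that on ROCm builds of PyTorch, AMD GPUs are exposed through the familiar `torch.cuda` API (backed by HIP), so the same check works unchanged on Radeon hardware.

```python
# Sanity check for a ROCm-enabled PyTorch install (a sketch; the exact
# wheel index on repo.radeon.com is not reproduced here).
try:
    import torch
except ImportError:
    torch = None  # PyTorch wheel not installed yet


def describe_device() -> str:
    """Report which device PyTorch will use for compute.

    ROCm builds of PyTorch surface AMD GPUs via torch.cuda (HIP under
    the hood), so torch.cuda.is_available() is the right probe here too.
    """
    if torch is None:
        return "PyTorch is not installed"
    if torch.cuda.is_available():
        # Device 0 is the first visible GPU; name comes from the driver.
        return f"GPU: {torch.cuda.get_device_name(0)}"
    return "No GPU visible; running on CPU"


print(describe_device())
```

If the script reports a CPU fallback on a supported Radeon card, the usual suspects are a driver mismatch or a CPU-only wheel installed from the default PyPI index rather than the ROCm index.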

Yes, it’s a preview release. That means performance and feature coverage will continue to improve. We’re not done. But we’ve shipped the foundation. And we’re inviting developers to help shape what comes next.

You can download PyTorch for Windows and Linux today at repo.radeon.com. This milestone reflects where we’re headed: faster releases, broader support, deeper engagement with the open-source community. That’s how we’ll make ROCm not just viable, but vital to researchers, to creators, to every developer building the next wave of AI.

Quick Links to Get Started

Get started with ROCm 

Deploying LLMs with AMD on Windows using PyTorch 

Article By


SVP and Chief Software Officer