PyTorch 2.9 Wheel Variant Support Expands to ROCm

Oct 15, 2025


Python Packaging Challenges with Multiple Accelerator Support

Anyone who has worked with Python is familiar with installing Python packages (typically distributed as pre-built “wheels”) using pip, uv or similar tools. Such packages can deliver high-performance implementations of extended functionality as pre-built software, sparing the user a potentially lengthy and complex compilation step. One challenge of delivering a pre-built package is selecting the build appropriate for the Python version, operating system and hardware on the target system. The wheel specification as it exists today addresses most of this, but notably does not provide a mechanism to distinguish between builds targeting different accelerator architectures.

PyTorch is one of the most widely adopted tools for AI development. Its use of hardware acceleration, especially via GPUs, is fundamental to its utility, but this makes it particularly prone to the issues facing Python packaging in general.

Current Solutions

PyTorch addresses the above problem by maintaining a matrix of supported platforms and versions on the Get Started page. This solution involves maintaining a separate Python package index for each platform and version. For example, packages built for AMD ROCm™ Software 6.4 are hosted at https://download.pytorch.org/whl/rocm6.4. However, this requires the user to evaluate their environment and make the right choice.

Figure 1: ROCm Installation. Source: PyTorch

Other projects attempt to solve the same problem by offering differently named packages for each target configuration, but that introduces other complexities for the user.

What Are Variant Wheels and How Do They Solve the Problem?

Variant wheels are part of the proposed WheelNext Python packaging standard. They carry embedded metadata that identifies which variant of a package they provide and the specific features that variant supports. In the case of PyTorch, the variants could be ROCm, CUDA, XPU, etc., and the features could be the specific versions of software or hardware supported by each variant. When used with a variant-aware build of uv, the user can install the variant wheel appropriate for their environment without needing to explicitly specify a wheel index or a different package name. In essence, the install command boils down to something as simple as: uv pip install torch.

For more details about the WheelNext proposal pertaining to variant wheels, please refer to the WheelNext documentation.

Provider Plugins 

Provider plugins play a key role in the variant wheel ecosystem by detecting the software and hardware configuration in the user’s environment relevant to the platform supported by the plugin. A variant-aware build of uv will query the installed provider plugins to detect the system’s capabilities and choose an appropriate wheel to install.

The AMD provider plugin currently detects the ROCm version and GPU architectures in the user’s environment. If a wheel with support for the detected configuration exists, it is selected by the installer, otherwise a fallback null-variant with CPU-only support is installed.
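To make this concrete, the detection step essentially amounts to reading the ROCm version file and querying rocminfo for GPU architectures. The following is a simplified, illustrative sketch of that logic; the class and function names here are hypothetical and are not the actual amd-variant-provider source code:

# Illustrative sketch only; not the real amd-variant-provider implementation.
import re
import subprocess
from dataclasses import dataclass
from pathlib import Path

@dataclass
class FeatureConfig:  # hypothetical stand-in for the plugin's feature records
    name: str
    values: list[str]
    multi_value: bool

def detect_rocm_features() -> list[FeatureConfig]:
    features = []

    # GPU architectures, e.g. ['gfx908', 'gfx90a'], discovered via rocminfo
    try:
        out = subprocess.run(["rocminfo"], capture_output=True, text=True, check=True).stdout
        archs = sorted(set(re.findall(r"gfx[0-9a-f]+", out)))
        if archs:
            features.append(FeatureConfig("gfx_arch", archs, multi_value=True))
    except (FileNotFoundError, subprocess.CalledProcessError):
        pass  # no usable GPU stack found; the installer falls back to the CPU-only wheel

    # ROCm major.minor version, e.g. '6.4', from the version file shipped with ROCm
    version_file = Path("/opt/rocm/.info/version")
    if version_file.exists():
        major_minor = ".".join(version_file.read_text().strip().split(".")[:2])
        features.append(FeatureConfig("rocm_version", [major_minor], multi_value=False))

    return features

print(detect_rocm_features())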

Installation Instructions for PyTorch Variant Wheels for ROCm:

Although this remains an experimental feature, PyTorch 2.9 introduces ROCm support for wheel variants, invoked as follows:

Linux x86:

curl -LsSf https://astral.sh/uv/install.sh | INSTALLER_DOWNLOAD_URL=https://wheelnext.astral.sh/v0.0.2 sh

uv pip install torch torchvision

Running the above command in an environment with a ROCm 6.3 or ROCm 6.4 installation will automatically pick up the PyTorch 2.9 wheel built with the installed version of ROCm, provided that your AMD GPU is supported by the variant wheel. The list of supported AMD GPUs is as follows:

gfx1030, gfx1100, gfx1101, gfx1102, gfx1200, gfx1201, gfx900, gfx906, gfx908, gfx90a, gfx942.
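Once the installation completes, a quick sanity check (not part of the official instructions, and assuming a supported GPU is visible) confirms that a GPU-enabled wheel was selected:

import torch

print(torch.__version__)                  # e.g. 2.9.0
print(torch.cuda.is_available())          # True when a supported AMD GPU is usable
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first visible AMD GPU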

The snippets below show some of the most relevant output when the install command is invoked with the verbose (-v) option.

DEBUG Searching for a compatible version of amd-variant-provider (>=0.0.2, <1.0.0)
DEBUG Selecting: amd-variant-provider==0.0.2 [compatible] (amd_variant_provider-0.0.2-py3-none-any.whl)
DEBUG Installing in amd-variant-provider==0.0.2 in /tmp/.tmp56CZCT/wheelnext-builds-v0/.tmptof3wY
DEBUG Must revalidate requirement: amd-variant-provider
DEBUG Downloading and building requirement for build: amd-variant-provider==0.0.2
DEBUG No cache entry for: https://wheelnext.github.io/amd-variant-provider/amd-variant-provider/amd_variant_provider-0.0.2-py3-none-any.whl
DEBUG INFO:amd_variant_provider.plugin:[amd-variant-provider] Running system detection.
DEBUG INFO:root:Found GFX architectures: ['gfx908', 'gfx90a'] via `rocminfo`.
DEBUG INFO:root:Found rocm7.0.0 via version file: /opt/rocm/.info/version
DEBUG INFO:amd_variant_provider.plugin:[amd-variant-provider] Detected features: [VariantFeatureConfig(name='gfx_arch', values=['gfx908', 'gfx90a'], multi_value=True), VariantFeatureConfig(name='rocm_version', values=['6.4'], multi_value=False)]
DEBUG Using variant wheel torch-2.9.0-cp310-cp310-manylinux_2_28_x86_64-rocm6.4.whl
DEBUG Selecting: torch==2.9.0 [compatible] (torch-2.9.0-cp310-cp310-manylinux_2_28_x86_64-rocm6.4.whl)
Downloading networkx
Downloading sympy
Downloading pytorch-triton-rocm
Downloading torch
Prepared 10 packages in 2m 45s
Installed 10 packages in 1.62s
 + filelock==3.20.0
 + fsspec==2025.9.0
 + jinja2==3.1.6
 + markupsafe==3.0.3
 + mpmath==1.3.0
 + networkx==3.4.2
 + pytorch-triton-rocm==3.5.0
 + sympy==1.14.0
 + torch==2.9.0
 + typing-extensions==4.15.0

Notably, PyTorch builds for other accelerator architectures supported with variant wheels are installed in exactly the same way. This offers developers the possibility of hardware-agnostic scripting of PyTorch installation across AMD GPUs and hardware from other supported vendors without relying on special conditional statements.
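For instance, the same install command can be scripted on any supported system, and the resulting environment can be introspected afterwards to see which accelerator stack the selected wheel targets. This small check is illustrative, not part of the official instructions:

import torch

# Report which accelerator stack the installed torch wheel was built against.
if torch.version.hip is not None:
    backend = f"ROCm/HIP {torch.version.hip}"
elif torch.version.cuda is not None:
    backend = f"CUDA {torch.version.cuda}"
else:
    backend = "CPU-only"

print(f"torch {torch.__version__} -> {backend}")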

Engineering Note:

For testing, or to override the default plugin detection behavior, one can force the detected ROCm version and/or the AMD GPU gfx arch for the amd-variant-provider plugin using:

export AMD_VARIANT_PROVIDER_FORCE_GFX_ARCH="gfx1100"
export AMD_VARIANT_PROVIDER_FORCE_ROCM_VERSION="6.4.1"

Frequently Asked Questions:

Q. I have ROCm 7.0 installed. Are variant wheels available for it?

A. No, PyTorch 2.9 only has variant wheels available for ROCm 6.3 and ROCm 6.4. Since the AMD provider plugin looks for an exact match between the installed ROCm major/minor version and the versions supported by the available variant wheels, running the above commands in a ROCm 7.0 or newer environment will install a CPU-only wheel. You can use the override mechanism described in the Engineering Note above to force the installer to pick a specific variant wheel, if desired.

Alternatively, PyTorch nightly non-variant wheels are available with ROCm 7.0 support:

pip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm7.0

Q. I have a system with multiple AMD GPUs, some of which are not in the list of GPUs supported by PyTorch variant wheels for ROCm. Will the installation instructions work for me?

A. Yes. On systems with more than one AMD GPU installed, where one of them is *not* supported by the ROCm variant wheels, the above installation command will still pick a variant wheel that supports at least one of the installed AMD GPUs.

It is recommended to use HIP_VISIBLE_DEVICES to limit the visible devices to only the ones supported by the ROCm variant wheels, as in the sketch below, so as to avoid unexpected runtime errors due to lack of GPU support in the wheels.
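A minimal illustration, assuming the supported GPU is device 0 on your system (check rocm-smi or rocminfo for the actual ordering on your machine):

import os

# Restrict the HIP runtime to device 0 only; set this before importing torch
# so unsupported GPUs are never initialized. The index "0" is an assumption.
os.environ["HIP_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # now reports only the visible device(s)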
