Radeon Instinct™ MI25 Accelerator

Superior Training Accelerator for Machine Intelligence and Deep Learning

Based on the cutting-edge “Vega” graphics architecture, built to handle big data sets and diverse compute workloads

64 next-generation compute units (nCUs) to accelerate demanding workloads

Up to 12.3 TFLOPS of peak FP32 compute performance to speed up compute-intensive machine intelligence workloads

Up to 24.6 TFLOPS of peak FP16 compute performance for deep learning training applications
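
For context, both peak figures can be reproduced from the published shader count and the MI25's roughly 1.5 GHz peak engine clock, counting a fused multiply-add as two floating-point operations; the FP16 figure doubles the FP32 rate via packed math:

\[ 4096 \times 2\,\tfrac{\text{FLOPs}}{\text{cycle}} \times 1.5\,\text{GHz} \approx 12.29\ \text{TFLOPS (FP32)} \]
\[ 2 \times 12.29\ \text{TFLOPS} \approx 24.6\ \text{TFLOPS (FP16, packed math)} \]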

State-of-the-art memory technology: 16GB of HBM2 memory with ECC and High Bandwidth Cache Controller (HBCC)

Passively cooled, full-height, dual-slot board with 300W TDP – designed to fit in most standard server designs

MxGPU for Virtualized Compute Workloads – drive greater utilization and capacity in the data center

Advanced remote manageability capabilities for simplified GPU monitoring in large-scale systems


ROCm, a New Era in Open GPU Computing

Platform for GPU-enabled HPC and ultrascale computing.

The Ultimate Solution for Your Computing Needs


Machine Intelligence and Deep Learning Neural Network Training 

  • High performance FP16 and FP32 compute
  • Open ROCm software platform for HPC-class, rack-scale deployment
  • Optimized MIOpen deep learning framework libraries
  • Large BAR support for multi-GPU (mGPU) peer-to-peer communication (see the sketch after this list)
  • Superior compute density and performance per node when combining AMD EPYC™ server processor and Radeon Instinct™ accelerators
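
To make the large-BAR peer-to-peer point concrete, here is a minimal sketch (not taken from this page) of how multi-GPU peer access is typically enabled through the HIP runtime that ships with ROCm; the device indices and the bare-bones error handling are illustrative only.

// Illustrative sketch: enable peer-to-peer access between two ROCm GPUs via HIP.
// Large BAR support lets one GPU map another GPU's memory for direct transfers.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    hipGetDeviceCount(&count);
    if (count < 2) {
        std::printf("Need at least two GPUs for peer-to-peer.\n");
        return 1;
    }
    int canAccess = 0;
    hipDeviceCanAccessPeer(&canAccess, 0, 1);  // can device 0 reach device 1?
    if (canAccess) {
        hipSetDevice(0);
        hipDeviceEnablePeerAccess(1, 0);       // flags are reserved and must be 0
        std::printf("Peer access from GPU 0 to GPU 1 enabled.\n");
    } else {
        std::printf("Peer access between GPU 0 and GPU 1 is not available.\n");
    }
    return 0;
}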

HPC Heterogeneous Compute

  • Outstanding compute density and performance per node
  • Open ROCm software platform for HPC-class, rack-scale deployment
  • Open-source Linux drivers, HCC compiler, tools, and libraries from the metal forward (a minimal device-query sketch follows this list)
  • Open, industry-standard support for multiple architectures and industry-standard interconnect technologies
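
As a rough illustration of the ROCm programming path (again, not from this page), the sketch below enumerates the GPUs visible to the HIP runtime and prints a few of their properties. It assumes a working ROCm installation and can be built with AMD's hipcc compiler driver, e.g. hipcc device_query.cpp -o device_query.

// Minimal sketch: enumerate ROCm-visible GPUs and print a few properties.
// Assumes the ROCm/HIP runtime and headers are installed.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
        std::printf("No HIP-capable devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        if (hipGetDeviceProperties(&prop, i) != hipSuccess) continue;
        std::printf("Device %d: %s\n", i, prop.name);
        std::printf("  Compute units:      %d\n", prop.multiProcessorCount);
        std::printf("  Global memory (GB): %.1f\n",
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        std::printf("  Clock rate (MHz):   %d\n", prop.clockRate / 1000);
    }
    return 0;
}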

Instinct™ MI25

GPU Specifications
  • GPU Architecture: Vega
  • Lithography: 14nm FinFET
  • Stream Processors: 4096
  • Compute Units: 64
  • Peak Half Precision (FP16) Performance: 24.6 TFLOPS
  • Peak Single Precision (FP32) Performance: 12.29 TFLOPS
  • Peak Double Precision (FP64) Performance: 768 GFLOPS

GPU Memory
  • Memory Size: 16 GB
  • Memory Type: HBM2
  • Memory Bandwidth: 484 GB/s
  • Memory ECC Support: Yes

Board Specifications
  • Form Factor: PCIe Add-in Card
  • Bus Type: PCIe 3.0 x16
  • TDP: 300W
  • Cooling: Passive
  • Board Width: Double Slot
  • Board Length: 10.5" (267 mm)
  • Board Height: Full Height
  • External Power Connectors: 2x PCIe 8-pin

Additional Features
  • Supported Technologies: OpenCL 2.0, High Bandwidth Cache Controller (HBCC)
  • Solutions: HPC and Machine Intelligence

Software API Support
  • DirectX: 12.0 (feature level 12_1)
  • OpenGL: 4.6
  • OpenCL: 2.0
  • Vulkan: 1.0

Product Basics
  • Product Family: Radeon Instinct™
  • Product Line: Radeon Instinct™ MI Series
  • Platform: Server
  • Launch Date: June 2017