- Name: AMD Instinct™ MI300A
- Family: Instinct
- Series: Instinct MI300 Series
- Form Factor: Servers
- Launch Date: 12/06/2023

- GPU Architecture: CDNA3
- Lithography: TSMC 5nm | 6nm FinFET
- Stream Processors: 14,592
- Matrix Cores: 912
- Compute Units: 228
- Peak Engine Clock: 2100 MHz
- Peak Eight-bit Precision (FP8) Performance (E5M2, E4M3): 1.96 PFLOPs
- Peak Eight-bit Precision (FP8) Performance with Structured Sparsity (E5M2, E4M3): 3.92 PFLOPs
- Peak Half Precision (FP16) Performance: 980.6 TFLOPs
- Peak Half Precision (FP16) Performance with Structured Sparsity: 1.96 PFLOPs
- Peak Single Precision (TF32 Matrix) Performance: 490.3 TFLOPs
- Peak Single Precision (TF32) Performance with Structured Sparsity: 980.6 TFLOPs
- Peak Single Precision Matrix (FP32) Performance: 122.6 TFLOPs
- Peak Single Precision (FP32) Performance: 122.6 TFLOPs
- Peak Double Precision Matrix (FP64) Performance: 122.6 TFLOPs
- Peak Double Precision (FP64) Performance: 61.3 TFLOPs
- Peak INT8 Performance: 1.96 POPs
- Peak INT8 Performance with Structured Sparsity: 3.92 POPs
- Peak bfloat16 Performance: 980.6 TFLOPs
- Peak bfloat16 Performance with Structured Sparsity: 1.96 PFLOPs
- Transistor Count: 146 Billion
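The vector (non-matrix) FP64 figure can be cross-checked from the stream processor count and peak engine clock listed above. This is a back-of-envelope sketch under one assumption not stated in the spec: each stream processor retires one fused multiply-add per clock, counted as two floating-point operations.

```python
# Back-of-envelope check of peak vector FP64 throughput:
# stream processors x ops per clock x peak engine clock.
stream_processors = 14_592
peak_clock_hz = 2.1e9      # 2100 MHz peak engine clock
ops_per_clock = 2          # assumption: one FMA per SP per clock = 2 FLOPs

peak_fp64_tflops = stream_processors * ops_per_clock * peak_clock_hz / 1e12
print(round(peak_fp64_tflops, 1))  # 61.3, matching the FP64 line above
```

The matrix FP64 figure (122.6 TFLOPs) is exactly double this, consistent with the matrix cores sustaining twice the FP64 rate of the vector pipeline.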

- AMD EPYC™ CPU Architecture: Zen 4
- CPU Cores: 24
- CPU Peak Engine Clock: 3700 MHz

- Thermal Design Power (TDP): 550W | 760W Peak

- Last Level Cache (LLC): 256 MB
- Dedicated Memory Size: 128 GB
- Dedicated Memory Type: HBM3
- Infinity Cache: Yes
- Memory Interface: 8192-bit
- Memory Clock: 5.2 GHz
- Peak Memory Bandwidth: 5.3 TB/s
- Memory ECC Support: Yes (Full-Chip)
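The peak memory bandwidth follows directly from the interface width and memory clock, treating the 5.2 GHz memory clock as the effective transfer rate per pin (an assumption, since the spec does not say whether this figure is the base or effective rate):

```python
# Peak HBM3 bandwidth = interface width in bytes x effective transfer rate.
interface_bits = 8192
transfers_per_sec = 5.2e9  # assumption: 5.2 GHz is the effective rate per pin

bandwidth_tb_s = interface_bits / 8 * transfers_per_sec / 1e12
print(round(bandwidth_tb_s, 1))  # 5.3 TB/s, matching the line above
```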

- GPU Form Factor: APU SH5 Socket
- Bus Type: PCIe® 5.0 x16
- Infinity Fabric™ Links: 8
- Peak Infinity Fabric™ Link Bandwidth: 128 GB/s
- Cooling: Passive & Active

- Supported Technologies: AMD CDNA™ 3 Architecture, AMD Infinity Architecture, AMD ROCm™ - Ecosystem without Borders
- RAS Support: Yes
- Page Retirement: Yes
- Page Avoidance: Yes
- SR-IOV: Yes
AMD Instinct™ MI300A Accelerators
AMD Instinct™ MI300A Accelerated Processing Units (APUs) combine AMD CPU cores and GPUs to fuel the convergence of HPC and AI.