AMD Enterprise AI Suite Expands with DeepSeek and Mistral AI models - Adds Support for AMD Instinct™ MI350X and MI355X GPUs

Mar 10, 2026


Version 1.8

AMD announces version 1.8 of the open-source AMD Enterprise AI Suite with an extended AMD Inference Microservices (AIMs) catalog, including models from DeepSeek and Mistral AI, and optimized performance for AMD Instinct™ MI350X and MI355X GPUs.

The AMD Enterprise AI Suite is a modular, open-source platform that lets enterprises scale AI from experimentation to production in a matter of minutes.

Extending the AIMs Catalog with More AI Models

As AI models and frameworks continue to evolve, deploying and operating AI inference reliably, efficiently, and at scale remains challenging for enterprises. AMD Inference Microservices (AIMs) simplify and accelerate AI model deployment by providing flexible, high-performance building blocks for serving frontier AI models and developing agentic AI pipelines.

Adding to the existing AIMs catalog, provided through the AMD Enterprise AI Suite’s AI Workbench component, version 1.8 of the Suite includes six additional models from two model families: DeepSeek and Mistral AI.

AIM Catalog of AI Models

Existing models:
- Cohere Labs: Command A Reasoning 08 2025
- Qwen AI: Qwen3 235B A22B, Qwen3 32B
- Meta: Llama 3.1 405B Instruct, Llama 3.1 8B Instruct, Llama 3.2 1B Instruct, Llama 3.2 3B Instruct, Llama 3.3 70B Instruct
- Mistral AI: Mistral Small 3.2 24B Instruct 2506, Mixtral 8x22B Instruct v0.1, Mixtral 8x7B Instruct v0.1
- OpenAI: GPT OSS 120B, GPT OSS 20B

Added in version 1.8:
- DeepSeek: DeepSeek R1, DeepSeek R1 0528, DeepSeek V3.1, DeepSeek V3.1 Terminus
- Mistral AI: Ministral 3 14B Instruct 2512, Mistral Large 3 675B Instruct 2512
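Inference microservices like AIMs typically expose an HTTP serving endpoint; many model servers follow the OpenAI-compatible chat-completions request shape. As a hedged sketch (the endpoint URL, model identifier, and helper function below are illustrative assumptions, not documented AIM values), a client request body could be built like this:

```python
import json

# Placeholder values -- not documented AIM endpoints or model identifiers.
AIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL_NAME = "deepseek-r1"

def build_chat_request(prompt: str, model: str = MODEL_NAME) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }

body = build_chat_request("Summarize the latest quarterly report.")
print(json.dumps(body, indent=2))
```

A client would POST this JSON body to the serving endpoint; swapping the model field between catalog entries is all that changes when moving between AIMs served behind the same API shape.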

As AMD continues to expand the AIMs model catalog and the AMD Solution Blueprints, customers benefit from a standardized, performance-optimized, and maintained selection of frontier AI models and reference applications for inference on AMD compute platforms.

Running Frontier AI Models on Next Generation Computing Hardware

With AMD Enterprise AI Suite support for the AMD Instinct MI350 Series, customers can now easily deploy the frontier AI models available through the AIM catalog on cutting-edge AI accelerators.

The Instinct MI350 Series GPUs are next-generation data center accelerators designed for AI inference and training. Built on the new 4th Gen AMD CDNA™ architecture, these GPUs deliver exceptional efficiency and performance for training massive AI models, high-speed inference, and complex HPC workloads like scientific simulations, data processing, and computational modeling.

The AMD Enterprise AI Suite’s model serving capabilities and effective workload management, including dynamic GPU utilization through the AMD Resource Manager, enable enterprises to maximize the utilization and ROI of their Instinct GPU clusters.
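In Kubernetes-based clusters, AMD GPUs are commonly scheduled through the amd.com/gpu extended resource exposed by the AMD GPU device plugin. The sketch below shows how a serving pod might request eight Instinct GPUs; the pod name, container image, and helper function are illustrative assumptions, and the Suite's Resource Manager may handle this allocation on the operator's behalf:

```python
# Sketch: build a Kubernetes Pod spec that requests AMD GPUs via the
# amd.com/gpu resource exposed by the AMD GPU device plugin.
# Pod name and image are illustrative placeholders.
def gpu_pod_spec(name: str, image: str, gpus: int) -> dict:
    """Return a minimal Pod manifest requesting `gpus` AMD GPUs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": name,
                    "image": image,
                    # GPUs are requested under resource limits, like other
                    # extended resources in Kubernetes.
                    "resources": {"limits": {"amd.com/gpu": gpus}},
                }
            ],
        },
    }

spec = gpu_pod_spec("aim-server", "example.registry/aim-server:latest", 8)
print(spec["spec"]["containers"][0]["resources"]["limits"])
```

Serialized to YAML, a manifest like this is what the scheduler uses to place the workload on a node with enough free Instinct GPUs.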

The enterprise AI solutions from AMD span the entire stack, including both hardware and software, allowing enterprises to build sovereign, self-hosted AI roadmaps that empower them to innovate freely and scale efficiently.

Learn more about the Suite and version 1.8 in the release notes, on the AMD Enterprise AI Suite page, or try it out on AMD Developer Cloud.
