
 AMD Compute Cores

A new era of computing

AMD enables CPU and GPU cores to work together on a single APU chip


What are compute cores?

Cores are the units inside a processor that process data, and the more cores working together, the faster a computer can complete its tasks. Historically, CPU cores were designed for serial tasks like productivity applications, while GPU cores were designed for parallel, graphics-intensive tasks like video editing, gaming and rich web browsing.

AMD’s latest revolutionary processing architecture, Heterogeneous System Architecture (HSA), bridges the gap between CPU and GPU cores and delivers a new innovation called compute cores.

This groundbreaking technology allows CPU and GPU cores to speak the same language, share workloads and share the same memory. With HSA, CPU and GPU cores are designed to work together in a single accelerated processing unit (APU), creating a faster, more efficient and seamless way to accelerate applications while delivering great performance and rich entertainment.

Learn more about the benefits of HSA and compute cores

HSA, hUMA and hQ: revolutionizing memory and computation

This revolutionary architecture allows the CPU and GPU on the same APU to share data and access the same memory, making communication between the two faster and more efficient. This technology is called heterogeneous uniform memory access, or "hUMA."
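hUMA itself is transparent to applications, but its effect is visible through programming interfaces that expose shared virtual memory. Below is a minimal, hypothetical sketch in C using OpenCL 2.0 fine-grained shared virtual memory (SVM), one such interface: the CPU writes into a single allocation, the GPU kernel updates it in place through the same pointer, and the CPU reads the result back without any explicit copies. It assumes an OpenCL 2.0 driver and an APU that report fine-grained SVM support, and most error checking is omitted.

/*
 * Minimal, hypothetical sketch of hUMA-style shared memory, using
 * OpenCL 2.0 fine-grained SVM. Assumes a driver and device that
 * support CL_MEM_SVM_FINE_GRAIN_BUFFER; error checking mostly omitted.
 * Build (for example): cc huma_svm.c -lOpenCL
 */
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <stdio.h>

/* GPU kernel: doubles each element in place through the shared pointer. */
static const char *src =
    "kernel void scale(global int *data) {\n"
    "    data[get_global_id(0)] *= 2;\n"
    "}\n";

int main(void)
{
    enum { N = 16 };
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue =
        clCreateCommandQueueWithProperties(ctx, device, NULL, &err);

    /* One allocation, visible to both the CPU and the GPU. */
    int *data = clSVMAlloc(ctx,
                           CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                           N * sizeof(int), 0);
    if (!data)
        return 1;                       /* fine-grained SVM not available */

    for (int i = 0; i < N; ++i)         /* CPU writes directly...         */
        data[i] = i;

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    clSetKernelArgSVMPointer(k, 0, data); /* ...GPU uses the same pointer */
    size_t global = N;
    clEnqueueNDRangeKernel(queue, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(queue);

    for (int i = 0; i < N; ++i)         /* CPU reads the result, no copy-back */
        printf("%d ", data[i]);
    printf("\n");

    clSVMFree(ctx, data);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}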

Another big breakthrough for this architecture is heterogeneous queuing, or "hQ," which revolutionizes how the processors inside an APU interact with each other to handle computational tasks. Before HSA, the CPU was the master and the GPU the slave; now both can assign and perform tasks, making them equal partners and allowing each workload to be managed by the core best suited to it.
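At the programming level, one place this kind of GPU-initiated dispatch surfaces is OpenCL 2.0 device-side enqueue, where a running GPU kernel can queue follow-up work itself instead of handing control back to the CPU. The fragment below is a hedged sketch of that idea rather than hQ's internal mechanism; it is OpenCL C 2.0 kernel source only, and it assumes the host has created a default on-device queue and built the program for an OpenCL 2.0 device.

/*
 * OpenCL C 2.0 kernel source only (host setup omitted): a sketch of
 * GPU-initiated dispatch, loosely analogous to hQ rather than hQ itself.
 * Assumes the host created a default on-device queue
 * (CL_QUEUE_ON_DEVICE | CL_QUEUE_ON_DEVICE_DEFAULT).
 */
kernel void two_stage(global int *data, int n)
{
    size_t i = get_global_id(0);
    data[i] *= 2;                                      /* first-stage work */

    /* One work-item enqueues the follow-up pass directly from the GPU,
     * with no round trip through the CPU. */
    if (i == 0) {
        enqueue_kernel(get_default_queue(),
                       CLK_ENQUEUE_FLAGS_WAIT_KERNEL,  /* run after this kernel */
                       ndrange_1D(n),
                       ^{ data[get_global_id(0)] += 1; }); /* second stage */
    }
}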

A new term for a new kind of core

Customers have traditionally thought of processors (CPUs) in terms of the number of cores on each one, while the GPU's potential was rarely understood. This CPU-centric approach gave customers a basis on which to compare a PC's processing capability.

Today, however, AMD's APUs perform work differently. By utilizing HSA with hUMA and hQ technologies, AMD's APUs use the cores of both the CPU and the GPU to do work and accelerate today's workloads. The question this new architecture and new approach to processing raises is how to talk about the combined total number of cores (CPU + GPU) on any given APU.

To answer this question, AMD has developed the term "compute core." It describes, consistently and transparently, the total number of cores available to do work or run applications on our next-generation accelerated processors.

A compute core is any core capable of running at least one process in its own context and virtual memory space, independently of other cores.

Based on the above definition, a single compute core can be either a CPU core or GPU core.


To illustrate, we can describe a given APU, such as the AMD A10-7850K, as having 12 compute cores: 4 CPU cores and 8 GPU cores.
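For readers who want to see this from software, the short C sketch below asks the OpenCL runtime how many compute units it reports for the CPU and GPU devices it finds. This is an illustrative, hypothetical example: the reported figures are the runtime's view (typically CPU hardware threads and GPU compute units) and only roughly correspond to the compute-core count used in a product name.

/*
 * Hypothetical sketch: ask the OpenCL runtime how many compute units it
 * sees on the CPU and GPU devices of the first platform found. The numbers
 * are the runtime's view and only roughly map to marketing compute cores.
 * Build (for example): cc count_units.c -lOpenCL
 */
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <stdio.h>

static void report(cl_platform_id platform, cl_device_type type, const char *label)
{
    cl_device_id dev;
    cl_uint units = 0;

    if (clGetDeviceIDs(platform, type, 1, &dev, NULL) != CL_SUCCESS) {
        printf("%s: no device found\n", label);
        return;
    }
    clGetDeviceInfo(dev, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(units), &units, NULL);
    printf("%s: %u compute units\n", label, units);
}

int main(void)
{
    cl_platform_id platform;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        printf("no OpenCL platform found\n");
        return 1;
    }
    report(platform, CL_DEVICE_TYPE_CPU, "CPU device");
    report(platform, CL_DEVICE_TYPE_GPU, "GPU device");
    return 0;
}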

Compute core nomenclature

Beginning in 2014 with the first generation of AMD 7000-series heterogeneous accelerated processing units (APUs), AMD began designating the number of compute cores in the following manner:

AMD A10-7850K APU with AMD Radeon™ R7 graphics

  • 12 compute cores (4 CPU + 8 GPU)

For a deeper dive into compute core characteristics, read our compute cores whitepaper

Learn more about HSA
 