Achieve Continuous, Data-Driven Instance Sizing for Your AMD-Powered Amazon EC2 Instances
Nov 21, 2025
Many AWS customers struggle to keep their licensing costs under control, especially as their workloads grow and evolve. You’ve probably realized just how tricky managing cloud licensing can be. Do it right, and it can save your organization thousands of dollars or more each year. Do it wrong, and you may suffer unnecessary costs or compliance headaches.
The foundation of effective license management on AWS is the Optimization and Licensing Assessment (OLA), an analysis that’s offered through OLA-certified partners such as Evolve Cloud Services. An OLA examines how your cloud resources are used, which third-party software licenses are active, and how your applications depend on one another. The result is a clear, data-driven view of how and where you can optimize both your infrastructure and your licensing.
There’s one big catch, however: an OLA only represents a single moment in time. AWS is constantly improving its software, releasing new instance types, and adjusting its pricing. Optimization, therefore, isn’t something you do once and then forget. It should be a continuous process, especially in a pay-as-you-go cloud model, where the less you use, the less you pay.
AWS’s Built-in Tools Don’t Tell the Whole Story
AWS provides several tools for optimization, including AWS Trusted Advisor and AWS Compute Optimizer, but both can only recommend instances within the same CPU vendor family.
For example, if you're running an m6i.large (Intel-based) instance, AWS Compute Optimizer might suggest another Intel option. However, it won't recommend switching to a more cost-effective AMD equivalent, such as m6a.large, which typically costs about 10% less. This limitation means you could be missing out on half of the potential cost-saving opportunities available within the x86 instance family.
It’s a critical consideration because CPU performance can vary dramatically between vendors and generations. According to AWS, M8a instances deliver up to 30% better performance per vCPU compared to M7a, while M7a offers up to 50% higher performance than M6a. Those are substantial differences and clear proof that not all vCPUs are created equal. In practice, it means that careful instance sizing is critical to getting the best performance for every dollar spent.
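To see why that matters, compound the two "up to" figures AWS quotes. A quick sketch (illustrative only; real gains depend entirely on your workload):

```python
# Compounding AWS's stated "up to" per-vCPU gains across generations (illustrative).
m7a_vs_m6a = 1.50  # M7a: up to 50% higher performance per vCPU than M6a
m8a_vs_m7a = 1.30  # M8a: up to 30% better performance per vCPU than M7a

print(f"M8a vs M6a, per vCPU: up to {m7a_vs_m6a * m8a_vs_m7a:.2f}x")  # ~1.95x
```

By AWS's own up-to numbers, a vCPU on the newest generation can represent nearly twice the throughput of a vCPU from two generations earlier, yet it carries the same unit of licensing.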
The “T-Shirt” Sizing Problem
AWS instances come in fixed “T-shirt sizes” that pair specific ratios of vCPUs to memory:
- M series: 1 vCPU : 4 GiB RAM
- C series: 1 vCPU : 2 GiB RAM
- R series: 1 vCPU : 8 GiB RAM
This rigid structure often doesn’t match what workloads actually need. For example, what if you have a Microsoft Windows Server workload that requires 128 GiB RAM but only 24 vCPUs? Two obvious choices might be the AMD m8a.8xlarge and the Intel m8i.8xlarge, both offering 32 vCPUs and 128 GiB RAM, but in both cases, you’d be paying for more CPU capacity (and licenses) than you need.
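Here is a minimal sketch of that sizing gap (a hypothetical helper; it uses representative M-series sizes and the 1:4 ratio above, and ignores region availability):

```python
# Representative M-series sizes; RAM follows the 1 vCPU : 4 GiB ratio.
M_SERIES_VCPUS = [2, 4, 8, 16, 32, 48, 64, 96, 128, 192]
GIB_PER_VCPU = 4

def smallest_m_size(required_gib: int, required_vcpus: int) -> tuple[int, int]:
    """Return (vCPUs, surplus vCPUs) of the smallest M-series size meeting both needs."""
    for vcpus in M_SERIES_VCPUS:
        if vcpus * GIB_PER_VCPU >= required_gib and vcpus >= required_vcpus:
            return vcpus, vcpus - required_vcpus
    raise ValueError("No single standard M-series size is large enough")

vcpus, surplus = smallest_m_size(required_gib=128, required_vcpus=24)
print(f"Smallest fit: {vcpus} vCPUs ({surplus} more than you need)")  # 32 vCPUs, 8 surplus
```

The memory requirement forces you up to a 32-vCPU size, so eight vCPUs, and the licenses attached to them, sit idle.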
This is where AWS’s Optimize CPU comes in. With the AMD m8a.8xlarge, you can reduce the vCPU count to 24 and save on Windows Server licensing costs without over-provisioning. The Intel equivalent, m8i.8xlarge, on the other hand, caps its Optimize CPU setting at 16 vCPUs, forcing you to move up to a larger instance (like the m8i.12xlarge) to meet the same requirement.
In both cases, optimizing the vCPU count saves you around $3,224 per year by cutting eight Windows Server licenses, but the AMD instance delivers the result at a lower infrastructure cost.
Understanding Optimize CPU
AWS introduced Optimize CPU in May 2018 to help its customers fine-tune their CPU allocations. It allows you to adjust two key settings:
- Threads per core to control whether Simultaneous Multi-Threading (SMT), also known as Hyper-Threading, is enabled.
- Number of CPU cores to define how many physical cores are visible to the operating system.
At launch, Optimize CPU supported only Bring Your Own License (BYOL) workloads, and you had to configure it when the instance was first created. Changing it later required creating a new Amazon Machine Image (AMI) and redeployment.
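For reference, this is what launch-time configuration looks like with boto3, the pattern the original feature required (the AMI ID and instance details are placeholders, and m8a availability varies by region):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# Launch with a reduced core count; CPU options must be set here, at creation time.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="m8a.8xlarge",
    MinCount=1,
    MaxCount=1,
    CpuOptions={
        "CoreCount": 24,       # physical cores exposed to the OS
        "ThreadsPerCore": 1,   # 1 disables SMT/Hyper-Threading (M8a has no SMT anyway)
    },
)
print(response["Instances"][0]["InstanceId"])
```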
By October 2024, AWS had made things much easier: you could adjust CPU settings at any time, as long as the instance was stopped. This reduced both downtime and operational hassle, though it still applied only to BYOL instances.
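With that change, the workflow becomes stop, modify, start. A boto3 sketch, assuming an SDK version that includes the newer ModifyInstanceCpuOptions API (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
instance_id = "i-0123456789abcdef0"  # placeholder

# The instance must be stopped before its CPU options can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Reduce the visible cores and disable SMT, then start the instance again.
ec2.modify_instance_cpu_options(
    InstanceId=instance_id,
    CoreCount=24,
    ThreadsPerCore=1,
)
ec2.start_instances(InstanceIds=[instance_id])
```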
In October 2025, AWS expanded Optimize CPU to support License Included (LI) workloads, such as Windows Server and Microsoft SQL Server. This was a notable change because for the first time, you could fine-tune vCPU counts to cut software licensing costs across both BYOL and LI models.
How Optimize CPU Settings Differ by Instance Type
The table below illustrates the flexibility gap between the latest-generation Intel and AMD instances:
| Instance | Default vCPUs | Threads Per Core | Valid CPU Core Options |
| --- | --- | --- | --- |
| m8a.8xlarge (AMD) | 32 | 1 | 1, 2, 3, 4, 8, 12, 16, 20, 24, 28, 32 |
| m8i.8xlarge (Intel) | 32 | 2 | 1–16 |
Intel’s finer control below 50% of total vCPUs is useful for small workloads, but it’s limiting if you need more than half the cores. The AMD architecture, by contrast, provides complete flexibility up to the full 32 cores, making it easier to hit target CPU counts like 24.
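You don't need to memorize these values; the EC2 DescribeInstanceTypes API reports the default and valid CPU settings for any instance type. A quick boto3 sketch (instance-type availability varies by region):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

resp = ec2.describe_instance_types(InstanceTypes=["m8a.8xlarge", "m8i.8xlarge"])
for itype in resp["InstanceTypes"]:
    vcpu = itype["VCpuInfo"]
    print(
        itype["InstanceType"],
        "default vCPUs:", vcpu["DefaultVCpus"],
        "threads/core:", vcpu["DefaultThreadsPerCore"],
        "valid cores:", vcpu.get("ValidCores"),
    )
```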
Setting threads per core to 1 disables SMT or Hyper-Threading. The default of 2 threads per core is typical for Intel and older AMD instances (like M6a). Note that AWS's newer AMD-powered instances, M7a and M8a, don't have SMT enabled at all. This might sound restrictive, but it's actually an advantage: each vCPU corresponds directly to a full physical core, delivering more predictable performance.
Number of CPU Cores: Matching the Right Core Count
The number of CPU cores setting determines how many physical cores are visible to the OS. For AMD instances, where each vCPU maps directly to a core, this number equals the number of vCPUs. Intel’s Hyper-Threading complicates things slightly. Each core provides two vCPUs, so achieving a specific vCPU count often means disabling SMT or choosing a larger instance size.
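The underlying relationship is simply vCPUs = cores × threads per core, so the core count needed for a given vCPU target follows directly (a tiny illustrative helper):

```python
def cores_for_target(target_vcpus: int, threads_per_core: int) -> int:
    """Cores needed to expose a target number of vCPUs (vCPUs = cores x threads per core)."""
    if target_vcpus % threads_per_core:
        raise ValueError("Target vCPU count must be a multiple of threads per core")
    return target_vcpus // threads_per_core

print(cores_for_target(24, threads_per_core=1))  # AMD, no SMT: 24 full physical cores
print(cores_for_target(24, threads_per_core=2))  # Intel with Hyper-Threading: 12 cores exposing 24 threaded vCPUs
```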
Let’s consider an example. Suppose you need an LI Microsoft Windows Server instance in us-east-2 with 128 GiB RAM and 24 vCPUs, running on shared tenancy.
- AMD Option: An m8a.8xlarge works perfectly. Since SMT is disabled, setting the number of CPU cores to 24 yields exactly 24 vCPUs. The older m7a.8xlarge would also fit.
- Intel Option: Hyper-Threading complicates matters. The m8i.8xlarge maxes out at 16 optimized vCPUs, so to reach 24, you must move up to m8i.12xlarge and disable Hyper-Threading (Threads per core=1). This exposes 24 vCPUs, meeting the requirement but at a higher cost.
Here’s the cost comparison (Compute Only):
- AMD m8a.8xlarge: $1.947520/hour
- Intel m8i.12xlarge: $2.540160/hour
That’s a savings of $0.592640/hour, or roughly $5,191 per year with AMD, even before factoring in licensing. When including Optimize CPU-based licensing savings (~$3,223.68 per year), the AMD instance offers a clear cost advantage.
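The annual figure is just the hourly difference carried across a full year of continuous use; a quick check against the on-demand rates above:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

amd_hourly = 1.947520    # m8a.8xlarge, compute only
intel_hourly = 2.540160  # m8i.12xlarge, compute only

hourly_delta = intel_hourly - amd_hourly
print(f"Hourly savings: ${hourly_delta:.6f}")                      # $0.592640
print(f"Annual savings: ${hourly_delta * HOURS_PER_YEAR:,.2f}")    # ~$5,191.53
```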
Remember: this scenario assumes similar performance between 24 vCPUs on each instance type, but real-world performance depends on your actual workload. Proper optimization involves testing to match performance, not just specifications. For example, benchmark data shows that the latest AMD instances often outperform comparable Intel generations by double-digit percentages in SQL workloads. You should always verify vendor guidance before adjusting your CPU settings for licensed applications.
The Nuances of Licensing and Billing
While Optimize CPU is powerful, it has a few quirks you should keep in mind:
- Minimum vCPUs: Instances launched from Windows Server and SQL Server AMIs must have at least four vCPUs; even if you configure fewer, AWS still applies its default billing.
- Reserved Instances (RIs): RI discounts don’t apply to LI Windows or SQL Server instances when Optimize CPU is enabled. AWS recommends using Savings Plans for more flexibility.
- Enterprise Discount Program (EDP): EDP discounts apply differently depending on your commitment model. With RIs, discounts apply to both compute and license costs. With Savings Plans, they apply only to compute (Linux-equivalent pricing).
Why Reducing vCPU Count Matters
If the infrastructure cost doesn’t change, why bother reducing the number of vCPUs? The answer lies in software licensing. LI Microsoft products such as Windows Server and SQL Server are billed per vCPU. Reducing the vCPU count doesn’t change the EC2 compute rate but directly cuts your licensing costs, sometimes by thousands of dollars annually.
Consider the m8a.8xlarge example again. In us-east-2, LI Windows Server pricing is about 56% higher than Linux for the same instance. Almost half the total cost is licensing. By estimating the difference between Linux and Windows pricing, you can approximate the license cost at about $0.046 per vCPU per hour or roughly $403 per vCPU per year. For a 32-vCPU instance, disabling 12 cores via Optimize CPU saves around $4,836 in licensing costs annually, without changing your infrastructure footprint.
Here are the numbers to look at (you can find AWS’s license pricing details here):
| Instance type / OS | Cost/hour (us-east-2) | License Cost/hour | License Cost/vCPU/hour | Percentage of Instance Cost |
| --- | --- | --- | --- | --- |
| m8a.4xlarge Linux | $0.973760 | N/A | N/A | N/A |
| m8a.8xlarge Linux | $1.947520 | N/A | N/A | N/A |
| m8a.4xlarge MS Windows Server | $1.709760 | $0.7360 | $0.046 | 43.0470% |
| m8a.8xlarge MS Windows Server | $3.419520 | $1.4720 | $0.046 | 43.0470% |
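Those license columns fall out of a simple subtraction: the Windows Server rate minus the Linux rate for the same instance, divided by the vCPU count. A short sketch reproducing the m8a.8xlarge row and the resulting annual figures:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

linux_hourly = 1.947520    # m8a.8xlarge, Linux (compute only)
windows_hourly = 3.419520  # m8a.8xlarge, Windows Server (License Included)
vcpus = 32

license_hourly = windows_hourly - linux_hourly             # $1.4720/hour
license_per_vcpu_hour = license_hourly / vcpus             # $0.0460 per vCPU-hour
license_share = license_hourly / windows_hourly            # ~43.05% of the instance cost
annual_per_vcpu = license_per_vcpu_hour * HOURS_PER_YEAR   # ~$403 per vCPU per year

print(f"License cost: ${license_hourly:.4f}/hour ({license_share:.2%} of instance cost)")
print(f"Per vCPU: ${license_per_vcpu_hour:.4f}/hour, ~${annual_per_vcpu:,.0f}/year")
print(f"Cutting 8 vCPUs saves ~${8 * annual_per_vcpu:,.0f}/year in licensing")  # ~$3,224
```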
A Secret Pro Tip: Disable Threads First
When tuning CPU settings, always disable threads before you reduce cores. Threaded vCPUs (SMT) often deliver 15–30% less performance than dedicated physical cores, so disabling Hyper-Threading first preserves the strongest possible performance from the remaining vCPUs.
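In practice, that rule translates into a simple planning order: set threads per core to 1 first, and only then trim the core count. A simplified sketch of that logic (it ignores each instance type's valid-core list, so treat it as illustrative rather than definitive):

```python
def plan_cpu_options(default_cores: int, default_threads: int, target_vcpus: int) -> dict:
    """Prefer disabling SMT over removing physical cores when reducing vCPUs."""
    if target_vcpus <= default_cores:
        # Step 1: turn SMT off so every remaining vCPU is a full physical core.
        # Step 2: then reduce the core count to hit the target.
        return {"CoreCount": target_vcpus, "ThreadsPerCore": 1}
    # Target exceeds the physical core count, so SMT has to stay on.
    cores = (target_vcpus + default_threads - 1) // default_threads
    return {"CoreCount": cores, "ThreadsPerCore": default_threads}

print(plan_cpu_options(default_cores=16, default_threads=2, target_vcpus=12))
# {'CoreCount': 12, 'ThreadsPerCore': 1} -- 12 full cores instead of 6 cores x 2 threads
```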
Smarter Sizing with AMD and Optimize CPU
Now that AWS's Optimize CPU feature supports LI Windows Server and SQL Server instances, proper instance sizing can unlock significant cost savings. Combined with the superior performance-per-dollar of the latest AMD EPYC™-powered EC2 instances (M7a and M8a), it gives you a high-fidelity approach to right-sizing your cloud environment. By pairing AMD-powered instances with Optimize CPU, you can fine-tune performance, potentially minimize licensing costs, and maximize ROI, all without sacrificing compute efficiency.
Don’t Go It Alone
Whether you’re modernizing, downsizing, or simply looking for cost efficiency, AMD offers tools and expertise to guide the way. The AMD EPYC Advisory Suite can help you identify the right EC2 instance for your workload, and AMD works closely with AWS partners to deliver tailored solutions. Many organizations combine multiple strategies across different workloads to achieve maximum ROI. If you need help deciding what best fits your architecture, AMD and its partners are ready to assist. To offer feedback on this post or to suggest future topics, feel free to reach out at AWS@AMD.com.