Dual Socket Performance in a Single Socket

AMD’s new EPYC (Naples) 7000 series processors can meet or exceed the computing power you need at a lower cost. With high core counts, competitive clock speeds, and low power draw, AMD EPYC processors offer a competitive edge when it comes to performance. Based on AMD testing, the 7000 series offers up to a 37% performance uplift over the last-generation Opteron 6300 series, and supports DDR4 memory at 2666MHz and PCIe 3.0. For high core counts and large memory capacity at a lower price, consider an AMD solution from Aspen Systems.



AMD EPYC (Naples)

Bringing hardware innovation back to the datacenter

The AMD EPYC system-on-chip delivers real innovation to better address the needs of existing and emerging data center workloads. With industry-leading core count, memory bandwidth, and unprecedented I/O, EPYC sets a new standard for performance, scalability, and balance in the modern datacenter. Aspen Systems are the processing experts; contact us and design your system today.

AMD EPYC (Naples) vs AMD Opteron

AMD spent five years developing its new server processors. The new EPYC processor, based on the Zen architecture, brings a set of new features compared to its predecessor, the Opteron. Here are some of the key changes:

CPU                      | EPYC                        | Opteron
Process                  | 14nm                        | 32nm
Cores                    | 8-32 cores / 16-64 threads  | 1-16 cores/threads
Clock rate (base)        | 2.0GHz - 2.4GHz             | 1.4GHz - 3.3GHz
Clock rate (turbo/boost) | Up to 3.2GHz                | Up to 3.8GHz
Cache (L2)               | 8MB                         | Up to 16MB
Cache (L3)               | 32-64MB per die             | Up to 2 x 8MB per die
Memory channels          | Eight at 2400-2666MHz DDR4  | Four at 1333-1600MHz DDR3
DIMMs per channel        | Up to 2                     | Up to 3
TDP                      | Up to 180W                  | Up to 140W
Multi-processing         | 1 or 2 sockets              | Up to 4 sockets
CPU interconnect         | Infinity Fabric             | HyperTransport

While these features are an improvement, there are some Opteron features we still long for, such as support for multi-processor (MP, 4+ CPU) boards and higher peak clock rates. The greatest improvements are the larger L3 cache, the memory bandwidth from twice as many memory channels, and the Infinity Fabric. It’s easy to look at a table like the one above and conclude that the two generations of CPUs look rather comparable. The real test is how the new EPYC systems run your applications. Interested in finding out? The Aspen team is ready to help.

AMD EPYC Standalone


The AMD EPYC (Naples) 7601 system-on-chip (SoC) races past the best Intel Xeon E5 v4 processor by up to 25% in integer and up to 59% in floating-point performance, setting a total of four new world records for standalone CPUs on the SPEC CPU 2006 benchmarks.


AMD EPYC x86 System


AMD EPYC (Naples) enables no-compromise 1-socket servers with up to 32 cores, 8 memory channels, and 128 PCIe 3.0 lanes, delivering capabilities and performance previously available only in 2-socket architectures.




Industry’s First Embedded x86 Silicon-Level Data Security

A dedicated security processor in the EPYC SoC minimizes potential attack surfaces and protects your software and data as it is booted, as it runs, and as it moves from server to server.



AMD EPYC 64 Cores

With up to 64 physical cores in a dual-processor server, you get over 12% more CPU cores than the latest Intel Xeon processors. Additionally, you get up to 25% better integer performance on the SPECint_rate2006 benchmark for 2-socket servers compared to the top-of-the-line Intel Xeon processor, the E5-2699A v4. Need floating point? AMD CPUs can perform up to 59% better on the SPECfp_rate2006 benchmark for 2-socket servers compared to the E5-2699A v4. Need more threads? With a dual-socket AMD system, each core runs two threads via simultaneous multithreading, giving you up to 128 total compute threads.
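The core and thread counts above are simple multiplication; a minimal sketch, using the 32-core EPYC 7601 figures quoted in this article:

```python
# Thread count for a dual-socket, 32-core EPYC system with SMT enabled
sockets = 2
cores_per_socket = 32   # EPYC 7601
threads_per_core = 2    # simultaneous multithreading (SMT)

total_cores = sockets * cores_per_socket        # 64 physical cores
total_threads = total_cores * threads_per_core  # 128 compute threads
print(total_cores, total_threads)
```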

The Infinity Fabric is AMD’s new socket-to-socket interconnect. It succeeds AMD’s HyperTransport with a software-defined interconnect that allows for better multi-die scalability and socket-to-socket connectivity. What does this all mean, and how does it differ from Intel’s CPUs? In summary, while Intel tries to put everything on chip and use its QPI links as infrequently as possible, AMD uses its Infinity Fabric for everything it can, yielding a lower-cost, more highly connected way of communicating across components in a server.


AMD EPYC 128 Compute Threads

If you’re looking for a lot of PCIe lanes, a single-socket AMD EPYC system gives you a massive 128 lanes of PCIe 3.0. This is perfect for GPU computing, where you can attach eight (8) x16 PCIe GPU cards to a single CPU without the need for a PCIe switch or expander. If you need a massive NVMe server, this is also a good way to go. One rule to remember: adding a second CPU does not give you 256 PCIe lanes; the current Infinity Fabric architecture repurposes half of each socket’s lanes for the socket-to-socket links, so you get a total of 128 lanes no matter how many CPUs are on the board.
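How the lane budget works out can be sketched as follows. The figure of 64 lanes per socket carrying the inter-socket Infinity Fabric links in a 2-socket configuration is an assumption derived from the 128-lane-total rule stated above:

```python
# PCIe 3.0 lane budget on EPYC (Naples)
lanes_per_socket = 128

# Single socket: all 128 lanes are available to devices,
# enough for eight x16 GPUs with no PCIe switch or expander.
gpus_x16 = lanes_per_socket // 16  # 8 cards

# Dual socket (assumption): 64 lanes per socket carry the
# socket-to-socket Infinity Fabric links, so the usable
# device-facing total stays at 128, not 256.
fabric_lanes_per_socket = 64
usable_2p = 2 * (lanes_per_socket - fabric_lanes_per_socket)  # 128
print(gpus_x16, usable_2p)
```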

While Intel’s current servers have only four or six memory channels per socket, AMD’s EPYC CPUs come with eight memory channels per socket. You can also load two memory DIMMs per channel, for a total of 32 memory DIMMs in a 2-socket server. With speeds up to 2666MHz, you can easily reach 2TB of RAM in a dual-socket server using 64GB DIMMs; with 128GB DIMMs, you can reach a massive 4TB. Not only do you get eight memory channels per socket, most EPYC CPUs also come with 64MB of L3 cache. This gives AMD a better balance of resources to make your real-world workloads perform better: more cores, more memory, more memory bandwidth, and more I/O capacity.
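The capacity figures follow directly from sockets times channels times DIMMs per channel; a quick sketch of the arithmetic:

```python
# Maximum RAM in a 2-socket EPYC server
sockets, channels, dimms_per_channel = 2, 8, 2
total_dimms = sockets * channels * dimms_per_channel  # 32 DIMM slots

capacity_64gb = total_dimms * 64    # 2048 GB = 2 TB with 64GB DIMMs
capacity_128gb = total_dimms * 128  # 4096 GB = 4 TB with 128GB DIMMs
print(total_dimms, capacity_64gb, capacity_128gb)
```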


AMD EPYC (Naples) Capable Servers

Supermicro 1U Ultra Server 1123US-TR4

1U Ultra Server

Up to 10 Hot-Swap SAS or SATA 2.5″ Hard Drive Bays (2x optional NVMe)

Shop 1U 1123US-TR4

Supermicro 2U Ultra Server 2023US-TR4

2U Ultra Server

Dual Socket AMD EPYC 7000 Series Processors

Shop 2U 2023US-TR4

AMD EPYC (Naples) Specifications

Model | Cores | Threads | Base Freq (GHz) | All-Core Boost (GHz) | Max Boost (GHz) | TDP (W) | L3 Cache (MB) | DDR Channels | Max DDR Freq (MT/s) | 2-Socket Theoretical Memory Bandwidth (GB/s) | PCIe | Socket | Pkg | Workload Affinity
7601  | 32 | 64 | 2.20 | 2.70 | 3.20 | 180     | 64 | 8 | 2666      | 341     | x128 | 2P/1P | SP3 | DBMS and Analytics, Capacity HPC
7551  | 32 | 64 | 2.00 | 2.55 | 3.00 | 180     | 64 | 8 | 2666      | 341     | x128 | 2P/1P | SP3 | VM Dense, DBMS and Analytics, Capacity HPC
7551P | 32 | 64 | 2.00 | 2.55 | 3.00 | 180     | 64 | 8 | 2666      | 341     | x128 | 1P    | SP3 | -
7501  | 32 | 64 | 2.00 | 2.60 | 3.00 | 155/170 | 64 | 8 | 2400/2666 | 307/341 | x128 | 2P/1P | SP3 | VM Dense, DBMS and Analytics, Web Serving
7451  | 24 | 48 | 2.30 | 2.90 | 3.20 | 180     | 64 | 8 | 2666      | 341     | x128 | 2P/1P | SP3 | General Purpose
7401  | 24 | 48 | 2.00 | 2.80 | 3.00 | 155/170 | 64 | 8 | 2400/2666 | 307/341 | x128 | 2P/1P | SP3 | General Purpose, GPU/FPGA Accelerated, Storage
7401P | 24 | 48 | 2.00 | 2.80 | 3.00 | 155/170 | 64 | 8 | 2400/2666 | 307/341 | x128 | 1P    | SP3 | -
7351  | 16 | 32 | 2.40 | 2.90 | 2.90 | 155/170 | 64 | 8 | 2400/2666 | 307/341 | x128 | 2P/1P | SP3 | General Purpose, GPU/FPGA Accelerated, Storage
7351P | 16 | 32 | 2.40 | 2.90 | 2.90 | 155/170 | 64 | 8 | 2400/2666 | 307/341 | x128 | 1P    | SP3 | -
7301  | 16 | 32 | 2.20 | 2.70 | 2.70 | 155/170 | 64 | 8 | 2400/2666 | 307/341 | x128 | 2P/1P | SP3 | General Purpose, License Cost Optimized
7281  | 16 | 32 | 2.10 | 2.70 | 2.70 | 155/170 | 32 | 8 | 2400/2666 | 307/341 | x128 | 2P/1P | SP3 | General Purpose, License Cost Optimized
7251  | 8  | 16 | 2.10 | 2.90 | 2.90 | 120     | 32 | 8 | 2400      | 307     | x128 | 2P/1P | SP3 | License Cost Optimized
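The 2-socket theoretical memory bandwidth column in the table above follows directly from the channel count and DDR transfer rate; a minimal sketch, assuming the standard 8-byte (64-bit) DDR4 channel width:

```python
def theoretical_bw_gb_s(sockets: int, channels: int, mt_per_s: int) -> float:
    """Peak DDR4 bandwidth: transfers per second times the 8-byte bus width per channel."""
    return sockets * channels * mt_per_s * 1e6 * 8 / 1e9

print(round(theoretical_bw_gb_s(2, 8, 2666)))  # 341, matching the DDR4-2666 rows
print(round(theoretical_bw_gb_s(2, 8, 2400)))  # 307, matching the DDR4-2400 rows
```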