Cooper Lake Xeon


Intel®’s 3rd Gen Xeon® SP Family of Server CPUs with DL Boost

Intel® Xeon® Scalable Processors

The new 3rd generation of Intel® Xeon® Scalable processors for servers provides the next leap forward in data center agility and scalability, raising the bar for platform convergence and capabilities across compute, storage, memory, network, and security to meet the ever-growing demands of High Performance Computing and AI.

Intel Cooper Lake Capable Server Blades

The new Intel Cooper Lake processor is purpose-built and performance-optimized for use in High Performance Computing and Artificial Intelligence applications. Contact an Aspen Systems engineer for assistance.

Gigabyte R292-4S0 Rack-mount Server

  • 2U Depth 33.78″
  • Quad Socket 3rd Gen Intel® Xeon® Scalable Series Processors (up to 250W TDP)
  • 48x DDR4 DIMM slots
  • Intel® Optane Persistent Memory 200 Series
  • 10x 2.5″ U.2 NVMe/SAS3/SATA3 Hot-Swap Drive Bays
  • 6x PCI-E 3.0 x16 (FHFL)
  • Supports up to 4x FHFL double-slot GPUs
  • Dual-Port 10GbE LAN
  • 3200W redundant power supply

 Chat now, Call toll free 1-800-992-9242, Email an Aspen Systems engineer to order, or Request a quote

Supermicro SuperServer SYS-240P-TNRT

  • 2U Depth 31.6″
  • Quad Socket 3rd Gen Intel® Xeon® Scalable Series Processors (up to 250W TDP)
  • 48x DDR4 DIMM slots
  • Intel® Optane Persistent Memory 200 Series
  • 24x 2.5″ U.2 NVMe/SAS3/SATA3 Hot-Swap Drive Bays
  • 2x PCI-E 3.0 x16 (FHFL)
  • 2x PCI-E 3.0 x16 (LP)
  • 6x PCI-E 3.0 x8 (LP)
  • Dual-Port 10GbE LAN
  • 2000W redundant power supply

 Chat now, Call toll free 1-800-992-9242, Email an Aspen Systems engineer to order, or Request a quote

Gigabyte R292-4S1 Rack-mount Server

  • 2U Depth 33.78″
  • Quad Socket 3rd Gen Intel® Xeon® Scalable Series Processors (up to 250W TDP)
  • 48x DDR4 DIMM slots
  • Intel® Optane Persistent Memory 200 Series
  • 10x 2.5″ U.2 NVMe/SAS3/SATA3 Hot-Swap Drive Bays
  • 8x PCI-E 3.0 x16 (FHFL)
  • Supports up to 8x Add-on cards
  • Dual-Port 10GbE LAN
  • 3200W redundant power supply

 Chat now, Call toll free 1-800-992-9242, Email an Aspen Systems engineer to order, or Request a quote


Accelerating the Development and Use of Artificial Intelligence



Xeon Platinum 8300 processors provide considerably more performance for artificial intelligence and analytics workloads than previous generations:


  • 1.87x more AI inference for image classification
  • 1.7x more AI training for natural language processing
  • 1.9x more AI inference for natural language processing

Intel® Xeon® 3rd Gen Processor Upgrades

The Platinum 8300 processors are designed specifically for advanced analytics, artificial intelligence, and high-density infrastructure, with up to 28 cores per processor and 6 memory channels per processor at up to 3200 MT/s (1 DPC). They feature Intel® Deep Learning Boost with BFLOAT16 and VNNI for enhanced AI inference acceleration and performance.

Intel® Speed Select Technology

Intel® SST enables the optimization of processing resources to enhance workload performance, increase utilization, and optimize TCO.

Intel® Optane persistent memory 200 series

Delivers an average of 25% higher memory bandwidth compared to the first generation.

Intel® Infrastructure Management Technologies

Intel® RDT and VT-x enable greater data center resource efficiency, utilization, and security.

Application Device Queues (ADQ)

Provides application-specific, uncontended queues so traffic flows smoothly and is not shared with other applications.

Intel® Stratix 10 NX FPGA

Delivers exceptional performance for workloads such as natural language processing and financial fraud detection.

 

Intel Deep Learning Boost


Today’s scientific discoveries are fueled by innovative algorithms, new sources and volumes of data, and advances in compute and storage. Machine learning, deep learning, and AI converge the capabilities of massive compute with the flood of data to drive next-generation applications, such as autonomous systems and self-driving vehicles.

3rd Gen Intel® Xeon® Scalable processors are built specifically for the flexibility to run complex AI workloads on the same hardware as your existing workloads. Enhanced Intel Deep Learning Boost, with the industry’s first x86 support of the Brain Floating Point 16-bit (bfloat16) numeric format and Vector Neural Network Instructions (VNNI), brings enhanced artificial intelligence inference and training performance, with up to 1.93X more AI training performance and 1.87X more AI inference performance for image classification vs. the prior generation. New bfloat16 processing support benefits AI training workloads in healthcare, financial services, and retail where throughput and accuracy are key criteria, such as vision, natural language processing (NLP), and reinforcement learning (RL).

Intel Deep Learning Boost with bfloat16 delivers 1.7X more AI training performance for natural language processing vs. the prior generation. 3rd Gen Intel® Xeon® Scalable processors help to deliver AI readiness across the data center, to the edge and back.

Intel® Xeon Scalable Processor Optimization

BFLOAT16
BFLOAT16 has been designed specifically for Machine Learning and Deep Learning applications. BFLOAT16 enables wider, deeper, and larger training, leading to greater accuracy and performance improvements for large models.

Intel® Xeon Scalable Processor Acceleration

VNNI
‘Vector Neural Network Instructions’ is an x86 instruction-set extension designed specifically for convolutional neural networks and INT8 inference performance improvements. VNNI is only available in the latest Intel® CPUs.

Intel® Xeon Scalable Processor Memory

Optane 200 Series
The Intel® Optane persistent memory 200 series is a groundbreaking, workload-optimized technology delivered alongside the all-new 3rd Gen Xeon Scalable processors, and it delivers 25% higher memory bandwidth than the first generation.

Brain Floating-Point Format with 16 Bits (BFLOAT16)

BFLOAT16 is a new number-encoding format designed specifically for accelerating AI workloads. BFLOAT16 allows for more intensive computation by truncating the total number of bits used from 32 to 16. The previously used Floating Point 32 (FP32) format provides high precision with 23 bits in the fraction/mantissa field, 8 bits in the exponent field, and a single sign bit.


Floating Point 32

BFLOAT16 maintains the single bit for the sign and the 8 bits for the exponent field while reducing the number of bits in the Fraction/Mantissa field from 23 to 7.


BFLOAT16

Many AI functions do not require the high precision of FP32’s 23-bit mantissa. By reducing the mantissa to 7 bits, BFLOAT16 can represent numbers with the same range as FP32 while sacrificing precision for speed, making BFLOAT16 well suited to high-intensity AI workloads. Twice the throughput per cycle can be achieved with BFLOAT16 compared to FP32.
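To make the bit layout concrete, here is a minimal Python sketch (not vendor code) that truncates an FP32 value to BFLOAT16 by keeping the sign bit, the 8 exponent bits, and the top 7 mantissa bits. Note that hardware conversion typically uses round-to-nearest-even rather than the plain truncation shown here.

```python
import struct

def fp32_to_bfloat16_bits(x: float) -> int:
    """Truncate an FP32 value to BFLOAT16: keep the sign bit,
    the 8 exponent bits, and the top 7 mantissa bits."""
    fp32_bits = struct.unpack("<I", struct.pack("<f", x))[0]  # raw 32-bit pattern
    return fp32_bits >> 16                                    # drop the low 16 mantissa bits

def bfloat16_bits_to_fp32(b: int) -> float:
    """Re-expand a BFLOAT16 bit pattern to FP32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

if __name__ == "__main__":
    # Same dynamic range as FP32, but only ~2-3 significant decimal digits survive.
    for value in [3.141592653589793, 0.1, 65504.0, 1e30]:
        bf16 = fp32_to_bfloat16_bits(value)
        print(f"{value:>12.7g} -> bf16 0x{bf16:04x} -> {bfloat16_bits_to_fp32(bf16):.7g}")
```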

VNNI – Vector Neural Network Instructions – Extending Intel AVX-512 to Accelerate Inference

Intel VNNI combines three Intel AVX-512 instructions into a single instruction, optimizing compute resources, improving cache utilization, and preventing potential bandwidth bottlenecks. VNNI allows for up to 11 times the deep learning throughput of AVX-512 alone on previous Xeon generations. VNNI accelerates low-precision integer operations, improving AI and deep learning performance for image classification, object detection, speech recognition, and language translation.
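To illustrate the operation that VNNI fuses, the Python sketch below (an illustrative model, not Intel's implementation) emulates a VPDPBUSD-style dot product: each 32-bit accumulator lane receives the sum of four unsigned-8-bit × signed-8-bit products in a single step, where pre-VNNI AVX-512 code needed separate multiply, pairwise-add, and accumulate instructions. The function and array names are ours, and the sketch ignores the saturating variant of the hardware instruction.

```python
import numpy as np

def vpdpbusd_like(acc: np.ndarray, a_u8: np.ndarray, b_s8: np.ndarray) -> np.ndarray:
    """Emulate one group of lanes of a VNNI dot-product instruction:
    for every 32-bit accumulator lane, multiply 4 unsigned 8-bit values
    by 4 signed 8-bit values and add the sum of products to the lane.
    acc:  int32 array, shape (n,)
    a_u8: uint8 array, shape (n, 4)  -- e.g. quantized activations
    b_s8: int8  array, shape (n, 4)  -- e.g. quantized weights
    """
    products = a_u8.astype(np.int32) * b_s8.astype(np.int32)      # widen, then multiply
    return acc + products.sum(axis=1, dtype=np.int32)             # accumulate into int32

if __name__ == "__main__":
    acc = np.zeros(2, dtype=np.int32)
    a = np.array([[255, 1, 2, 3], [10, 20, 30, 40]], dtype=np.uint8)
    b = np.array([[-1, 2, 3, 4], [1, 1, 1, 1]], dtype=np.int8)
    # First lane: 255*(-1) + 1*2 + 2*3 + 3*4 = -235; second lane: 10+20+30+40 = 100
    print(vpdpbusd_like(acc, a, b))
```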


VNNI Workloads

Optane™ Series 200 Persistent Memory
A Perfect Match for Cooper Lake Processors

Optane Series 200

One of the greatest benefits of the Cooper Lake architecture is its ability to utilize Optane Persistent Memory Series 200, offering up to 4.5 TB of memory per socket. Optane improves performance, cost savings, and productivity by addressing bandwidth bottlenecks, reducing TCO, and increasing memory capacity.
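As a rough sketch of how software can use persistent memory in App Direct mode, the Python snippet below memory-maps a file on a DAX-mounted persistent-memory filesystem and reads and writes it through ordinary memory access. The mount point /mnt/pmem and the file name are assumptions for illustration only; production code would normally use a purpose-built library such as PMDK rather than raw mmap.

```python
import mmap
import os

PMEM_PATH = "/mnt/pmem/example.dat"   # assumed DAX-mounted persistent-memory filesystem
SIZE = 4096                           # one page is enough for the demonstration

# Create (or reuse) a file backed by persistent memory and map it into the address space.
fd = os.open(PMEM_PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

# Stores now go to (persistent) memory through ordinary byte access on the mapping.
buf[0:13] = b"hello, optane"
buf.flush()                           # flush the mapping so the write is durable
print(bytes(buf[0:13]))

buf.close()
os.close(fd)
```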

 

Cooper Lake Scalable Processor (SP) family


SKU Level Cores Threads Cache TDP Base Freq. Turbo Freq.
6328H Xeon Gold 16 32 22 MB 165 W 2.80 GHz 4.30 GHz
8380HL Xeon Platinum 28 56 38.5 MB 250 W 2.90 GHz 4.30 GHz
8360H Xeon Platinum 24 48 33 MB 225 W 3.00 GHz 4.20 GHz
6328HL Xeon Gold 16 32 22 MB 165 W 2.80 GHz 4.30 GHz
8360HL Xeon Platinum 24 48 33 MB 225 W 3.00 GHz 4.20 GHz
5320H Xeon Gold 20 40 27.5 MB 150 W 2.40 GHz 4.20 GHz
6330H Xeon Gold 24 48 33 MB 150 W 2.00 GHz 3.70 GHz
5318H Xeon Gold 18 36 24.75 MB 150 W 2.50 GHz 3.80 GHz
8353H Xeon Platinum 18 36 24.75 MB 150 W 2.50 GHz 3.80 GHz
8380H Xeon Platinum 28 56 38.5 MB 250 W 2.90 GHz 4.30 GHz
8354H Xeon Platinum 18 36 24.75 MB 205 W 3.10 GHz 4.30 GHz
6348H Xeon Gold 24 48 33 MB 165 W 2.30 GHz 4.20 GHz
8376H Xeon Platinum 28 56 38.5 MB 205 W 2.60 GHz 4.30 GHz
8356H Xeon Platinum 8 16 35.75 MB 190 W 3.90 GHz 4.40 GHz
8376HL Xeon Platinum 28 56 38.5 MB 205 W 2.60 GHz 4.30 GHz
 

Cooper Lake (8300 series) processors:

Cooper Lake

A New Architecture Designed to Accelerate HPC and AI Training and Inference

A performance-optimized multi-chip package with 6 high-speed UPI interconnects. It supports both 4-socket and 8-socket configurations, with up to 28 cores, 56 threads, and 6 memory channels per socket, and is architected for a range of demanding workloads including HPC and AI. It is equipped with Intel Deep Learning Boost, consisting of the BFLOAT16 and VNNI instruction sets, and supports Intel Optane Series 200 persistent memory. Cooper Lake processors are the next generation of the Intel Xeon processor family, ready to facilitate all of your HPC and AI needs. Contact an Aspen Systems engineer for facilities requirements such as power and cooling.


Name # cores Base Freq. Turbo Freq. L3 Cache UPI Links Power
8376HL 28 2.60 GHz 4.30 GHz 38.5 MB 6 205 W
8356H 8 3.90 GHz 4.40 GHz 35.75 MB 6 190 W
8376H 28 2.60 GHz 4.30 GHz 38.5 MB 6 205 W
8354H 18 3.10 GHz 4.30 GHz 24.75 MB 6 205 W
8380H 28 2.90 GHz 4.30 GHz 38.5 MB 6 250 W
8353H 18 2.50 GHz 3.80 GHz 24.75 MB 6 150 W
8360HL 24 3.00 GHz 4.20 GHz 33 MB 6 225 W
8360H 24 3.00 GHz 4.20 GHz 33 MB 6 225 W
8380HL 28 2.90 GHz 4.30 GHz 38.5 MB 6 250 W
 

Aspen Systems is an Intel Certified Platinum Technology Provider

Aspen Systems Custom HPC Racks

Aspen Systems provides more than just advanced technology; we provide elegant solutions. There are many advantages to working with us. We guide you through the process of updating your IT infrastructure, and our engineers can deploy new services, storage, networking, and compute power that scales fast. Our team of industry leaders delivers a full spectrum of custom solutions, helping you create a system from the ground up that gives you quick access to your data, even when dealing with many data silos. At Aspen, we work with you directly, on-site if needed, to put in place the perfect IT modernization strategy for your organization. Our team of industry experts will show you how to reduce your maintenance costs and achieve better total cost of ownership.

Our trained engineers will help determine facility requirements based on the chosen solution, everything from ensuring power and cooling requirements are met, to installing beautiful network cabling and ensuring easy access to mission-critical equipment. If your requirements demand it, our engineers will ensure your data is encrypted at rest, in-flight, and in memory to deliver maximum security. Whatever the case, Aspen Systems will help you navigate all of your options and guide you to an optimized solution, and our experts can be available through all phases of deployment including planning, architecture, manufacturing, installation, and configuration – a single, all-encompassing partner in HPC for your data center.