The new 3rd generation of Intel® Xeon® Scalable processors for servers provides the next leap forward in data center agility and scalability, raising the bar for platform convergence and capability across compute, storage, memory, network, and security to meet the ever-growing demands of high-performance computing (HPC) and AI.
The new Intel Cooper Lake processors are purpose-built and performance-optimized for high-performance computing and artificial intelligence applications. Contact an Aspen Systems engineer for assistance.
The Platinum 8300 processors are designed specifically for advanced analytics, artificial intelligence, and high-density infrastructure. They offer up to 28 cores per Intel® Xeon® Scalable processor and six memory channels per processor at up to 3200 MT/s (1 DPC), and they feature Intel® Deep Learning Boost with BFLOAT16 and VNNI for accelerated AI inference performance.
Intel® Speed Select Technology (Intel® SST) enables the optimization of processing resources to enhance workload performance, increase utilization, and optimize TCO.
Intel® Optane™ persistent memory 200 series delivers an average of 25% higher memory bandwidth than the first generation.
Intel® Resource Director Technology (Intel® RDT) and Intel® VT-x enable greater data center resource efficiency, utilization, and security.
Offers application-specific, uncontended queues so traffic flows smoothly without being shared with other applications.
Delivers exceptional performance for workloads such as natural language processing and financial fraud detection.
Today’s scientific discoveries are fueled by innovative algorithms, new sources and volumes of data, and advances in compute and storage. Machine learning, deep learning, and AI converge the capabilities of massive compute with the flood of data to drive next-generation applications, such as autonomous systems and self-driving vehicles.
3rd Gen Intel® Xeon® Scalable processors are built specifically for the flexibility to run complex AI workloads on the same hardware as your existing workloads. Enhanced Intel Deep Learning Boost, with the industry's first x86 support for the Brain Floating Point 16-bit (bfloat16) numeric format and Vector Neural Network Instructions (VNNI), brings enhanced artificial intelligence inference and training performance, with up to 1.93X more AI training performance and 1.87X more AI inference performance for image classification vs. the prior generation. New bfloat16 processing support benefits AI training workloads in healthcare, financial services, and retail where throughput and accuracy are key criteria, such as vision, natural language processing (NLP), and reinforcement learning (RL).
Intel Deep Learning Boost with bfloat16 delivers 1.7X more AI training performance for natural language processing vs. the prior generation. 3rd Gen Intel® Xeon® Scalable processors help to deliver AI readiness across the data center, to the edge and back.
BFLOAT16
BFLOAT16 has been designed specifically for machine learning and deep learning applications. BFLOAT16 enables wider and deeper networks to be trained, leading to greater accuracy and performance for large models.
VNNI
Vector Neural Network Instructions (VNNI) is an x86 instruction-set extension designed specifically for convolutional neural networks and for improving INT8 inference performance. VNNI is available only in recent Intel® CPUs.
Optane 200 Series
The Intel® Optane™ persistent memory 200 series is a groundbreaking, workload-optimized technology delivered alongside the new 3rd Gen Intel® Xeon® Scalable processors, providing 25% higher memory bandwidth than the first generation.
BFLOAT16 is a new number encoding format designed specifically for accelerating AI workloads. BFLOAT16 allows for more intensive computations by truncating the number of total bits used from 32 to 16. The previously used Floating Point 32 (FP32) would provide high precision with 23 bits in the Fraction/Mantissa field, 8 bits in the exponent field, and a single bit for the sign bit.
BFLOAT16 maintains the single bit for the sign and the 8 bits for the exponent field while reducing the number of bits in the Fraction/Mantissa field from 23 to 7.
Many AI functions do not require the high precision of FP32's 23-bit mantissa. By reducing the mantissa to 7 bits, BFLOAT16 represents numbers as large as FP32 can while trading precision for speed, making BFLOAT16 well suited to high-intensity AI workloads. Twice the throughput per cycle can be achieved with BFLOAT16 compared to FP32.
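The truncation described above (keeping the sign bit, all 8 exponent bits, and only the top 7 mantissa bits) can be sketched in a few lines of Python. This is a minimal illustration of the encoding, not an Intel API; the function names are our own.

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Truncate an FP32 value to bfloat16 by keeping the top 16 bits
    (1 sign bit, 8 exponent bits, 7 mantissa bits)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # raw FP32 bit pattern
    return bits >> 16  # drop the low 16 mantissa bits

def bf16_bits_to_fp32(b: int) -> float:
    """Re-expand bfloat16 bits to FP32 by zero-filling the dropped bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# Pi keeps its sign and exponent but loses mantissa precision:
pi_bf16 = bf16_bits_to_fp32(fp32_to_bf16_bits(3.14159265))  # 3.140625
```

Because the exponent field is untouched, the dynamic range matches FP32 exactly; only the fractional precision is reduced.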
Intel VNNI combines three instructions from Intel AVX-512 into a single instruction to optimize compute resources, improve cache utilization, and prevent potential bandwidth bottlenecks. VNNI allows up to 11 times the deep learning throughput of the AVX-512-only previous Xeon generations. VNNI accelerates low-precision integer operations, improving AI and deep learning performance for image classification, object detection, speech recognition, and language translation.
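As a rough illustration of what the fused instruction computes, below is a minimal Python sketch of one 32-bit lane of VPDPBUSD, the VNNI dot-product instruction. The helper name is ours; real code would use compiler intrinsics or a library such as oneDNN rather than emulating the lane in Python.

```python
def vpdpbusd_lane(acc: int, a_bytes, b_bytes) -> int:
    """Emulate one 32-bit lane of VPDPBUSD: multiply four unsigned 8-bit
    activations by four signed 8-bit weights, sum the products, and
    accumulate into a signed 32-bit value."""
    assert len(a_bytes) == len(b_bytes) == 4
    total = acc + sum(u * s for u, s in zip(a_bytes, b_bytes))
    # wrap to signed 32-bit, like the hardware accumulator
    total &= 0xFFFFFFFF
    return total - 0x100000000 if total >= 0x80000000 else total

# Pre-VNNI, this took three instructions (VPMADDUBSW, VPMADDWD, VPADDD):
acc = vpdpbusd_lane(0, [1, 2, 3, 4], [10, -20, 30, -40])
# 1*10 + 2*(-20) + 3*30 + 4*(-40) = -100
```

Fusing the multiply, widen, and accumulate steps into one instruction is what lets INT8 inference sustain four 8-bit multiply-accumulates per 32-bit lane per cycle.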
One of the greatest benefits of the Cooper Lake architecture is its ability to utilize Optane persistent memory 200 series, offering up to 4.5 TB of memory per socket. Optane improves performance, cost savings, and productivity by addressing bandwidth bottlenecks, reducing TCO, and increasing memory capacity.
Click table heading to sort by that column.
SKU | Level | Cores | Threads | Cache | TDP | Base Freq. | Turbo Freq. |
---|---|---|---|---|---|---|---|
6328H | Xeon Gold | 16 | 32 | 22 MB | 165 W | 2.80 GHz | 4.30 GHz |
8380HL | Xeon Platinum | 28 | 56 | 38.5 MB | 250 W | 2.90 GHz | 4.30 GHz |
8360H | Xeon Platinum | 24 | 48 | 33 MB | 225 W | 3.00 GHz | 4.20 GHz |
6328HL | Xeon Gold | 16 | 32 | 22 MB | 165 W | 2.80 GHz | 4.30 GHz |
8360HL | Xeon Platinum | 24 | 48 | 33 MB | 225 W | 3.00 GHz | 4.20 GHz |
5320H | Xeon Gold | 20 | 40 | 27.5 MB | 150 W | 2.40 GHz | 4.20 GHz |
6330H | Xeon Gold | 24 | 48 | 33 MB | 150 W | 2.00 GHz | 3.70 GHz |
5318H | Xeon Gold | 18 | 36 | 24.75 MB | 150 W | 2.50 GHz | 3.80 GHz |
8353H | Xeon Platinum | 18 | 36 | 24.75 MB | 150 W | 2.50 GHz | 3.80 GHz |
8380H | Xeon Platinum | 28 | 56 | 38.5 MB | 250 W | 2.90 GHz | 4.30 GHz |
8354H | Xeon Platinum | 18 | 36 | 24.75 MB | 205 W | 3.10 GHz | 4.30 GHz |
6348H | Xeon Gold | 24 | 48 | 33 MB | 165 W | 2.30 GHz | 4.20 GHz |
8376H | Xeon Platinum | 28 | 56 | 38.5 MB | 205 W | 2.60 GHz | 4.30 GHz |
8356H | Xeon Platinum | 8 | 16 | 35.75 MB | 190 W | 3.90 GHz | 4.40 GHz |
8376HL | Xeon Platinum | 28 | 56 | 38.5 MB | 205 W | 2.60 GHz | 4.30 GHz |
A performance-optimized multi-chip package with six high-speed UPI interconnects, supporting both 4-socket and 8-socket configurations with up to 28 cores, 56 threads, and six memory channels per socket. Architected for a range of demanding workloads, including HPC and AI, and equipped with Intel Deep Learning Boost (the BFLOAT16 and VNNI instruction sets) and support for Intel Optane persistent memory 200 series. Cooper Lake processors are the next generation of the Intel Xeon processor family, ready to meet all of your HPC and AI needs. Contact an Aspen Systems engineer for facilities requirements such as power and cooling.
Click table heading to sort
Name | # cores | Base Freq. | Turbo Freq. | L3 Cache | UPI Links | Power |
---|---|---|---|---|---|---|
8376HL | 28 | 2.60 GHz | 4.30 GHz | 38.5 MB | 6 | 205W |
8356H | 8 | 3.90 GHz | 4.40 GHz | 35.75 MB | 6 | 190W |
8376H | 28 | 2.60 GHz | 4.30 GHz | 38.5 MB | 6 | 205W |
8354H | 18 | 3.10 GHz | 4.30 GHz | 24.75 MB | 6 | 205W |
8380H | 28 | 2.90 GHz | 4.30 GHz | 38.5 MB | 6 | 205W |
8353H | 18 | 2.50 GHz | 3.80 GHz | 24.75 MB | 6 | 150W |
8360HL | 24 | 3.00 GHz | 4.20 GHz | 33 MB | 6 | 225W |
8360H | 24 | 3.00 GHz | 4.20 GHz | 33 MB | 6 | 225W |
8380HL | 28 | 2.90 GHz | 4.30 GHz | 38.5 MB | 6 | 250W |
Aspen Systems provides more than just advanced technology — we provide elegant solutions. There are many advantages to working with us. We guide you through the process of updating your IT infrastructure, and our engineers can deploy new services, storage, networking, and compute power that scales fast. Our team of industry leaders delivers a full spectrum of custom solutions, helping you create a system from the ground up that gives you quick access to your data, even when dealing with many data silos. At Aspen, we work with you directly, on-site if needed, to put in place the right IT modernization strategy for your organization. Our team of industry experts will show you how to reduce your maintenance costs and achieve a better total cost of ownership.
Our trained engineers will help determine facility requirements based on the chosen solution, everything from ensuring power and cooling requirements are met, to installing beautiful network cabling and ensuring easy access to mission-critical equipment. If your requirements demand it, our engineers will ensure your data is encrypted at rest, in-flight, and in memory to deliver maximum security. Whatever the case, Aspen Systems will help you navigate all of your options and guide you to an optimized solution, and our experts can be available through all phases of deployment including planning, architecture, manufacturing, installation, and configuration – a single, all-encompassing partner in HPC for your data center.