NVIDIA® Tesla GPUs

NVIDIA GPU – One Platform. Unlimited Data Center Acceleration.


Accelerating scientific discovery, visualizing big data for insights, and providing smart services to consumers are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, and the training of sophisticated deep learning networks. These workloads also demand accelerated data centers that can keep pace with exponentially growing computing needs.

NVIDIA Tesla is the world’s leading platform for accelerated data centers, deployed by some of the world’s largest supercomputing centers and enterprises. It combines GPU accelerators, accelerated computing systems, interconnect technologies, development tools, GPU-accelerated applications, and compilers such as PGI to enable faster scientific discoveries and big data insights.

At the heart of the Tesla platform are the massively parallel GPU accelerators that provide dramatically higher throughput for compute‑intensive workloads—without increasing the power budget and physical footprint of data centers.

The Exponential Growth of GPU Computing

For more than two decades, NVIDIA has pioneered visual computing, the art and science of computer graphics. With a singular focus on this field, NVIDIA offers specialized platforms for the gaming, professional visualization, data center, GPU server, and automotive markets. NVIDIA’s work is at the center of the most consequential mega-trends in GPU cluster technology: virtual reality, artificial intelligence, and self-driving cars.

GPU servers have become an essential part of the computational research world. From bioinformatics to weather modeling, GPUs have delivered speedups of more than 70x on researchers’ code. With hundreds of applications already accelerated by these cards, check to see if your favorite applications are on the GPU applications list.

NVIDIA Tesla V100

The Most Advanced Data Center GPU Ever Built.

NVIDIA® Tesla® V100 is the most advanced data center GPU ever built, designed to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta™, the latest GPU architecture, Tesla V100 offers the performance of 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once impossible.

Read more about NVIDIA Tesla V100 Volta GPU

NVIDIA Tesla V100 specifications (performance figures are with NVIDIA GPU Boost):

                                     SXM2               PCIe
    Double-Precision                 7.8 TFLOPS         7 TFLOPS
    Single-Precision                 15.7 TFLOPS        14 TFLOPS
    Deep Learning                    125 TFLOPS         112 TFLOPS
    Interconnect (Bi-Directional)    300 GB/s           32 GB/s
    Power                            300 W              250 W
    Memory Capacity                  16 or 32 GB HBM2   16 or 32 GB HBM2
    Memory Bandwidth                 900 GB/s           900 GB/s

Three Reasons to Upgrade to the NVIDIA Tesla V100


1. Be Prepared for the AI Revolution

NVIDIA Tesla V100 is the computational engine driving the AI revolution and enabling HPC breakthroughs. For example, researchers at the University of Florida and the University of North Carolina leveraged GPU deep learning to develop ANAKIN-ME (ANI), which reproduces molecular energy surfaces at extremely high (DFT) accuracy for one to ten millionths of the cost of current computational methods.


2. Boost Data Center Productivity & Throughput

Data center managers all face the same challenge: how to meet a demand for computing resources that often exceeds the cycles available in the system.

NVIDIA Tesla V100 dramatically boosts the throughput of your data center with fewer nodes, completing more jobs and improving data center efficiency.

A single server node with V100 GPUs can replace up to 50 CPU nodes. For example, for HOOMD-blue, a single node with four V100s does the work of 43 dual-socket CPU nodes, while for MILC a single V100 node can replace 14 CPU nodes. With lower networking, power, and rack space overheads, accelerated nodes provide higher application throughput at substantially reduced cost.


3. Top Applications Are GPU-Accelerated

Over 450 HPC applications are already GPU-optimized, across a wide range of areas including quantum chemistry, molecular dynamics, climate and weather modeling, and more.

In fact, an independent study by Intersect360 Research shows that 70% of the most popular HPC applications, including 10 of the top 10, have built-in support for GPUs.

With the most popular HPC applications and all major deep learning frameworks GPU-accelerated, nearly every HPC customer will see most of their data center workload benefit from GPU-accelerated computing.


Understanding NVIDIA’s product line

Tesla? Volta? GeForce? Turing? Pascal? NVIDIA has a very diverse product line that some find challenging to navigate. While you should contact an expert to determine your specific needs, here is a simplified rundown of NVIDIA’s product line to help you understand the basics.

Tesla

Tesla GPUs are aimed at data center compute

The Tesla brand of products is geared toward the data center. These are highly specialized compute cards and, as such, are the GPUs of choice for data centers and supercomputers. They do not have video outputs, for example, and often use passive cooling. If you are using GPUs for clusters or for compute, you want a Tesla card, and Aspen Systems recommends the Tesla series for HPC.

Quadro

Quadro GPUs are aimed at data science and advanced graphics applications

Quadro cards are usually more powerful than GeForce cards, but they are similar, and in some cases have nearly identical specifications and core technology. However, you get something much more valuable whenever you purchase a card with the name “Quadro” on it: world-class support from NVIDIA. In mission-critical applications, the name “Quadro” can shorten downtime and get you up and running fast.


GeForce

GeForce GPUs are aimed at the consumer market and gaming

You may have heard of GeForce cards before, or seen them on the shelves at your local consumer electronics store. These are consumer-oriented cards, usually used for gaming and display applications. Aspen Systems does not recommend GeForce GPUs for HPC, data science, artificial intelligence, machine learning, or other compute-intensive applications, as they are not specialized for those purposes.

Architectures

The architecture refers to the generation of technology used in the card, and each new generation usually introduces something new to the mix.

Turing

(e.g., RTX 8000, RTX 6000, Titan RTX.) Turing is the latest generation of NVIDIA technology and is focused on graphics. The Turing architecture introduced new RT cores that perform real-time ray tracing directly in hardware, rather than through the more cycle-hungry process of software ray tracing.

Volta

(e.g., V100, GV100.) Volta introduced Tensor Cores into the mix. These cores are a huge leap forward for artificial intelligence and machine learning applications: they are specialized for the tensor operations at the heart of libraries such as TensorFlow, and are designed from the ground up to handle these workloads.
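
Volta’s Tensor Cores are programmable through the WMMA (warp matrix multiply-accumulate) API in the CUDA Toolkit’s mma.h header. Below is a minimal, hedged CUDA C++ sketch of that API: one warp multiplies 16x16 half-precision tiles and accumulates in FP32. The matrix sizes, fill values, and kernel names are illustrative assumptions rather than NVIDIA sample code, and it must be built for compute capability 7.0 or later (e.g., nvcc -arch=sm_70).

    #include <cstdio>
    #include <cuda_runtime.h>
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    // Fill an array with a constant, converting to FP16 on the device.
    __global__ void fillHalf(half *p, int n, float v) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) p[i] = __float2half(v);
    }

    // One warp computes a 16x16 tile of C = A * B, where A is 16xK
    // (row-major) and B is Kx16 (column-major). K must be a multiple of 16.
    __global__ void wmmaTile(const half *A, const half *B, float *C, int K) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
        wmma::fill_fragment(acc, 0.0f);
        for (int k = 0; k < K; k += 16) {
            wmma::load_matrix_sync(a, A + k, K);   // next 16x16 slice of A
            wmma::load_matrix_sync(b, B + k, K);   // matching slice of B
            wmma::mma_sync(acc, a, b, acc);        // executes on Tensor Cores
        }
        wmma::store_matrix_sync(C, acc, 16, wmma::mem_row_major);
    }

    int main() {
        const int K = 64;  // illustrative inner dimension
        half *A, *B; float *C;
        cudaMallocManaged(&A, 16 * K * sizeof(half));
        cudaMallocManaged(&B, 16 * K * sizeof(half));
        cudaMallocManaged(&C, 16 * 16 * sizeof(float));
        fillHalf<<<(16 * K + 255) / 256, 256>>>(A, 16 * K, 1.0f);
        fillHalf<<<(16 * K + 255) / 256, 256>>>(B, 16 * K, 2.0f);
        cudaDeviceSynchronize();

        wmmaTile<<<1, 32>>>(A, B, C, K);  // a single warp drives the Tensor Cores
        cudaDeviceSynchronize();
        printf("C[0] = %f\n", C[0]);      // expect 1.0 * 2.0 * K = 128
        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }

Each mma_sync call performs an entire 16x16x16 matrix multiply-accumulate on the Tensor Cores, which is where Volta’s large deep learning TFLOPS figures come from.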

Quadro RTX Series

While we recommend the V100 for the data center, some applications, especially in data science and visualization, can benefit from the technology that has gone into the new RTX series of cards, especially the new Turing RT cores, which build ray-tracing capabilities into the hardware itself. Since these cards also have CUDA and Tensor cores, they are well suited to data science and machine learning applications that require advanced graphics capabilities, supporting up to four 8K displays.

Quadro RTX 8000

Quadro RTX 6000

Read about the new RTX Data Science Workstations featuring these innovative new cards.

Titan RTX

Titan RTX Discount Program

The Titan RTX is a powerhouse for data science and a great value for those on a budget. Its specifications are very similar to those of the RTX 6000, at a fraction of the price. This makes it attractive for educational institutions looking to add value to courses in data science and machine learning. Aspen Systems is offering a discount to qualified educational institutions until October 27th, 2019.

Read more…

Choose the right NVIDIA GPU for you

                      V100 SXM2      V100 PCIe      GV100          RTX 8000       RTX 6000       Titan RTX      T4

    GPU Architecture  Volta          Volta          Volta          Turing         Turing         Turing         Turing
    Family            Tesla          Tesla          Quadro         Quadro         Quadro         N/A            Tesla
    Form Factor       SXM2           PCIe x16       PCIe x16       PCIe x16       PCIe x16       PCIe x16       PCIe x16
                                     dual-slot      dual-slot      dual-slot      dual-slot      dual-slot      single-slot
                                     full-height    full-height    full-height    full-height    full-height    low-profile
    CUDA Cores        5,120          5,120          5,120          4,608          4,608          4,608          2,560
    Tensor Cores      640            640            640            576            576            576            320
    RT Cores          N/A            N/A            N/A            72             72             72             N/A
    Interconnect      300 GB/s       32 GB/s        200 GB/s       100 GB/s       100 GB/s       100 GB/s       32 GB/s
    Bandwidth                        (no NVLink)
    Double-Precision  7.8 TFLOPS     7 TFLOPS       7.4 TFLOPS     N/A            N/A            0.51 TFLOPS    N/A
    Single-Precision  15.7 TFLOPS    14 TFLOPS      14.8 TFLOPS    16.3 TFLOPS    16.3 TFLOPS    16.3 TFLOPS    8.1 TFLOPS
    FP16              29.6 TFLOPS    29.6 TFLOPS    29.6 TFLOPS    32.6 TFLOPS    16.3 TFLOPS    16.3 TFLOPS    N/A
    INT8              59.3 TOPS      59.3 TOPS      59.3 TOPS      206.1 TOPS     N/A            N/A            130 TOPS
    INT4              N/A            N/A            N/A            N/A            N/A            N/A            260 TOPS
    Memory            16 or 32 GB    16 or 32 GB    32 GB HBM2     48 GB GDDR6    24 GB GDDR6    24 GB GDDR6    16 GB GDDR6
                      HBM2, ECC      HBM2, ECC      ECC            ECC            ECC            ECC            ECC
    Memory Bandwidth  900 GB/s       900 GB/s       870 GB/s       672 GB/s       672 GB/s       672 GB/s       300 GB/s
    TDP               300 W          250 W          250 W          295 W          295 W          290 W          70 W

Special performance: the V100 (SXM2 and PCIe) and GV100 deliver 118.5 TFLOPS for deep learning; the RTX 8000 and RTX 6000 deliver 84T RTX-OPS; the Titan RTX delivers 130 TFLOPS of Tensor performance; and the T4 delivers 65 TFLOPS of mixed-precision (FP16/FP32) performance.

Software Tools for GPU Computing

TensorFlow

TensorFlow Artificial Intelligence Library

TensorFlow, developed by Google, is an open-source symbolic math library for high-performance computation.

It has quickly become an industry standard for artificial intelligence and machine learning applications, and is known for its flexibility, with uses across many scientific disciplines.

It is based on the concept of a tensor, which, as you may have guessed, is where the Volta Tensor Cores get their name.

GPU Accelerated Libraries


There are a handful of GPU-accelerated libraries that developers can use to speed up applications on GPUs. Many of them are NVIDIA CUDA libraries (such as cuBLAS and the CUDA Math Library), but there are others, such as the IMSL Fortran libraries and HiPLAR (High Performance Linear Algebra in R). These libraries can be linked in place of the standard libraries commonly used in non-accelerated computing.
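
As a sketch of the drop-in idea, the example below uses cuBLAS to run the classic BLAS SAXPY routine (y = alpha*x + y) on the GPU instead of the CPU. The vector length and values are illustrative, and error checking is omitted for brevity; build with nvcc example.cu -lcublas.

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    int main() {
        const int n = 1 << 20;
        const float alpha = 2.0f;
        std::vector<float> x(n, 1.0f), y(n, 3.0f);

        // Allocate device buffers and copy the host vectors over.
        float *dx, *dy;
        cudaMalloc(&dx, n * sizeof(float));
        cudaMalloc(&dy, n * sizeof(float));
        cublasHandle_t handle;
        cublasCreate(&handle);
        cublasSetVector(n, sizeof(float), x.data(), 1, dx, 1);
        cublasSetVector(n, sizeof(float), y.data(), 1, dy, 1);

        // y = alpha*x + y, executed on the GPU by cuBLAS.
        cublasSaxpy(handle, n, &alpha, dx, 1, dy, 1);

        cublasGetVector(n, sizeof(float), dy, 1, y.data(), 1);
        printf("y[0] = %f\n", y[0]);  // expect 5.0

        cublasDestroy(handle);
        cudaFree(dx);
        cudaFree(dy);
        return 0;
    }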

CUDA

CUDA Development Toolkit

NVIDIA has created an entire toolkit devoted to computing on its CUDA-enabled GPUs. The CUDA Toolkit, which includes the CUDA libraries, is the core of many GPU-accelerated programs, and CUDA is one of the most widely used toolkits in the GPGPU world today.
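
For a first taste of the toolkit, here is a minimal, self-contained CUDA program: a vector-addition kernel using the toolkit’s unified (managed) memory, so the same pointers are valid on the CPU and the GPU. Sizes and values are illustrative, and error checking is omitted for brevity.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread adds one pair of elements: c[i] = a[i] + b[i].
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        // Managed memory is visible to both the host and the device.
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover all n elements.
        vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }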

Deep Learning SDK

NVIDIA Deep Learning SDK

In today’s world, deep learning is becoming essential in many segments of industry. For instance, deep learning is key to voice and image recognition, where the machine must learn as it receives input. Writing algorithms for machines to learn from data is a difficult task, but NVIDIA’s Deep Learning SDK provides the tools necessary to design code that runs on GPUs.
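
As a small, hedged sketch of one SDK component, the example below uses cuDNN (the SDK’s deep neural network library) to apply a ReLU activation to a tiny tensor on the GPU. The tensor shape and data are illustrative assumptions, error checking is omitted, and a real network would chain many such descriptor-driven calls; build with nvcc example.cu -lcudnn.

    #include <cstdio>
    #include <cuda_runtime.h>
    #include <cudnn.h>

    int main() {
        cudnnHandle_t cudnn;
        cudnnCreate(&cudnn);

        // Describe a 1x1x1x4 float tensor in NCHW layout (illustrative shape).
        cudnnTensorDescriptor_t desc;
        cudnnCreateTensorDescriptor(&desc);
        cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                   1, 1, 1, 4);

        // ReLU activation: y = max(0, x).
        cudnnActivationDescriptor_t act;
        cudnnCreateActivationDescriptor(&act);
        cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                                     CUDNN_NOT_PROPAGATE_NAN, 0.0);

        float h[4] = {-2.0f, -0.5f, 0.5f, 2.0f}, *d;
        cudaMalloc(&d, sizeof(h));
        cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);

        // Apply the activation in place on the device buffer.
        const float alpha = 1.0f, beta = 0.0f;
        cudnnActivationForward(cudnn, act, &alpha, desc, d, &beta, desc, d);

        cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
        printf("%f %f %f %f\n", h[0], h[1], h[2], h[3]);  // expect 0 0 0.5 2

        cudaFree(d);
        cudnnDestroyActivationDescriptor(act);
        cudnnDestroyTensorDescriptor(desc);
        cudnnDestroy(cudnn);
        return 0;
    }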

OpenACC

OpenACC Parallel Programming Model

OpenACC is a user-driven, directive-based, performance-portable parallel programming model. It is designed for scientists and engineers interested in porting their codes to a wide variety of heterogeneous HPC hardware platforms and architectures with significantly less programming effort than a low-level model requires. OpenACC directives can be a powerful tool in porting a user’s application to run on GPU servers. OpenACC has two key strengths: it is easy to use, and it is portable. Applications that use OpenACC can run not only on NVIDIA GPUs but also on other GPUs, x86 CPUs, and POWER CPUs.
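
Here is a minimal sketch of the directive style in C/C++: a single pragma asks the compiler to offload a SAXPY-style loop and manage the data movement, while compilers that do not understand OpenACC simply ignore the pragma and build a serial program. The array size and values are illustrative; compile with an OpenACC compiler such as nvc++ -acc.

    #include <cstdio>

    int main() {
        const int n = 1 << 20;
        static float x[1 << 20], y[1 << 20];
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 3.0f; }

        // One directive: the compiler generates the GPU kernel and the
        // host/device data movement described by the copy clauses.
        #pragma acc parallel loop copyin(x) copy(y)
        for (int i = 0; i < n; ++i)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);  // expect 5.0
        return 0;
    }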

NVIDIA Accelerators dramatically lower data center costs by delivering exceptional performance with fewer, more powerful servers. This increased throughput means more scientific discoveries delivered to researchers every day.


Keep it Cool

Asetek direct-to-chip liquid cooling focuses on removing heat from the hottest locations in servers. GPUs and other coprocessors are a growing hot spot in high-performance servers as manufacturers offload processor-intensive tasks from the main processor for more performance. Power consumption of greater than 300 watts per GPU (or GPGPU coprocessor) is becoming the norm and can easily be addressed with Asetek technology. Learn more about Asetek and liquid cooling.

Choose from Some of Our Most Popular GPU Capable Servers


4U SuperWorkstations

Up to 4 GPUs or Coprocessors.

Shop 4U 7048GR-TR


1U SuperServer 1029GQ-TXRT

Up to 4 P100s with 10GBase-T Ethernet

Shop 1U 1029GQ-TXRT

NVIDIA Tesla T4 Tensor Core GPU: The Price Performance Leader

The next level of acceleration, the NVIDIA T4 is a single-slot, 6.6-inch, Gen3 PCIe Universal Deep Learning Accelerator based on the NVIDIA TU104 GPU. It supports both x8 and x16 PCI Express with 32 GB/s of interconnect bandwidth. The T4’s small form factor delivers all of this while remaining energy efficient, consuming only 70 watts of power. Its passive thermal design supports bi-directional airflow (right-to-left or left-to-right).

The T4 utilizes the Turing™ architecture and has 320 Turing™ Tensor Cores as well as 2,560 CUDA® cores, supporting the CUDA™, TensorRT™, and ONNX compute APIs.


Multi-precision performance specifications

  • Single-precision (8.1 TFLOPS)
  • Mixed-Precision FP32 and FP16 (65 TFLOPS)
  • INT8 (130 TOPS)
  • INT4 (260 TOPS)
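
To show what reduced precision looks like at the CUDA level, here is a minimal hedged sketch using the FP16 intrinsics from cuda_fp16.h: floats are converted to half on the device, added with the half-precision __hadd intrinsic, and converted back. The kernel and data are illustrative, and production inference code would typically reach the T4’s INT8 and INT4 modes through TensorRT rather than hand-written kernels; build for the T4 with nvcc -arch=sm_75.

    #include <cstdio>
    #include <cuda_runtime.h>
    #include <cuda_fp16.h>

    // Illustrative kernel: perform the addition in FP16 to exercise the
    // reduced-precision path, storing the result back as FP32.
    __global__ void addFp16(const float *x, float *y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            __half xv = __float2half(x[i]);
            __half yv = __float2half(y[i]);
            y[i] = __half2float(__hadd(xv, yv));  // FP16 add
        }
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.5f; y[i] = 2.5f; }

        addFp16<<<(n + 255) / 256, 256>>>(x, y, n);
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);  // expect 4.0

        cudaFree(x); cudaFree(y);
        return 0;
    }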

Memory:

The T4 boasts 16 GB of GDDR6 ECC memory, with a 256-bit memory bus, a memory clock of up to 5001 MHz, and peak memory bandwidth of up to 320 GB/s.

The T4 provides up to 9.3X higher performance than CPUs on training and up to 36X on inference.

Inference Performance vs. CPU*


Training Performance vs. CPU*

*Comparison made with dual NVIDIA T4 GPUs versus a server with dual-socket Xeon Gold 6140 CPUs.

Some of Our Most Popular Tesla Capable Servers


1U 1029GQ-TRT SuperServer

Holds 4 GPUs and has Dual Port 10GbE.

Shop 1U 1029GQ-TRT


4U 4028GR-TR2 SuperServer

Dual Socket Intel Xeon E5-2600 v3/v4.

Shop 4U 4028GR-TR2