NVIDIA GPU – One Platform. Unlimited Data Center Acceleration.

Accelerating scientific discovery, visualizing big data for insight, and providing smart services to consumers are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, and the training of sophisticated deep learning networks. These workloads also require accelerated data centers to meet exponentially growing computing demand.

NVIDIA Tesla is the world’s leading platform for accelerated data centers, deployed by some of the world’s largest supercomputing centers and enterprises. It combines GPU accelerators, accelerated computing systems, interconnect technologies, development tools, GPU-accelerated applications, and compilers such as PGI to enable faster scientific discoveries and big data insights.

At the heart of the Tesla platform are the massively parallel GPU accelerators that provide dramatically higher throughput for compute-intensive workloads, without increasing the power budget or physical footprint of the data center.

Choose the Right NVIDIA Tesla Solution for You

Tesla P100 (apps optimized to run mostly on the GPU)
  • Workload profile: Hyperscale and HPC data centers running apps that scale to multiple GPUs
  • Used by: Supercomputing, academia, government
  • Optimized for: Highest absolute performance
  • Key requirements: Double- and single-precision performance; memory size and bandwidth; interconnect bandwidth; programmability
  • Recommended solution: NVIDIA Tesla P100 GPU Computing Accelerator

Tesla K80 (mixed-workload apps)
  • Workload profile: Mixed workloads and specific applications such as RTM
  • Used by: Oil and gas
  • Optimized for: Time to insight; imaging accuracy
  • Key requirements: Double- and single-precision performance; memory size and bandwidth; interconnect bandwidth
  • Recommended solution: NVIDIA Tesla K80 GPU Computing Accelerator

Tesla M40 (deep learning training)
  • Workload profile: Deep learning frameworks such as Caffe and TensorFlow
  • Used by: Artificial intelligence / deep learning
  • Optimized for: Training time
  • Key requirements: Single-precision performance; memory size per GPU; interconnect bandwidth
  • Recommended solution: NVIDIA Tesla M40 GPU Computing Accelerator

Tesla M4 (deep learning inference)
  • Workload profile: Mixed inference workloads such as image, video, or data processing
  • Optimized for: Jobs/second/watt
  • Key requirements: Power footprint; form factor
  • Recommended solution: NVIDIA Tesla M4 GPU Computing Accelerator

Tesla M60 (rack form factor) and Tesla M6 (blade form factor)
  • Workload profile: Flexible deployments balancing user experience and graphics performance against concurrent user density
  • Used by: Design and manufacturing; architecture, engineering, and construction; defense; higher education
  • Optimized for: Graphics-accelerated virtual desktops and applications
  • Key requirements: Virtual graphics (vGPU); graphics-accelerated applications delivered anywhere on any device; server form factor (rack and blade)
  • Recommended solution: NVIDIA Tesla M60 (rack) or Tesla M6 (blade) GPU Computing Accelerator

The Exponential Growth of GPU Computing


For more than two decades, NVIDIA has pioneered visual computing, the art and science of computer graphics. With a singular focus on this field, NVIDIA offers specialized GPU platforms for the gaming, professional visualization, data center, GPU server, and automotive markets. NVIDIA’s work is at the center of the most consequential mega-trends in computing: virtual reality, artificial intelligence, and self-driving cars.

GPU servers have become an essential part of the computational research world. From bioinformatics to weather modeling, GPUs have delivered speedups of over 70x on researchers’ code. With hundreds of applications already accelerated by these cards, check to see if your favorite applications are on the GPU applications list.

Tools for GPU Computing

Accelerated Libraries

GPU Accelerated Libraries

  • There are a handful of GPU-accelerated libraries that developers can use to speed up applications on GPUs. Many of them are NVIDIA CUDA libraries (such as cuBLAS and the CUDA Math Library), but there are others, such as the IMSL Fortran libraries and HiPLAR (High Performance Linear Algebra in R). These libraries can be linked in to replace the standard libraries commonly used in non-GPU-accelerated computing.



  • NVIDIA has created an entire toolkit devoted to computing on its CUDA-enabled GPUs. The CUDA Toolkit, which includes the CUDA libraries, is the core of many GPU-accelerated programs. CUDA is one of the most widely used toolkits in the GPGPU world today.

Deep Learning

NVIDIA Deep Learning SDK

  • In today’s world, deep learning is becoming essential in many segments of industry. For instance, deep learning is key in voice and image recognition, where the machine must learn as it takes in new input. Writing algorithms for machines to learn from data is a difficult task, so NVIDIA provides a Deep Learning SDK with the tools needed to design code that runs on GPUs.



  • OpenACC directives can be a powerful tool for porting a user’s application to run on GPU servers. OpenACC has two key strengths: it is easy to use and it is portable. Applications that use OpenACC run not only on NVIDIA GPUs but also on other GPUs and on CPUs.

NVIDIA now offers the Tesla P100 card, which boasts 5.3 teraflops of double-precision performance. Until P100 cards become more widely available, NVIDIA also offers other GPUs such as the K40 and K80. If your applications depend on single-precision code, the M40 and M60 cards are the right fit.

NVIDIA CEO Jen-Hsun Huang introduces the Tesla P100, the most advanced hyperscale datacenter GPU, built with five revolutionary technology breakthroughs, at the 2016 GPU Technology Conference.

Tesla GPU Accelerators for HPC Servers

Accelerate your most demanding HPC, hyperscale and enterprise data center workloads with NVIDIA Tesla GPU accelerators. Scientists can now crunch through petabytes of data up to 10x faster than with CPUs in applications ranging from energy exploration to deep learning.
