NVIDIA RTX Data Science Workstation

A Data Science Workstation Delivering Exceptional Performance

Aspen Systems, a certified NVIDIA Preferred Solution Provider, has teamed up with NVIDIA to deliver a powerful new family of NVIDIA RTX Data Science Workstations featuring the NVIDIA Quadro RTX 8000 GPU, designed to help millions of data scientists, analysts, and engineers make better business predictions faster. Our systems are tested and optimized to meet the needs of mission-critical enterprise deployments.

A data science supercomputer on your desktop has many benefits over a cluster or the cloud, since everything runs on a local machine. Data scientists get direct access to local memory and CPU, removing network and infrastructure bottlenecks. You can easily customize the local environment, while remote access remains available for flexibility.

A New Era in Data Science

Data is increasing exponentially, and this is driving demand for data scientists and the tools necessary to work with and understand that data.

Available Now

NVIDIA-powered systems for data scientists are available immediately from Aspen Systems. Call now at 1-800-992-9242 to speak with an Aspen Systems Sales Engineer.

Contact us now to maximize productivity, reduce time to insight and lower the cost of data science projects with workstations built to ensure the highest level of compatibility, support and reliability.

The new NVIDIA Data Science Workstation is designed from the ground up to simplify your data science workload with a turn-key solution.

Hardware and accelerated software have come together in one unit, offering unprecedented performance for the cost.

As a preferred solution provider, Aspen Systems' team of expert engineers provides, configures, and tests the RTX Data Science Workstation to exacting specifications in order to help data scientists transform massive amounts of information into valuable insights faster than ever.

NVIDIA also provides optional software support for their deep learning and machine learning frameworks, containers, and other software.

NVIDIA Preferred Solution Provider

Aspen Systems provides everything you need to utilize this new breed of workstations, which combine the world’s most advanced NVIDIA® Quadro RTX™ GPUs with a robust data science software stack built on NVIDIA CUDA-X AI, delivering an integrated solution that ensures maximum compatibility and reliability. The result is a complete out-of-the-box data science solution built and tested by our team of engineers dedicated to manufacturing excellence.

Request a quote to get the latest pricing.
Call 1-800-992-9242 to speak with a qualified sales engineer.
Chat live if you have any questions.

Turn-Key Solution

Pre-configured to get you started in minutes. No need to spend days installing and configuring a complex stack of software yourself to get started. Just power it on and it is ready for your data science workloads on day one.

Pre-installed Tools

Take advantage of a full suite of optimized data science software powered by NVIDIA CUDA-X AI accelerated libraries: RAPIDS, DIGITS, TensorFlow, PyTorch, Caffe, CUDA, cuDNN, and more.

Next-Gen Hardware

Experience faster model development and training with a workstation powered by up to 4 high-performance Quadro GPUs connected with the high-bandwidth NVIDIA NVLink interconnect.

Next-Generation Hardware

Up to 4x NVIDIA Quadro GPUs

Quadro RTX 8000

Quadro RTX 6000

Quadro GV100

NVIDIA Quadro RTX

Powered by the latest NVIDIA Turing™ GPU architecture and designed for enterprise deployment, NVIDIA Quadro RTX delivers up to 260 teraflops of compute performance and 96 GB of memory when two cards are bridged with NVIDIA NVLink® technology. This provides the capacity and bandwidth to handle massive datasets and compute-intensive workloads, and makes it practical to visualize the exponentially expanding data we have access to in 3D, including in virtual reality.

Also in the Quadro family is the GV100, which shares many of the features of the RTX 8000 and RTX 6000 but focuses on increased memory bandwidth and on CUDA and Tensor Cores rather than RT Cores, and uses HBM2 memory.

Quadro cards are certified with a broad range of sophisticated professional applications, rigorously tested by Aspen Systems' engineers, and backed by a global team of support specialists.

RTX 8000 vs GV100

The primary difference between the RTX 8000 (and 6000) and the GV100 is the memory. The RTX 8000 uses GDDR6 memory, while the GV100 uses HBM2. HBM2 offers much higher bandwidth (more I/Os) but requires an intermediary substrate between the memory and the circuit board, making it slightly more expensive and more complex to implement; it may be the better choice if you need the highest-bandwidth memory.

GPUs and GPU accelerated libraries give you:

Faster Time to Insight
Dramatically reduce the time it takes to extract meaningful content from large amounts of data.
More Accurate Models
Spend more time fine-tuning and iterating rather than waiting for hardware.
Reduce End-To-End Process Time
GPUs are up to 10x faster than CPUs alone for machine learning and AI applications when using accelerated libraries.

New with the RTX:

Real-time Ray-Tracing

For the first time, ray-tracing operations can be done in real time, in the hardware itself, rather than through the relatively slow software-based process that has been the standard for decades. This latest generation of cores also introduces a new metric into the computing space: RTX-OPS, ray-tracing operations per second.

Turing Architecture

NVIDIA's latest GPU architecture builds on the previous generations (Volta and Pascal). Not only does the RTX 8000 have 72 of the new RT Cores, it also packs in an impressive 4,608 CUDA Cores and 576 Tensor Cores for advanced artificial intelligence applications, an important key to quickly processing today's data science workloads.

GDDR6 SDRAM

Both the RTX 8000 and RTX 6000 feature the latest generation of advanced high-bandwidth memory; the RTX series of GPUs are among the first graphics cards to use this next-generation GDDR6 technology. With up to 672 GB/s of bandwidth and error-correcting code (ECC) support, this memory is fast enough to avoid common bottlenecks. Get up to 96 GB of ultra-fast local memory (two cards paired over NVLink) to handle the largest data sets and compute-intensive workloads.

HBM2 High Bandwidth Memory

The GV100 has the latest generation of advanced high-bandwidth memory, which takes a different approach from normal DDR memory. An increased number of connections in the interface gives you speeds of up to 870 GB/s, great for applications requiring extreme memory performance.

The industry’s first implementation of the new VirtualLink® port

VR Ready (VirtualLink technology): VirtualLink is a high-bandwidth port that allows the next generation of high-resolution VR head-mounted displays to harness the power of the RTX GPU, letting data scientists explore their data in virtual environments that used to be considered science fiction.

NVIDIA NVLink
Two NVIDIA RTX 8000 GPUs connected with the NVLink high-speed interconnect.
Processing                    | RTX 8000                  | RTX 6000                  | GV100
NVIDIA CUDA Cores             | 4,608                     | 4,608                     | 5,120
NVIDIA Tensor Cores           | 576                       | 576                       | 640
NVIDIA RT Cores               | 72                        | 72                        | N/A
Single-Precision Performance  | 16.3 TFLOPS               | 16.3 TFLOPS               | 14.8 TFLOPS
Tensor Performance            | 130.5 TFLOPS              | 130.5 TFLOPS              | 118.5 TFLOPS

Memory                        | RTX 8000                  | RTX 6000                  | GV100
GPU Memory                    | 48 GB GDDR6               | 24 GB GDDR6               | 32 GB HBM2
Memory Interface              | 384-bit                   | 384-bit                   | 4096-bit
Memory Bandwidth              | 672 GB/s                  | 672 GB/s                  | 870 GB/s
ECC                           | Yes                       | Yes                       | Yes

NVLink                        | RTX 8000                  | RTX 6000                  | GV100
NVIDIA NVLink                 | Connects 2 RTX 8000 GPUs  | Connects 2 RTX 6000 GPUs  | Connects 2 GV100 GPUs
NVIDIA NVLink Bandwidth       | 100 GB/s (bidirectional)  | 100 GB/s (bidirectional)  | 200 GB/s

System                        | RTX 8000                  | RTX 6000                  | GV100
System Interface              | PCI Express 3.0 x16       | PCI Express 3.0 x16       | PCI Express 3.0 x16
Total Board Power             | 295 W                     | 295 W                     | 250 W
Total Graphics Power          | 260 W                     | 260 W                     | 250 W
Thermal Solution              | Active                    | Active                    | Active
Form Factor                   | 4.4" H x 10.5" L,         | 4.4" H x 10.5" L,         | 4.4" H x 10.5" L,
                              | Dual Slot, Full Height    | Dual Slot, Full Height    | Dual Slot, Full Height

Display                       | RTX 8000                  | RTX 6000                  | GV100
Display Connectors            | 4x DP 1.4, 1x VirtualLink | 4x DP 1.4, 1x USB-C       | 4x DP 1.4
Max Simultaneous Displays     | 4x 3840x2160 @ 120 Hz     | 4x 4096x2160 @ 120 Hz     | 4x 4096x2160 @ 120 Hz
                              | 4x 5120x2880 @ 60 Hz      | 4x 5120x2880 @ 60 Hz      | 4x 5120x2880 @ 60 Hz
                              | 2x 7680x4320 @ 60 Hz      | 2x 7680x4320 @ 60 Hz      | 2x 7680x4320 @ 60 Hz
Encode / Decode Engines       | 1x Encode, 1x Decode      | 1x Encode, 1x Decode      | Not Available
VR Ready                      | Yes (with VirtualLink)    | Yes                       | Yes
Graphics APIs                 | DirectX 12.0              | DirectX 12.0              | DirectX 12.0
                              | Shader Model 5.1          | Shader Model 5.1          | Shader Model 5.1
                              | OpenGL 4.6                | OpenGL 4.6                | OpenGL 4.5
                              | Vulkan 1.1                | Vulkan 1.1                | Vulkan 1.0

Pre-loaded with powerful Data Science Tools

The primary goal of the RTX Data Science Workstation is to provide a powerful, pre-installed software stack that spares data scientists the overhead of configuring that stack themselves.

The RTX Workstation comes with an impressive pre-installed software stack designed for data analytics, artificial intelligence, and machine learning training and inference with the CUDA-X AI family of tools and libraries.

NVIDIA DEEP LEARNING STACK

NVIDIA CUDA-X AI

CUDA-X AI is a collection of libraries designed to fully utilize NVIDIA’s GPU-accelerated computing platform and seamlessly integrate with deep learning frameworks like TensorFlow, PyTorch, and MXNet. Built on the Linux operating system and Docker containers, this collection of ready-to-use GPU-acceleration libraries offers next-level deep learning, machine learning, and data analysis, all working seamlessly with NVIDIA CUDA Cores and Tensor Cores to accelerate the data science workflow and help you deploy applications in tighter, faster iterations.
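
As a simple illustration, a minimal sanity check (a sketch assuming the pre-installed TensorFlow and PyTorch builds) confirms that the frameworks see the workstation's Quadro GPUs:

    # Minimal GPU visibility check; assumes the pre-installed TensorFlow and
    # PyTorch builds that ship with the CUDA-X AI software stack.
    import tensorflow as tf
    import torch

    print(tf.config.list_physical_devices("GPU"))               # one entry per CUDA-capable GPU
    print(torch.cuda.is_available(), torch.cuda.device_count())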

NVIDIA RAPIDS

RAPIDS is a suite of open source, GPU-accelerated libraries for data science — preparation, analytics, machine learning and deep learning. With it, you have the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. Initiated and maintained primarily by NVIDIA® after years of accelerated data science experience, RAPIDS fully utilizes NVIDIA CUDA® primitives for low-level compute optimization, and exposes the GPU and memory through user-friendly Python interfaces.

RAPIDS is also great for common data preparation tasks used in data science and supports multi-node, multi-GPU configurations, scaling to accelerate processing and training on massive data sets.
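
For example, a multi-GPU RAPIDS workflow can be sketched with Dask; the dask_cuda and dask_cudf packages and the file names below are assumptions for illustration, not a documented part of the workstation configuration:

    # Sketch of a multi-GPU RAPIDS pipeline using Dask; dask_cuda, dask_cudf,
    # and the input file pattern are illustrative assumptions.
    from dask_cuda import LocalCUDACluster
    from dask.distributed import Client
    import dask_cudf

    cluster = LocalCUDACluster()                    # one Dask worker per visible GPU
    client = Client(cluster)

    ddf = dask_cudf.read_csv("transactions-*.csv")  # partitions spread across the GPUs
    daily_totals = ddf.groupby("day")["amount"].sum().compute()
    print(daily_totals.head())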

cuDF

The RAPIDS cuDF library accelerates loading, filtering, and manipulating data to prepare it for model training.
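
A minimal cuDF sketch of that load-filter-aggregate work (the file and column names are hypothetical):

    # Hypothetical cuDF data-preparation example; "sales.csv" and its columns
    # are illustrative only.
    import cudf

    df = cudf.read_csv("sales.csv")                 # load straight into GPU memory
    df = df[df["amount"] > 0]                       # filter rows on the GPU
    by_region = df.groupby("region")["amount"].mean()
    print(by_region.sort_values(ascending=False).head())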

cuML

RAPIDS cuML is a collection of GPU-accelerated machine learning libraries that enable data scientists to run traditional ML tasks on GPUs without getting into the details of CUDA programming.
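
A small sketch of the scikit-learn-style API, here clustering a toy dataset with cuML's k-means (the data is made up for illustration):

    # Toy cuML example: k-means clustering with a scikit-learn-like interface.
    import cudf
    from cuml.cluster import KMeans

    X = cudf.DataFrame({"x": [1.0, 1.1, 8.0, 8.2],
                        "y": [0.9, 1.2, 7.8, 8.1]})
    km = KMeans(n_clusters=2, random_state=0).fit(X)
    print(km.labels_)                               # cluster assignment per row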

cuGraph

The RAPIDS cuGraph library is a collection of graph analytics algorithms that operate on data stored in GPU DataFrames (see cuDF). cuGraph aims to provide a NetworkX-like API that will be familiar to data scientists, so they can build GPU-accelerated graph workflows more easily.
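
For instance, PageRank over a small edge list held in a cuDF DataFrame might look like this sketch (the edge data is synthetic, for illustration only):

    # Illustrative cuGraph example: PageRank over a tiny synthetic edge list.
    import cudf
    import cugraph

    edges = cudf.DataFrame({"src": [0, 0, 1, 2],
                            "dst": [1, 2, 2, 0]})
    G = cugraph.Graph()
    G.from_cudf_edgelist(edges, source="src", destination="dst")
    ranks = cugraph.pagerank(G)                     # cuDF DataFrame of per-vertex scores
    print(ranks.sort_values("pagerank", ascending=False))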

DIGITS

The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning into the hands of engineers and data scientists.

cuDNN

The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
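
cuDNN is normally reached through the deep learning frameworks rather than called directly; a minimal sketch (assuming the pre-installed, CUDA-enabled PyTorch build) of a convolution whose forward pass is dispatched to cuDNN kernels:

    # The forward convolution below runs through cuDNN kernels because the
    # layer and input tensor live on the GPU (assumes CUDA-enabled PyTorch).
    import torch

    print(torch.backends.cudnn.version())           # cuDNN version PyTorch was built against
    conv = torch.nn.Conv2d(3, 16, kernel_size=3).cuda()
    x = torch.randn(8, 3, 224, 224, device="cuda")
    y = conv(x)
    print(y.shape)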

cuSOLVER

The NVIDIA cuSOLVER library provides a collection of dense and sparse direct solvers which deliver significant acceleration for Computer Vision, CFD, Computational Chemistry, and Linear Optimization applications.

cuSPARSE

The NVIDIA CUDA Sparse Matrix library (cuSPARSE) provides GPU-accelerated linear algebra subroutines for sparse matrices (matrices which are populated mostly by zeros).

NCCL

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance-optimized for NVIDIA GPUs and the NVLink high-speed interconnect.
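
NCCL, too, is typically used through the frameworks; a hedged sketch of a PyTorch all-reduce running over the NCCL backend, with one process per GPU (launched, for example, with torchrun):

    # Each process drives one GPU; the all-reduce is carried out by NCCL over
    # NVLink/PCIe. Assumes a multi-process launch, e.g. torchrun --nproc_per_node=N.
    import torch
    import torch.distributed as dist

    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)
    t = torch.ones(4, device="cuda") * rank
    dist.all_reduce(t, op=dist.ReduceOp.SUM)        # summed across all participating GPUs
    print(rank, t)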

NPP

The NVIDIA Performance Primitives (NPP) library provides over 5,000 primitives for GPU-accelerated image, video, and signal processing.

cuFFT

cuFFT provides GPU-accelerated Fast Fourier Transform (FFT) implementations that perform up to 10x faster than CPU-only alternatives.
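
One way to reach cuFFT from Python is through CuPy, whose FFT routines are backed by cuFFT (CuPy itself is an assumption here, not a documented part of the stack):

    # Hedged example: CuPy's FFT calls into cuFFT under the hood.
    import cupy as cp

    signal = cp.random.random(1 << 20)              # ~1M-sample signal in GPU memory
    spectrum = cp.fft.fft(signal)                   # transform executed on the GPU via cuFFT
    print(spectrum[:4])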

cuBLAS

cuBLAS is a GPU-accelerated implementation of the Basic Linear Algebra Subprograms (BLAS).

Frameworks

TensorFlow, PyTorch, MXNet, Keras, Caffe, Caffe2, Theano, PaddlePaddle, and Chainer.

(Coming Soon) RTX Servers for the Data Center

NVIDIA RTX Data Science Server
  • Up to 8 RTX 8000 GPUs in a 2U package.
  • NVIDIA NVLink™ lets applications scale performance.
  • Up to 96 GB of GDDR6 memory with multi-GPU configurations.

Request a quote or Chat live with a sales engineer for ordering info.