As the need for GPU computing power grows, more researchers are turning to HPC processors and general-purpose GPUs (GPGPUs) to improve the performance of their code. One early effort was the “GRAPE” special-purpose computer, developed at the University of Tokyo beginning in 1989. Since then, the MD-GRAPE cards have won multiple Gordon Bell prizes.
The biggest problem with these cards was their price. When graphics processors began supporting floating-point operations, researchers started running matrix and vector calculations on GPUs, which were less expensive and more widely available. Today, vendors such as Intel, NVIDIA, and AMD have advanced this technology, and many researchers have ported their code to these processors to gain performance in the HPC market.
NVIDIA has been at the forefront of accelerator computing with its GPGPUs. By creating the CUDA platform, NVIDIA was able to jump ahead of its competition in the GPU-accelerated server space. In the past, GPU performance was bottlenecked by the PCIe interconnect. Today, NVIDIA’s Pascal architecture offers the SXM2 form factor, which enables the NVLink interconnect: a high-speed bidirectional link with roughly five times the bandwidth of PCIe.