An HPC cluster without networking is just a collection of individual servers. Many interconnect types and speeds are available, each with its own latency characteristics. If you want to stay with Ethernet, we can provide speeds of 1, 10, 25, 40, or 100 Gb/s. If you're looking for low-latency CPU-to-CPU communication, Mellanox InfiniBand (56Gb/s FDR or 100Gb/s EDR) and Intel Omni-Path Architecture (OPA, 100Gb/s) are both available. If you're looking for GPU-to-GPU communication, Mellanox offers GPUDirect in its InfiniBand products. If you'd like KNL-to-KNL communication, Intel's Knights Landing processors are available with an integrated on-package OPA fabric. Confused about networking? Contact your Aspen Systems sales representative to help you out. We can provide a full configuration and recommend the best way to network your cluster.