NVIDIA InfiniBand

INFINIBAND NETWORKING SOLUTIONS

Workloads with high-resolution simulations, large datasets, and highly parallel algorithms demand ultrafast, high-bandwidth interconnects.

NVIDIA InfiniBand, the world’s only fully offloadable in-network computing platform, lets you achieve unmatched performance in high-performance computing (HPC), artificial intelligence, and hyperscale cloud infrastructure at lower cost and complexity.


Speak with one of our system engineers today

HPC Accelerated

InfiniBand smart adapters from NVIDIA deliver best-in-class performance and efficiency: high throughput, low latency, and high message rates.

  • World-class cluster performance
  • High-performance networking and storage access
  • In-network computing
  • Efficient use of compute resources
  • Guaranteed bandwidth and low-latency services

INFINIBAND ADAPTERS

Modern workloads demand InfiniBand host channel adapters (HCAs) that combine scalability, high performance, and low latency. NVIDIA’s In-Network Computing engines deliver that combination.

ConnectX-3
  Ports: Single, Dual
  Port Speed (Gb/s): IB: SDR, DDR, QDR, FDR10, FDR; Eth: 1, 10, 40, 56
  PCIe: Gen3 x8
  Connectors: QSFP+
  Message Rate: 36 million msgs/sec
  Latency: 0.64 µs
  Typical Power: 6.2 W

ConnectX-4
  Ports: Single, Dual
  Port Speed (Gb/s): IB: SDR, DDR, QDR, FDR10, FDR, EDR; Eth: 1, 10, 25, 40, 50, 56, 100
  PCIe: Gen3 x8 or Gen3 x16
  Connectors: QSFP28
  Message Rate: 150 million msgs/sec
  Latency: 0.6 µs
  Typical Power: 16.3 W

ConnectX-5
  Ports: Single, Dual
  Port Speed (Gb/s): IB: SDR, DDR, QDR, FDR, EDR; Eth: 1, 10, 25, 40, 50, 100
  PCIe: Gen3 x16 or Gen4 x16
  Connectors: QSFP28
  Message Rate: 200 million msgs/sec (Gen4); 165 million msgs/sec (Gen3)
  Latency: 0.6 µs
  Typical Power: 19.3 W (Gen4); 16.2 W (Gen3)

ConnectX-6
  Ports: Single, Dual
  Port Speed (Gb/s): IB: SDR, DDR, QDR, FDR, EDR, HDR100, HDR; Eth: 1, 10, 25, 40, 50, 100, 200
  PCIe: Gen3/Gen4 x16, or 32 lanes as 2x Gen3 x16
  Connectors: QSFP56
  Message Rate: 215 million msgs/sec
  Latency: 0.6 µs

ConnectX-7
  Ports: 1, 2, or 4
  Port Speed (Gb/s): IB: HDR, NDR200, NDR; Eth: 10, 25, 40, 50, 100, 200, 400
  PCIe: Gen5 x16 or x32
  Connectors: SFP56, QSFP56, QSFP56-DD, QSFP112, SFP112
  Message Rate: 330–370 million msgs/sec

Highest levels of scalability. Simplified network manageability. Maximum system productivity. NVIDIA provides a complete end-to-end InfiniBand solution.

HPC Networks

When setting up an HPC network, it’s important to ask how much blocking, or oversubscription, you’re willing to live with. Oversubscription occurs when the node-facing ports on an edge switch share a smaller number of uplink ports to the core switch. For a cluster with 108 nodes, we can use four Edge switches and connect 27 nodes to each 36-port Edge switch. We can then take one Core switch and run 9 uplinks from each Edge switch to it. With four Edge switches, that uses all 36 ports of the Core switch for the 108 nodes. Because each Edge switch carries 27 nodes but only 9 uplinks, we have a 27-to-9, or 3-to-1, oversubscription (Figure 1).
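The arithmetic in the example above can be sketched in a few lines. This is a minimal illustration; the function name is made up, and the 108-node, 36-port numbers come from the text:

```python
# Sketch of the 3:1 oversubscription example above.
# The helper name and constants are illustrative, not from any NVIDIA tool.

def oversubscription_ratio(nodes_per_edge: int, uplinks_per_edge: int) -> float:
    """Ratio of node-facing (downlink) ports to uplink ports on an edge switch."""
    return nodes_per_edge / uplinks_per_edge

PORTS_PER_SWITCH = 36
NODES = 108
EDGE_SWITCHES = 4

nodes_per_edge = NODES // EDGE_SWITCHES               # 27 nodes per edge switch
uplinks_per_edge = PORTS_PER_SWITCH - nodes_per_edge  # 9 uplinks to the core

ratio = oversubscription_ratio(nodes_per_edge, uplinks_per_edge)  # 3.0, i.e. 3:1
core_ports_used = EDGE_SWITCHES * uplinks_per_edge    # 36, filling one core switch
```

Note that the uplink count falls out of the port budget: whatever edge ports are not used for nodes are available as uplinks.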

How will this affect job performance? As long as not all 27 node ports on an Edge switch are driving traffic across the uplinks at once, the effective oversubscription is less than 3 to 1. For instance, if only 9 nodes on a single Edge switch are communicating with nodes on other Edge switches, the traffic remains non-blocking. Why does this matter? If you want no oversubscription or blocking at all on 108 nodes, you need either a 108-port Director switch at a much higher cost than five 36-port switches, or six 36-port Edge switches plus another six 36-port Core switches, for a total of 12 switches (Figure 2).
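The non-blocking sizing can be checked the same way. This is a sketch under the assumptions in the text: 36-port switches and the symmetric six-Edge/six-Core layout described above.

```python
import math

# Non-blocking (1:1) two-tier sizing for the same 108-node cluster,
# assuming 36-port switches and the six-edge/six-core layout from the text.

PORTS_PER_SWITCH = 36
NODES = 108

down_per_edge = PORTS_PER_SWITCH // 2           # 18 node-facing ports per edge switch
up_per_edge = PORTS_PER_SWITCH - down_per_edge  # 18 uplinks: uplink bandwidth matches node bandwidth

edge_switches = math.ceil(NODES / down_per_edge)        # 6 edge switches
core_switches = edge_switches                           # the text pairs 6 cores with the 6 edges
links_per_edge_per_core = up_per_edge // core_switches  # 3 links from each edge to each core

total_switches = edge_switches + core_switches  # 12 switches, vs. 5 in the 3:1 design
```

The cost of non-blocking shows up directly: halving the node ports per Edge switch to match uplink bandwidth more than doubles the switch count.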


Oversubscribed Network

Figure 1: Oversubscribed network

Non-Blocking Network

Figure 2: Non-blocking network


NVIDIA INFINIBAND SWITCHES

Unparalleled data throughput and density

Workloads requiring high-resolution simulations, large datasets, and highly parallel algorithms demand ultrafast processing. InfiniBand, the world’s only fully offloadable in-network computing platform, provides the dramatic leap in performance needed to achieve unmatched data center performance at lower cost and complexity.

QM9700
  Performance: 400 Gb/s per port
  Switch Radix: 64 NDR ports
  Data Throughput: 51.2 Tb/s
  Connectors: 32x OSFP (passive or optical)
  System Power Usage: 1,084 W (passive); 1,720 W (optical)
  PSU Redundancy: Yes
  Fan Redundancy: Yes
  Mgmt. Ports: 1x USB 3.0, 1x USB for I2C channel, 1x RJ45, 1x RJ45 (UART)
  CPU: x86 Coffee Lake i3
  System Memory: Single 8 GB 2,666 MT/s DDR4 SO-DIMM
  Height: 1U
  Data Sheet: QM9700 Datasheet

QM8700
  Performance: 200 Gb/s per port
  Switch Radix: 40 HDR ports
  Data Throughput: 16 Tb/s
  Connectors: QSFP56
  System Power Usage: 253 W (passive)
  PSU Redundancy: Yes
  Fan Redundancy: Yes
  Mgmt. Ports: 1x RJ45, 1x RS232 console port, 1x micro USB
  CPU: Broadwell ComEx D-1508 2.2 GHz dual-core x86
  System Memory: Single 8 GB
  Height: 1U
  Data Sheet: QM8700 Datasheet

SB7800
  Performance: 100 Gb/s per port
  Switch Radix: 36 EDR ports
  Data Throughput: 7.2 Tb/s
  Connectors: QSFP28
  System Power Usage: 136 W
  PSU Redundancy: Yes
  Fan Redundancy: Yes
  Mgmt. Ports: 10/100/1000 Mb/s Ethernet, RS232 port over DB9, USB port
  CPU: Dual-core x86
  Height: 1U
  Data Sheet: SB7800 Datasheet

SB7780/SB7880
  Performance: 100 Gb/s per port
  Switch Radix: 36 EDR ports
  Data Throughput: 7.2 Tb/s
  Connectors: QSFP28
  System Power Usage: 136 W
  PSU Redundancy: Yes
  Fan Redundancy: Yes
  Mgmt. Ports: 100/1000 Mb/s Ethernet, RS232 port over DB9, USB port
  CPU: Dual-core x86
  Height: 1U
  Data Sheet: SB7780/SB7880 Datasheet