INFINIBAND NETWORKING SOLUTIONS
Workloads involving high-resolution simulations, large datasets, and highly parallel algorithms demand ultrafast, high-bandwidth networking.
NVIDIA InfiniBand, the world's only fully offloadable, In-Network Computing platform, allows you to achieve unmatched performance in high-performance computing (HPC), artificial intelligence, and hyperscale cloud infrastructure with lower cost and complexity.
InfiniBand smart adapters from NVIDIA deliver best-in-class performance and efficiency, combining high throughput, low latency, and high message rates.
- World-class cluster performance
- High-performance networking and storage access
- In-network computing
- Efficient use of compute resources
- Guaranteed bandwidth and low-latency services
Modern workloads demand optimal scalability, high performance, and low latency from InfiniBand host channel adapters (HCAs). NVIDIA In-Network Computing engines deliver that combination.
| | FDR Adapter | EDR Adapter | EDR Adapter | HDR Adapter | NDR Adapter |
|---|---|---|---|---|---|
| Ports | Single, Dual | Single, Dual | Single, Dual | Single, Dual | 1, 2, 4 |
| Port Speed (Gb/s) | IB: SDR, DDR, QDR, FDR10, FDR<br>Eth: 1, 10, 40, 56 | IB: SDR, DDR, QDR, FDR10, FDR, EDR<br>Eth: 1, 10, 25, 40, 50, 56, 100 | IB: SDR, DDR, QDR, FDR, EDR<br>Eth: 1, 10, 25, 40, 50, 100 | IB: SDR, DDR, QDR, FDR, EDR, HDR100, HDR<br>Eth: 1, 10, 25, 40, 50, 100, 200 | IB: HDR, NDR200, NDR<br>Eth: 10, 25, 40, 50, 100, 200, 400 |
| PCIe | Gen3 x8 | Gen3 x8<br>32 lanes as 2x Gen3 x16-lane PCIe | | | |
| Connectors | QSFP+ | QSFP28 | QSFP28 | QSFP56 | SFP56, QSFP56, QSFP56-DD, QSFP112, SFP112 |
| Typical Power | 6.2 W | 16.3 W | 19.3 W (Gen4)<br>16.2 W (Gen3) | | |
Highest levels of scalability. Simplified network manageability. Maximum system productivity. NVIDIA provides a complete end-to-end InfiniBand solution.
When setting up an HPC network, it's important to decide how much blocking, or oversubscription, you're willing to accept. Oversubscription occurs when multiple node-facing ports on an Edge switch share a smaller number of uplink ports to the Core switch. For a cluster with 108 nodes, we can use four 36-port Edge switches and connect 27 nodes to each. Each Edge switch then gets 9 uplinks to a single Core switch; with four Edge switches, that consumes all 36 ports of the Core switch for the 108 nodes. Because each Edge switch carries 27 nodes over 9 uplinks, we have a 27-to-9, or 3-to-1, oversubscription (Figure 1).
How will this affect job performance? As long as not all 27 node ports on an Edge switch are simultaneously driving traffic across the uplinks, the effective oversubscription is less than 3 to 1. For instance, if only 9 nodes on a single Edge switch are communicating with nodes on other Edge switches, the fabric still behaves as non-blocking. Why is this important? Because if you want no oversubscription or blocking across 108 nodes, you need either a 108-port Director switch, at a much higher cost than five 36-port switches, or six 36-port Edge switches plus another six 36-port Core switches, for a total of 12 switches (Figure 2).
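The arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming 36-port switches as in the example; `oversubscription` is a hypothetical helper, not part of any NVIDIA tooling:

```python
# Hypothetical helper: oversubscription ratio of one edge switch,
# i.e. node-facing downlinks divided by uplinks to the core.
def oversubscription(ports_per_switch, nodes_per_edge):
    uplinks = ports_per_switch - nodes_per_edge
    return nodes_per_edge / uplinks

# 108-node example: four 36-port edge switches, 27 nodes each,
# leaving 9 uplinks per edge switch into one 36-port core switch.
ratio = oversubscription(36, 27)
print(f"{ratio:.0f}:1 oversubscription")   # 27 / 9 = 3:1

# Non-blocking alternative: split each 36-port edge switch 18/18
# (18 nodes, 18 uplinks), which for 108 nodes needs 6 edge switches
# plus 6 core switches.
nonblocking = oversubscription(36, 18)
print(f"{nonblocking:.0f}:1 (non-blocking)")  # 18 / 18 = 1:1
```

The same function makes it easy to compare other splits, e.g. 24 nodes and 12 uplinks per edge switch gives a 2:1 oversubscription.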
NVIDIA INFINIBAND SWITCHES
Unparalleled data throughput and density
Workloads involving high-resolution simulations, large datasets, and highly parallel algorithms require ultrafast processing. InfiniBand, the world's only fully offloadable, in-network computing platform, provides the dramatic leap needed to achieve unmatched data center performance with less cost and complexity.
| | QM9700 | QM8700 | SB7800 | SB7780/SB7880 |
|---|---|---|---|---|
| Performance | 400 Gb/s per port | 200 Gb/s per port | 100 Gb/s per port | 100 Gb/s per port |
| Switch Radix | 64 NDR | 40 HDR | 36 EDR | 36 EDR |
| Data Throughput | 51.2 Tb/s | 16 Tb/s | 7.2 Tb/s | 7.2 Tb/s |
| System Power Usage (passive or optical) | 1,084 W (passive)<br>1,720 W (optical) | 253 W (passive) | 136 W | 136 W |
| Mgmt. Ports | 1x USB 3.0<br>1x USB for I2C channel<br>1x RJ45 (UART)<br>1x console port: RS232<br>1x micro USB | 10/100/1000 Mb/s Ethernet<br>RS232 port over DB9 | 100/1000 Mb/s Ethernet<br>RS232 port over DB9 | |
| CPU | x86 Coffee Lake i3 | Broadwell ComEx D-1508 2.2 GHz | Dual-Core x86 | Dual-Core x86 |
| System Memory | Single 8GB | | | |
| Data Sheet | QM9700 Datasheet | QM8700 Datasheet | SB7800 Datasheet | SB7780/SB7880 Datasheet |