Mellanox InfiniBand

Mellanox InfiniBand. Connect. Accelerate. Outperform.

Mellanox InfiniBand intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox Technologies is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure.

Mellanox offers a choice of high-performance solutions: network and multicore processors, network adapters, switches, cables, software, and silicon. These accelerate application runtime and maximize business results for a wide range of markets, including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom, and financial services.


Cluster Computing

In cluster computing, two of the key elements in running a program across multiple nodes are network bandwidth and latency: how much data can move between nodes, and how long each transfer takes. Mellanox currently offers multiple speeds of Mellanox InfiniBand: FDR at 56 Gb/s, EDR at 100 Gb/s, and HDR at 200 Gb/s.

HDR is currently the fastest Mellanox InfiniBand product on the market, offering the highest bandwidth available. With Virtual Protocol Interconnect (VPI) technology, Mellanox cards not only provide InfiniBand connectivity, but also allow up to 200 Gb/s of Ethernet connectivity.
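
To see how bandwidth and latency combine, the sketch below uses a simple "latency plus serialization time" model with the link speeds above and the roughly 0.6 µs adapter latency listed in the card table further down. It is an illustrative Python sketch, not a Mellanox tool or a benchmark, and it ignores protocol and software overhead.

    # Illustrative sketch (not a benchmark): estimate single-message transfer time
    # as adapter latency plus serialization time on each InfiniBand generation.
    LATENCY_S = 0.6e-6                                  # ~0.6 us, per the adapter table below
    LINK_GBPS = {"FDR": 56, "EDR": 100, "HDR": 200}     # port speeds from the text above

    def transfer_time(message_bytes: int, link_gbps: float) -> float:
        """Seconds to move one message: latency + bits / link rate."""
        return LATENCY_S + (message_bytes * 8) / (link_gbps * 1e9)

    for gen, gbps in LINK_GBPS.items():
        print(f"{gen}: 1 MB message takes about {transfer_time(1_000_000, gbps) * 1e6:.0f} us")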

Mellanox PCIe Interface Card Specifications

General Specs | ConnectX-3 VPI | ConnectX-4 VPI | ConnectX-5 VPI | ConnectX-6 VPI
Ports | Single, Dual | Single, Dual | Single, Dual | Single, Dual
Port Speed (Gb/s) | IB: FDR10, FDR; Eth: 10, 40, 56 | IB: FDR, EDR; Eth: 10, 25, 40, 50, 56, 100 | IB: FDR, EDR; Eth: 10, 25, 40, 50, 100 | IB: FDR, EDR, HDR 200, HDR 100; Eth: 10, 25, 40, 50, 100, 200
PCIe | Gen3 x8 | Gen3 x8, Gen3 x16 | Gen3 x16, Gen4 x16 | Gen3 x16, Gen4 x16, 32 lanes as 2 x 16-lane PCIe
Connectors | QSFP+ | QSFP28 | QSFP28 | QSFP56
Message Rate (million msgs/sec) | 36 | 150 | 200 (ConnectX-5 Ex, Gen4 server); 165 (ConnectX-5, Gen3 server) | Contact Aspen Systems
Latency (µs) | 0.64 | 0.6 | 0.6 | 0.6
Power (2 ports, max. speed) | 6.2W | 16.3W | 19.3W (ConnectX-5 Ex, Gen4 server); 16.2W (ConnectX-5, Gen3 server) | Contact Aspen Systems

Mellanox InfiniBand Data Rate Specifications

InfiniBand Generation | Line Rate (per lane) | QSFP Port Speed (4 lanes) | Switch I/O | Ports/Switch
FDR | 14 Gb/s | 56 Gb/s | 2.0 Tb/s | 36
EDR | 25 Gb/s | 100 Gb/s | 3.6 Tb/s | 36
HDR | 50 Gb/s | 200 Gb/s | 8.0 Tb/s | 40
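
As a quick sanity check on the table, each QSFP port carries four lanes, so the port speed is four times the per-lane line rate, and the switch I/O column is simply port speed times the number of ports. The short Python sketch below (illustrative only, not Mellanox software) reproduces those numbers:

    # Illustrative sketch: derive the table values from per-lane rates,
    # assuming 4 lanes per QSFP port.
    LANE_RATE_GBPS = {"FDR": 14, "EDR": 25, "HDR": 50}   # data rate per lane
    SWITCH_PORTS   = {"FDR": 36, "EDR": 36, "HDR": 40}   # ports per edge switch

    for gen, lane_gbps in LANE_RATE_GBPS.items():
        port_gbps = 4 * lane_gbps                        # 4x lanes per QSFP port
        switch_io_tbps = SWITCH_PORTS[gen] * port_gbps / 1000
        print(f"{gen}: {port_gbps} Gb/s per port, {switch_io_tbps:.1f} Tb/s switch I/O")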

HPC Networks

When setting up an HPC network, it’s important to ask yourself how much blocking, or oversubscription, you’re willing to live with. Oversubscription occurs when the node ports on an edge switch share a smaller number of uplink ports to the core switch. For example, in a cluster with 108 nodes, we can use four 36-port Edge switches and connect 27 nodes to each. We can then take one Core switch and run 9 uplinks from each Edge switch to it; with four Edge switches, that uses all 36 ports of the Core switch for the 108 nodes. Because each Edge switch carries 27 nodes over 9 uplinks, we have a 27-to-9, or 3-to-1, oversubscription (Figure 1).

How will this affect job performance? As long as not all 27 nodes on an Edge switch are pushing traffic across the uplinks at once, the effective oversubscription is less than 3 to 1. For instance, if only 9 nodes on a single Edge switch are communicating with nodes on other Edge switches, the traffic is still non-blocking. Why is this important? Because if you want no oversubscription or blocking on 108 nodes, you need either a 108-port Director switch, at a much higher cost than five 36-port switches, or six 36-port Edge switches plus another six 36-port Core switches, for a total of 12 switches (Figure 2).
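
To make the arithmetic concrete, here is a small Python sketch (our own illustration, not a Mellanox tool) that computes the oversubscription ratio for the two layouts above, assuming the Figure 2 design places 18 nodes and 18 uplinks on each of the six Edge switches:

    # Illustrative sketch: oversubscription ratio of an edge switch in a
    # two-tier fabric (node ports sharing a smaller set of uplink ports).
    def oversubscription(nodes_per_edge: int, uplinks_per_edge: int) -> float:
        """Ratio of node-facing bandwidth to uplink bandwidth per edge switch."""
        return nodes_per_edge / uplinks_per_edge

    # Figure 1: 108 nodes, four 36-port Edge switches, one 36-port Core switch.
    print(oversubscription(nodes_per_edge=27, uplinks_per_edge=9))    # 3.0 -> 3:1 oversubscribed

    # Figure 2: 108 nodes, six 36-port Edge switches and six Core switches.
    print(oversubscription(nodes_per_edge=18, uplinks_per_edge=18))   # 1.0 -> non-blocking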

Figure 1: Oversubscribed Network

Figure 2: Non-Blocking Network

Highest levels of scalability. Simplified network manageability. Maximum system productivity. Mellanox provides a complete end-to-end InfiniBand solution.

Mellanox is Committed to Quality

Mellanox Technologies has two classes of switches: Edge switches and Director switches. Mellanox Edge switches come in 12- to 40-port configurations and are usually used as “top of the rack” edge switches in larger systems. On systems that aren’t large enough to need Director switches, Edge switches can also serve as “collector” or core switches, and their onboard management can handle fabrics of up to 648 nodes in FDR configurations or up to 2048 nodes in an EDR configuration.

Mellanox InfiniBand Edge Switch

1U Switches

General Specs | SB7700/SB7800 | SB7790/SB7890 | QM8700 | QM8790
Ports | 36 | 36 | 40 | 40
Family | EDR | EDR | HDR | HDR
Height | 1U | 1U | 1U | 1U
Switching Capacity | 7.2Tb/s | 7.2Tb/s | 16Tb/s | 16Tb/s
Link Speed | 100Gb/s | 100Gb/s | 200Gb/s | 200Gb/s
Interface Type | QSFP28 | QSFP28 | QSFP56 | QSFP56
Management | Yes | No | Yes | No
Management Ports | 2 | – | 1 | –
PSU Redundancy | Yes | Yes | Yes | Yes
Fan Redundancy | Yes | Yes | Yes | Yes
Integrated Gateway | – | – | – | –

Mellanox’s family of InfiniBand switches delivers the highest performance and port density, with complete fabric management solutions that enable compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity.

Mellanox InfiniBand Director Switch

Mellanox Technologies Has Scalability

Mellanox’s Scalable HPC interconnect solutions are paving the road to Exascale computing by delivering the highest scalability, efficiency, and performance for HPC systems today and in the future. Mellanox Technologies Scalable HPC solutions are proven and certified for a large variety of market segments, clustering topologies and environments (Linux, Windows). Mellanox and Aspen Systems are active members of the HPC Advisory Council and contribute to high-performance computing outreach and education around the world.

Mellanox Director switches come in configurations from 108 ports up to 800 ports (at 200 Gb/s) or 1,600 ports (at 100 Gb/s), and provide the most bandwidth and lowest latency for clusters up to that size. If more nodes are needed, Director switches serve as the core switches that the Edge switches connect into.

Contact us about Mellanox InfiniBand

Director Switches

General Specs | CS7510 | CS7500 | CS8500
Ports | 324 | 648 | 800 (200Gb/s) / 1600 (100Gb/s)
Family | EDR | EDR | HDR
Height | 16U | 28U | 29U
Switching Capacity | 64.8Tb/s | 130Tb/s | 320Tb/s
Link Speed | 100Gb/s | 100Gb/s | 200Gb/s
Interface Type | QSFP28 | QSFP28 | QSFP56
Management | Up to 2048 nodes | Up to 2048 nodes | Up to 2048 nodes
Management HA | Yes | Yes | Yes
Console Cables | Yes | Yes | Yes
Spine Modules | 9 | 18 | 20
Leaf Modules (max) | 9 | 18 | 20
Redundancy | Yes (N+N) | Yes (N+N) | Yes (N+N)
Fan Redundancy | Yes | Yes | Liquid cooled

World-class cluster, network, and storage performance with guaranteed bandwidth and low-latency services. The next level of scalability and performance requires a new generation of data and application acceleration.

Choose from Some of Our Most Popular Mellanox InfiniBand Switches

36-Port MSB7800-ES2F InfiniBand Smart Switch
36x EDR InfiniBand ports, up to 100Gb/s.
Shop 36 Port InfiniBand Switches

108-Port MSX6506-NR InfiniBand Switch
Up to 108x FDR/FDR10 Ports.
Shop 108 Port InfiniBand Switches

216-Port MCS7520 InfiniBand Switch
Up to 216x EDR Ports.
Shop 216 Port InfiniBand Switches