
Mellanox InfiniBand

Connect. Accelerate. Outperform.

Mellanox Technologies is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox InfiniBand intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance.

Mellanox offers a choice of high-performance solutions: network and multicore processors, network adapters, switches, cables, software, and silicon. These products accelerate application runtime and maximize business results for a wide range of markets, including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom, and financial services.

Cluster Computing

In cluster computing, two of the key elements in running a program across multiple nodes are network bandwidth and latency: how much data can move between nodes, and how long each transfer takes. Mellanox currently offers three speeds of InfiniBand: QDR at 40Gbps, FDR at 56Gbps, and EDR at 100Gbps.

EDR is currently the fastest Mellanox InfiniBand product on the market, offering the highest available bandwidth. With Virtual Protocol Interconnect (VPI) technology, Mellanox cards provide not only InfiniBand connectivity but also up to 100Gbps of Ethernet connectivity.
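To put those numbers in perspective, here is a rough, illustrative Python sketch of how link speed and latency combine into message delivery time. The per-generation latency figures are assumed placeholder values (the adapters below quote roughly 1 μs end-to-end), and the rates are the nominal signaling rates quoted above, not measured goodput.

```python
# Illustrative back-of-the-envelope estimate (not vendor data):
# delivery time = end-to-end latency + payload / nominal link rate.

GENERATIONS = {
    # name: (nominal link rate in Gb/s, assumed latency in microseconds)
    "QDR": (40, 1.3),
    "FDR": (56, 1.0),
    "EDR": (100, 0.6),
}

def transfer_time_us(payload_bytes: int, rate_gbps: float, latency_us: float) -> float:
    """Rough time in microseconds to deliver one message of payload_bytes."""
    serialization_us = payload_bytes * 8 / (rate_gbps * 1e3)  # Gb/s -> bits per microsecond
    return latency_us + serialization_us

if __name__ == "__main__":
    for name, (rate, lat) in GENERATIONS.items():
        print(f"{name}: 1 MiB message is roughly {transfer_time_us(1 << 20, rate, lat):.1f} us")
```

Large messages are dominated by the serialization term (bandwidth), while small messages are dominated by the latency term, which is why both figures matter when sizing a cluster interconnect.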

Product Specifications

Model              | Single-Port IB Bandwidth | Single-Port Ethernet Bandwidth | IB Latency | Other Features
ConnectX-3 VPI     | 40Gbps or 56Gbps         | 40Gbps                         | ~1 μs      |
ConnectX-3 Pro VPI | 40Gbps or 56Gbps         | 40Gbps                         | ~1 μs      | OCP form factor
Connect-IB         | 56Gbps                   | N/A                            | ~1 μs      |
ConnectX-4 VPI     | 100Gbps                  | 100Gbps                        | <90 ns     |

HPC Networks

When setting up an HPC network, it’s important to ask yourself how much blocking, or oversubscription, you’re willing to live with. Oversubscription occurs when the node-facing ports on an edge switch share a smaller number of uplink ports to the core switch. If configuring a cluster with 108 nodes, we can use four Edge switches and connect 27 nodes to each 36-port Edge switch. Then we can take one Core switch and run 9 uplinks from each Edge switch to it. With four Edge switches, that uses all 36 ports of the Core switch for the 108 nodes. Because each Edge switch has 27 nodes sharing 9 uplinks, we have a 27 to 9, or 3 to 1, oversubscription (Figure 1).

How will this affect job performance? As long as not all 27 node ports on an Edge switch are driving traffic over the uplinks at once, the effective oversubscription is less than 3 to 1. For instance, if only 9 nodes on a single Edge switch are communicating with nodes on other Edge switches, the traffic is still non-blocking. Why is this important? Because if you want no oversubscription or blocking on 108 nodes, you need either a 108-port Director switch, at a much higher cost than five 36-port switches, or six 36-port Edge switches plus another six 36-port Core switches, for a total of 12 switches (Figure 2).
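The arithmetic above is easy to check. The short Python sketch below (illustrative only, not a Mellanox tool) reproduces the 3:1 oversubscribed layout of Figure 1 and the non-blocking layout of Figure 2 from the node count, edge-switch port count, and chosen split of downlinks versus uplinks.

```python
# Quick sanity check of the topology arithmetic in the text. The numbers
# (36-port edge switches, 108 nodes, 27 or 18 downlinks per edge) come
# from the example above; the helper itself is only an illustration.

def fat_tree_summary(nodes: int, edge_ports: int, downlinks_per_edge: int):
    """Return (edge switches, uplinks per edge, core-facing ports needed, oversubscription)."""
    uplinks_per_edge = edge_ports - downlinks_per_edge
    edge_switches = -(-nodes // downlinks_per_edge)      # ceiling division
    core_ports = edge_switches * uplinks_per_edge        # uplinks the core tier must terminate
    oversubscription = downlinks_per_edge / uplinks_per_edge
    return edge_switches, uplinks_per_edge, core_ports, oversubscription

# Figure 1: 27 nodes and 9 uplinks per 36-port Edge switch
print(fat_tree_summary(nodes=108, edge_ports=36, downlinks_per_edge=27))
# -> (4, 9, 36, 3.0): four Edge switches, 36 core ports (one 36-port Core switch), 3:1 blocking

# Figure 2: 18 nodes and 18 uplinks per Edge switch
print(fat_tree_summary(nodes=108, edge_ports=36, downlinks_per_edge=18))
# -> (6, 18, 108, 1.0): six Edge switches with 18 uplinks each, needing 108 core-facing ports
```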

Mellanox HPC Networks

Figure 1: Oversubscribed Network

Figure 2: Non-Blocking Network

Highest levels of scalability. Simplified network manageability. Maximum system productivity. Mellanox is able to provide a complete end-to-end InfiniBand solution.

Mellanox is Committed to Quality

Mellanox Technologies has two classes of switches: Edge switches and Director switches. Mellanox Edge switches come as 12- to 36-port switches and are usually used as “top of the rack” edge switches in larger systems. These switches can also be used as “collector” or “core” switches on systems that aren’t large enough to need Director switches, and their fabric management can handle up to 648 nodes in QDR or FDR configurations, or up to 2048 nodes in an EDR configuration. While the QDR switches are being phased out, the FDR switches are capable of providing both QDR and FDR connectivity.

Mellanox InfiniBand Edge Switch

Edge Switches

Model              | SX6005  | SX6012   | SX6015    | SX6018    | SX6025    | SX6036    | SB7700/SB7800 | SB7790/SB7890
Ports              | 12      | 12       | 18        | 18        | 36        | 36        | 36            | 36
Height             | 1U      | 1U       | 1U        | 1U        | 1U        | 1U        | 1U            | 1U
Switching Capacity | 1.3Tb/s | 1.3Tb/s  | 2.016Tb/s | 2.016Tb/s | 4.032Tb/s | 4.032Tb/s | 7.2Tb/s       | 7.2Tb/s
Link Speed         | 56Gb/s  | 56Gb/s   | 56Gb/s    | 56Gb/s    | 56Gb/s    | 56Gb/s    | 100Gb/s       | 100Gb/s
Management         | No      | Yes      | No        | Yes       | No        | Yes       | Yes           | No
Management Ports   |         | 1        |           | 2         |           | 2         | 2             |
PSU Redundancy     | No      | Optional | Yes       | Yes       | Yes       | Yes       | Yes           | Yes
Fan Redundancy     | No      | No       | Yes       | Yes       | Yes       | Yes       | Yes           | Yes
Integrated Gateway |         | Optional |           | Optional  |           | Optional  |               |

Mellanox’s family of InfiniBand switches delivers the highest performance and port density, with complete fabric management solutions that enable compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity.

Mellanox InfiniBand Director Switch

Mellanox Technologies Delivers Scalability

Mellanox’s Scalable HPC interconnect solutions are paving the road to Exascale computing by delivering the highest scalability, efficiency, and performance for HPC systems today and in the future. Mellanox Technologies Scalable HPC solutions are proven and certified for a large variety of market segments, clustering topologies and environments (Linux, Windows). Mellanox and Aspen Systems are active members of the HPC Advisory Council and contribute to high-performance computing outreach and education around the world.

Mellanox Director switches come as 108- to 648-port switches and provide the most bandwidth and lowest latency for clusters of up to 648 ports. If more nodes are needed, these switches serve as the “core” switches that connect to the Edge switches mentioned above.


Director Switches

Model              | SX6506    | SX6512    | CS7520     | SX6518    | CS7510     | SX6536    | CS7500
Ports              | 108       | 216       | 216        | 324       | 324        | 648       | 648
Height             | 6U        | 9U        | 12U        | 16U       | 16U        | 29U       | 28U
Switching Capacity | 12.12Tb/s | 24.24Tb/s | 43.2Tb/s   | 36.36Tb/s | 64.8Tb/s   | 72.52Tb/s | 130Tb/s
Link Speed         | 56Gb/s    | 56Gb/s    | 100Gb/s    | 56Gb/s    | 100Gb/s    | 56Gb/s    | 100Gb/s
Management         | 648 nodes | 648 nodes | 2048 nodes | 648 nodes | 2048 nodes | 648 nodes | 2048 nodes
Management HA      | Yes       | Yes       | Yes        | Yes       | Yes        | Yes       | Yes
Console Cables     | Yes       | Yes       | Yes        | Yes       | Yes        | Yes       | Yes
Spine Modules      | 3         | 6         | 6          | 9         | 9          | 18        | 18
Leaf Modules (max) | 6         | 12        | 6          | 18        | 9          | 36        | 18
PSU Redundancy     | Yes (N+N) | Yes (N+N) | Yes (N+N)  | Yes (N+N) | Yes (N+N)  | Yes (N+N) | Yes (N+N)
Fan Redundancy     | Yes       | Yes       | Yes        | Yes       | Yes        | Yes       | Yes

World-class cluster, network, and storage performance with guaranteed bandwidth and low-latency services. The next level of scalability and performance requires a new generation of data and application acceleration.

Mellanox Connect-IB Single/Dual-Port InfiniBand Host Channel Adapter Cards

Connect-IB adapter cards provide the highest performing and most scalable interconnect solution for server and storage systems. High Performance Computing (HPC), Web 2.0, Cloud, Big Data, Financial Services, Virtualized Data Centers and Storage applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.

Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.
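As a sanity check on the “more than 100Gb/s” figure, the sketch below works through the usual encoding arithmetic: two FDR 4X ports after 64/66b encoding versus the effective throughput of a PCIe 3.0 x16 slot. These are textbook encoding numbers used for illustration, not measured results.

```python
# Rough arithmetic behind the ">100Gb/s" claim; illustrative only.

FDR_LANE_GBPS = 14.0625      # FDR signaling rate per lane
LANES_PER_PORT = 4           # 4X InfiniBand port
ENCODING_64_66 = 64 / 66     # FDR uses 64/66b encoding

fdr_port_data_gbps = FDR_LANE_GBPS * LANES_PER_PORT * ENCODING_64_66
dual_port_gbps = 2 * fdr_port_data_gbps
print(f"Two FDR ports: ~{dual_port_gbps:.0f} Gb/s of data bandwidth")   # ~109 Gb/s

# PCIe 3.0 x16: 8 GT/s per lane with 128b/130b encoding
pcie3_x16_gbps = 8 * 16 * (128 / 130)
print(f"PCIe 3.0 x16: ~{pcie3_x16_gbps:.0f} Gb/s")                      # ~126 Gb/s
```

In other words, a PCIe 3.0 x16 slot has enough headroom to feed both FDR ports at full rate, which is how the adapter sustains more than 100Gb/s of aggregate throughput.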

Connect-IB offloads protocol processing and data movement from the CPU to the interconnect, maximizing CPU efficiency and accelerating parallel and data-intensive application performance. Connect-IB also supports new data operations, including noncontiguous memory transfers, which eliminate unnecessary data copy operations and CPU overhead.

Mellanox Connect-IB Dual-Port InfiniBand Host Channel Adapter Card

Choose from Some of Our Most Popular Mellanox InfiniBand Switches

CONTACT YOUR SALES ENGINEER TODAY! (800) 992-9242  Request a Quote