Mellanox InfiniBand. Connect. Accelerate. Outperform.
Mellanox InfiniBand intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox Technologies is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure.
Mellanox offers a choice of high-performance solutions: network and multicore processors, network adapters, switches, cables, software, and silicon that accelerate application runtime and maximize business results for a wide range of markets, including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom, and financial services.
In cluster computing, two of the key elements in running a program across multiple nodes are network bandwidth and latency: how much data can move between nodes, and how long each transaction takes. Mellanox currently offers three speeds of InfiniBand: QDR at 40Gbps, FDR at 56Gbps, and EDR at 100Gbps.
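As a rough rule of thumb, the two combine in a simple alpha-beta cost model: transfer time ≈ latency + message size / bandwidth. The Python sketch below is our illustration using the article's nominal figures (the ~1 μs latency comes from the adapter table further down); real numbers depend on the protocol and fabric load.

```python
# Alpha-beta transfer model: time = latency + size / bandwidth.
RATES_GBPS = {"QDR": 40, "FDR": 56, "EDR": 100}  # nominal link rates
LATENCY_S = 1e-6                                 # ~1 us port-to-port (assumed)

def transfer_time(size_bytes, rate_gbps, latency_s=LATENCY_S):
    """Estimated seconds to move one message across a single link."""
    return latency_s + (size_bytes * 8) / (rate_gbps * 1e9)

# Example: a 1 MiB message at each link speed
for name, rate in RATES_GBPS.items():
    print(f"{name}: {transfer_time(1 << 20, rate) * 1e6:.1f} us")
```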
EDR is currently the fastest Mellanox InfiniBand product on the market, with the highest bandwidth. With Virtual Protocol Interconnect (VPI) technology, Mellanox cards not only provide InfiniBand connectivity but also allow up to 100Gbps of Ethernet connectivity.
| Model | Single Port IB Bandwidth | Single Port Ethernet Bandwidth | IB Latency | Other Features |
|---|---|---|---|---|
| ConnectX-3 VPI | 40Gbps or 56Gbps | 40Gbps | ~1 μs | – |
| ConnectX-3 PRO VPI | 40Gbps or 56Gbps | 40Gbps | ~1 μs | OCP Form Factor |
| ConnectX-4 VPI | 100Gbps | 100Gbps | <90 ns | 300 |
When setting up an HPC network, it's important to decide how much blocking, or oversubscription, you're willing to accept. Oversubscription occurs when the node-facing ports on an Edge switch share a smaller number of uplink ports to the Core switch. For example, in a 108-node cluster we can use four 36-port Edge switches and connect 27 nodes to each. Each Edge switch then uses its remaining 9 ports as uplinks to a single Core switch; with four Edge switches, that consumes all 36 ports of the Core switch. Because each Edge switch carries 27 nodes over 9 uplinks, we have a 27 to 9, or 3 to 1, oversubscription (Figure 1).
How will this affect job performance? As long as not all 27 nodes on an Edge switch are using the bandwidth to the uplinked switch at once, the effective oversubscription is less than 3 to 1. For instance, if only 9 nodes on a single Edge switch are communicating with nodes on other Edge switches, the fabric still behaves as non-blocking. Why is this important? Because if you want no oversubscription or blocking on 108 nodes, you need either a 108-port Director switch, at a much higher cost than five 36-port switches, or six 36-port Edge switches plus another six 36-port Core switches, for a total of 12 switches (Figure 2).
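A minimal sketch of this arithmetic in Python (the function and the per-switch figures are ours, mirroring the example above):

```python
# Oversubscription is the ratio of node-facing ports to core-facing
# uplinks on an edge switch.
def oversubscription(nodes_per_edge, uplinks_per_edge):
    return nodes_per_edge / uplinks_per_edge

print(oversubscription(27, 9))    # 3.0 -> the 3:1 blocking design (Figure 1)
print(oversubscription(18, 18))   # 1.0 -> non-blocking: six Edge switches,
                                  #        18 nodes and 18 uplinks each (Figure 2)
```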
Highest levels of scalability. Simplified network manageability. Maximum system productivity. Mellanox provides a complete end-to-end InfiniBand solution.
Mellanox is Committed to Quality
Mellanox Technologies has two classes of switches: Edge switches and Director switches. Mellanox Edge switches come in 12- to 36-port configurations and are usually used as "top-of-rack" switches in larger systems. On systems that aren't large enough to need Director switches, they can also serve as "collector" or "core" switches, managing up to 648 nodes in QDR or FDR configurations, or up to 2048 nodes in an EDR configuration. FDR switches can also run links at QDR speeds.
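The 648-node figure matches the capacity of a two-tier non-blocking fat tree built from 36-port switching elements. A quick sketch of that arithmetic (our illustration, not a Mellanox sizing tool):

```python
# Two-tier non-blocking fat tree of p-port switches: each leaf splits its
# ports evenly (p/2 down to nodes, p/2 up), and each spine can reach at
# most p leaves, so the node count tops out at p * p/2.
def max_two_tier_nodes(switch_ports):
    nodes_per_leaf = switch_ports // 2
    max_leaves = switch_ports
    return nodes_per_leaf * max_leaves

print(max_two_tier_nodes(36))  # 648
```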
Mellanox's family of InfiniBand switches delivers the highest performance and port density with complete fabric management solutions, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity.
Mellanox Technologies Delivers Scalability
Mellanox’s Scalable HPC interconnect solutions are paving the road to Exascale computing by delivering the highest scalability, efficiency, and performance for HPC systems today and in the future. Mellanox Technologies Scalable HPC solutions are proven and certified for a large variety of market segments, clustering topologies and environments (Linux, Windows). Mellanox and Aspen Systems are active members of the HPC Advisory Council and contribute to high-performance computing outreach and education around the world.
Mellanox Director switches come in 108- to 648-port configurations and provide the most bandwidth and lowest latency for clusters of up to 648 ports. If more nodes are needed, these switches serve as the "core" switches connecting to the Edge switches.
| Director Switches | | | | | | | |
|---|---|---|---|---|---|---|---|
| Management | 648 nodes | 648 nodes | 2048 nodes | 648 nodes | 2048 nodes | 648 nodes | 2048 nodes |
| Leaf modules (max) | 6 | 12 | 6 | 18 | 9 | 36 | 18 |
| PSU Redundancy | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) | Yes (N+N) |
World-class cluster, network, and storage performance with guaranteed bandwidth and low-latency services. The next level of scalability and performance requires a new generation of data and application acceleration.
Mellanox Connect-IB Single/Dual-Port InfiniBand Host Channel Adapter Cards
Connect-IB adapter cards provide the highest-performing and most scalable interconnect solution for server and storage systems. High Performance Computing (HPC), Web 2.0, cloud, Big Data, financial services, virtualized data center, and storage applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.
Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.
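The "more than 100Gb/s" figure follows from FDR's known link parameters: four lanes at 14.0625Gb/s each, with 64b/66b encoding. A quick back-of-envelope check (our sketch):

```python
# Effective data rate of one FDR port: 4 lanes x 14.0625 Gb/s, less the
# 64b/66b encoding overhead.
lane_gbps = 14.0625
lanes = 4
encoding = 64 / 66

per_port = lane_gbps * lanes * encoding
print(f"one FDR port:  {per_port:.1f} Gb/s")      # ~54.5 Gb/s
print(f"two FDR ports: {2 * per_port:.1f} Gb/s")  # ~109 Gb/s, i.e. >100 Gb/s
```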
Connect-IB offloads protocol processing and data movement from the CPU to the interconnect, maximizing CPU efficiency and accelerating parallel and data-intensive application performance. Connect-IB also supports new data operations, including noncontiguous memory transfers, which eliminate unnecessary data copies and CPU overhead.