Deliver Flexible, Efficient and Scalable Cluster Messaging
The Message Passing Interface (MPI) is a library specification that lets the processes of an HPC application exchange messages across the nodes of a cluster, on a wide variety of parallel computing architectures. Below are some of the MPI implementations most frequently requested by Aspen Systems’ customers; we have experience with all of them.
Intel MPI Library
The Intel MPI Library makes applications perform better on Intel architecture-based clusters by implementing the high-performance MPI-3.1 standard on multiple fabrics. It enables you to deliver maximum end-user performance quickly, even if you change or upgrade to new interconnects, without requiring changes to the software or operating environment. Use this high-performance message-passing library to develop applications that can run on multiple cluster interconnects, chosen by the user at runtime.
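Runtime fabric selection in the Intel MPI Library is driven by environment variables rather than recompilation; a sketch is shown below. I_MPI_FABRICS is the library's documented selection variable, but the fabric values and binary name here are illustrative, and the values accepted vary by library version.

```shell
# Illustrative sketch: pick the fabric at launch time, no rebuild needed.
# (./my_app is a placeholder for your MPI binary.)

# Shared memory within a node, libfabric (OFI) between nodes:
I_MPI_FABRICS=shm:ofi mpirun -n 64 ./my_app

# The same binary, forced onto the inter-node fabric only:
I_MPI_FABRICS=ofi mpirun -n 64 ./my_app
```

Because the fabric choice lives outside the executable, upgrading a cluster's interconnect typically means changing a job script, not the application.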
Open MPI | A High Performance Message Passing Library
The Open MPI Project is an open source Message Passing Interface (MPI) implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing (HPC) community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers, and computer science researchers.
High Performance and Widely Portable
MPICH and its derivatives form the most widely used implementations of MPI in the world. They are used exclusively on nine of the top 10 supercomputers (June 2015 ranking), including the world’s fastest supercomputer: Tianhe-2. The goals of MPICH are twofold: to provide an MPI implementation that efficiently supports different computation and communication platforms, including commodity clusters, high-speed networks, and proprietary high-end computing systems; and to enable cutting-edge research in MPI through an easy-to-extend modular framework for derived implementations.
Best Performance, Scalability and Fault Tolerance
MVAPICH2 is an open source implementation of the Message Passing Interface (MPI) that delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, 10GigE/iWARP, and RoCE networking technologies. MVAPICH2 simplifies the task of porting MPI applications to run on clusters with NVIDIA GPUs by supporting standard MPI calls on GPU device memory. It optimizes data movement between host and GPU, and between GPUs, while requiring minimal or no effort from the application developer.