DELIVER FLEXIBLE, EFFICIENT AND SCALABLE CLUSTER MESSAGING
MPI, an acronym for Message Passing Interface, is a library specification for parallel computing that lets processes exchange data across the nodes of a cluster. Today, MPI is the most common communication protocol used in high performance computing (HPC). MPI has three major goals: portability, scalability, and high performance. It can run on almost every distributed architecture, large or small, and each operation is optimized for the specific hardware on which it runs, giving you the greatest speed available. Taken together, it is easy to see why MPI is the communication protocol of choice in HPC. Below are some of the MPI implementations most frequently requested by our customers. The engineers at Aspen Systems have experience using all of them and more! If you are interested in something you do not see here, or if you would like more information on any of these software solutions, please feel free to reach out to our team.
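To give a flavor of the programming model, here is a minimal sketch of MPI point-to-point communication in C. It should build against any of the implementations described below; the file name and launch commands in the comment are illustrative.

    /* hello_mpi.c: a minimal sketch of MPI point-to-point messaging.
     * Illustrative build/launch with any MPI implementation's wrappers:
     *   mpicc hello_mpi.c -o hello_mpi
     *   mpirun -np 2 ./hello_mpi                                        */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */

        if (rank == 0) {
            value = 42;                         /* rank 0 sends one int... */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {                 /* ...and rank 1 receives it */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }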
INTEL MPI LIBRARY
The Intel MPI Library helps applications perform better on Intel architecture-based clusters by implementing the MPI-3.1 standard over multiple fabrics. It enables you to quickly deliver maximum end-user performance, even when you change or upgrade to new interconnects, without requiring changes to the software or operating environment. Use this high-performance message-passing library to develop applications that can run over multiple cluster interconnects chosen by the user at runtime.
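As a small illustration of that runtime flexibility, the sketch below asks the library to report its own version string, confirming which MPI build a binary is actually running against. The launch lines in the comment assume Intel MPI's compiler wrapper and its I_MPI_FABRICS environment variable, which selects the fabric at launch time without recompiling; the binary name is illustrative.

    /* which_mpi.c: report the MPI library a binary is linked against.
     * Illustrative Intel MPI build/launch, selecting the fabric at runtime:
     *   mpiicc which_mpi.c -o which_mpi
     *   I_MPI_FABRICS=shm:ofi mpirun -np 4 ./which_mpi                  */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        char version[MPI_MAX_LIBRARY_VERSION_STRING];
        int len, rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Get_library_version(version, &len); /* an MPI-3 routine */
        if (rank == 0)
            printf("Running against: %s\n", version);
        MPI_Finalize();
        return 0;
    }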
OPEN MPI | A HIGH PERFORMANCE MESSAGE PASSING LIBRARY
The Open MPI Project is an open source Message Passing Interface (MPI) implementation developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine expertise, technologies, and resources from across the HPC community to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers, and computer science researchers.
MPICH | HIGH PERFORMANCE AND WIDELY PORTABLE
MPICH and its derivatives form the most widely used implementations of MPI in the world. They are used exclusively on nine of the top 10 supercomputers (June 2015 ranking), including the world’s fastest supercomputer, Tianhe-2. The goals of MPICH are twofold: to provide an MPI implementation that efficiently supports different computation and communication platforms, including commodity clusters, high-speed networks, and proprietary high-end computing systems; and to enable cutting-edge research in MPI through an easy-to-extend modular framework for derived implementations.
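As a small example of that portability, the sketch below uses a standard collective operation, which MPICH maps onto whatever platform it was built for, from a commodity cluster to a proprietary high-end system; the file and binary names are illustrative.

    /* reduce.c: a portable collective that runs unchanged anywhere MPICH does.
     * Illustrative build/launch:
     *   mpicc reduce.c -o reduce
     *   mpirun -np 8 ./reduce                                           */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank contributes its rank number; rank 0 collects the total. */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, sum);

        MPI_Finalize();
        return 0;
    }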
MVAPICH2 | BEST PERFORMANCE, SCALABILITY AND FAULT TOLERANCE
MVAPICH2 is an open source MPI implementation that delivers the best performance, scalability, and fault tolerance for high-end computing systems and servers using InfiniBand, 10GigE/iWARP, and RoCE networking technologies. MVAPICH2 also simplifies porting MPI applications to clusters with NVIDIA GPUs by supporting standard MPI calls on buffers in GPU device memory. It optimizes data movement between host and GPU, and between GPUs, while requiring little or no effort from the application developer.
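To show what supporting standard MPI calls on GPU device memory means in practice, here is a hedged sketch: with a CUDA-enabled MVAPICH2 build (for example, one configured with --enable-cuda), a device pointer can be passed directly to MPI_Send and MPI_Recv, and the library handles the host/GPU data movement itself. The file name, buffer size, and build details are illustrative assumptions.

    /* gpu_msg.c: CUDA-aware MPI sketch; a device buffer goes straight to MPI.
     * Assumes a CUDA-enabled MVAPICH2 build; illustrative compile line:
     *   mpicc gpu_msg.c -o gpu_msg -lcudart   (with CUDA paths set)     */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv) {
        int rank;
        double *d_buf;                      /* buffer in GPU device memory */
        const int n = 1 << 20;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        cudaMalloc((void **)&d_buf, n * sizeof(double));
        cudaMemset(d_buf, 0, n * sizeof(double));  /* illustrative payload */

        if (rank == 0) {
            /* The device pointer is passed directly; a CUDA-aware MPI moves
             * the data without an explicit cudaMemcpy in application code. */
            MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }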