Sunday, April 15, 2012

MPICH2 and Open MPI

I previously wrote posts about MPI, more specifically about mpi4py, which requires an MPI distribution in order to work properly. For that I used Open MPI, but MPICH2 was also an alternative, so in this post I'll cover both.

MPICH

MPICH2 is a high-performance and widely portable implementation of the Message Passing Interface (MPI), a message-passing standard for distributed-memory applications used in parallel computing. It implements both the MPI-1 and MPI-2 standards.


The CH part of the name was derived from "Chameleon", which was a portable parallel programming library developed by William Gropp, one of the founders of MPICH.

The original implementation of MPICH (MPICH1) implements the MPI-1.1 standard. The latest implementation (MPICH2) implements the MPI-2.2 standard.

MPICH2 replaces MPICH1 and should be used instead of it, except on clusters with heterogeneous data representations (e.g., different lengths for integers or different byte ordering), which MPICH2 does not yet support. MPICH2 is distributed as source with an open-source, freely available license. It has been tested on several platforms, including Linux (on IA32 and x86-64), Mac OS X (PowerPC and Intel), Solaris (32- and 64-bit), and Windows.

MPICH2 is one of the most popular implementations of MPI. It is used as the foundation for the vast majority of MPI implementations including IBM MPI (for Blue Gene), Intel MPI, Cray MPI, Microsoft MPI, Myricom MPI, OSU MVAPICH/MVAPICH2, and many others.


The goal of MPICH2 is to provide an MPI implementation for important platforms, including clusters, SMPs, and massively parallel processors. It also provides a vehicle for MPI implementation research and for developing new and better parallel programming environments.

Installing MPICH2:

If you wish to install MPICH/MPICH2 yourself, download the source code from here. Just unpack the software, change into the resulting directory, and type:

./configure
make
make install

Compiling MPICH2 application programs:

To compile a C source file, type:
mpicc -g -o binary_file_name source_file.c 

For example, for the program PrimePipe.c included in the example directory, make an executable prp this way:
mpicc -g -o prp PrimePipe.c
(If you wish to use C++, use mpicxx instead of mpicc.)
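To make the compile step concrete, here is a minimal MPI "hello world" in C — my own sketch, not a file shipped with MPICH2 — in which each process reports its rank:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}

Saved as hello.c (a name chosen just for this example), it compiles with mpicc -g -o hello hello.c, exactly like PrimePipe.c above.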

Running MPICH2 application programs:

Set up a hosts file, listing which machines you wish your MPI app to run on, e.g. hosts3:


pc28.cs.ucdavis.edu
pc29.cs.ucdavis.edu
pc30.cs.ucdavis.edu

To run, say, the above executable prp on the machines in that hosts file, type:


mpiexec -f hosts3 -n 3 prp 100 0

where 100 and 0 are the command-line arguments to prp.
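Inside such a program, the command-line arguments (here 100 and 0) arrive through argc/argv as usual after MPI_Init, and the processes started by mpiexec cooperate by exchanging explicit messages. The following toy sketch — my own illustration, not the actual PrimePipe.c — shows rank 0 forwarding its first command-line argument to rank 1:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);                /* command-line args pass through */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = (argc > 1) ? atoi(argv[1]) : 0;   /* e.g. the "100" above */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

This needs at least two processes to run, e.g. mpiexec -f hosts3 -n 2 toy 100 (with toy as the executable name for this example).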


Open MPI


The Open MPI Project is an open source MPI-2 implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers.


Open MPI represents the merger between three well-known MPI implementations:
  • FT-MPI from the University of Tennessee
  • LA-MPI from Los Alamos National Laboratory
  • LAM/MPI from Indiana University
with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the Open MPI development team.
These MPI implementations were selected because the Open MPI developers thought that they excelled in one or more areas. The stated driving motivation behind Open MPI is to bring the best ideas and technologies from the individual projects and create one world-class open source MPI implementation that excels in all areas. The Open MPI project names several top-level goals:
  • Create a free, open source software, peer-reviewed, production-quality complete MPI-2 implementation.
  • Provide extremely high, competitive performance (low latency or high bandwidth).
  • Directly involve the high-performance computing community with external development and feedback (vendors, 3rd party researchers, users, etc.).
  • Provide a stable platform for 3rd party research and commercial development.
  • Help prevent the "forking problem" common to other MPI projects.
  • Support a wide variety of high-performance computing platforms and environments.

