MPI is a standard library for performing parallel processing using a distributed memory model. The Oakley, Ruby, and Owens clusters at OSC can use the OpenMPI implementation of the Message Passing Interface (MPI).
Availability & Restrictions
OpenMPI is available without restriction to all OSC users.
Installations are available for the Intel, PGI, and GNU compilers.
The following versions of OpenMPI are available on OSC systems:
Setup on OSC Clusters
To set up your environment for using the MPI libraries, you must load the appropriate module. On any OSC system, this is performed by:
module load openmpi
You will get the default version for the compiler you have loaded.
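For example, to use OpenMPI with a particular compiler, load that compiler's module first. The module names below are illustrative; run module avail to see what is actually installed on your cluster:

```shell
# Illustrative module names -- run `module avail openmpi` to see
# the versions installed on your cluster.
module load intel        # or pgi, or gnu
module load openmpi      # default OpenMPI build for that compiler

# Verify what is loaded and which wrappers are now on the PATH
module list
which mpicc
```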
Building With MPI
To build a program that uses MPI, you should use the compiler wrappers provided on the system. They accept the same options as the underlying compiler. The commands are shown in the following table:
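The commands below use Open MPI's standard wrapper names; these should match what the modules provide, but you can confirm with module show openmpi:

```shell
mpicc   my_prog.c    # C
mpicxx  my_prog.cpp  # C++
mpif77  my_prog.f    # Fortran 77
mpif90  my_prog.f90  # Fortran 90
```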
For example, to build the code my_prog.c using the -O2 option, you would use:
mpicc -o my_prog -O2 my_prog.c
In rare cases, you may be unable to use the wrappers. In that case, you should use the environment variables set by the module.
|Environment Variable|Function|
|---|---|
| |Use during your compilation step for C programs.|
| |Use during your compilation step for C++ programs.|
| |Use during your compilation step for Fortran 77 programs.|
| |Use during your compilation step for Fortran 90 programs.|
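If you must invoke the underlying compiler directly, the required flags come from variables set by the openmpi module. The variable names in the sketch below ($MPI_CFLAGS, $MPI_LIBS) are hypothetical placeholders for illustration only; run module show openmpi to discover the actual names on your system:

```shell
# Inspect the module to discover the real variable names:
module show openmpi

# Hypothetical example of compiling and linking without the wrappers:
icc -c $MPI_CFLAGS my_prog.c
icc -o my_prog my_prog.o $MPI_LIBS
```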
Programs built with MPI can only run in the batch environment at OSC. For information on starting MPI programs using the command mpiexec, see Job Scripts.
Be sure to load the same compiler and OpenMPI modules at execution time as at build time.
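A minimal job script might look like the sketch below. It assumes a PBS-style scheduler and uses illustrative resource values and module names; adjust them to match your cluster and the Job Scripts documentation:

```shell
#PBS -N my_mpi_job
#PBS -l nodes=2:ppn=28        # illustrative: 2 nodes, 28 cores each
#PBS -l walltime=00:10:00

cd $PBS_O_WORKDIR

# Load the same modules used at build time
module load intel
module load openmpi

# mpiexec launches one process per requested core by default
mpiexec ./my_prog
```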
OpenMPI uses a different mpiexec implementation than other MPI libraries available at OSC. Basic functionality is the same, but some options and defaults differ. To see all available options, use mpiexec -h (with the openmpi module loaded) or see the Open MPI documentation.
Two differences in particular are worth noting.
By default, mpiexec spawns one MPI process per CPU core requested in the batch job. To specify the number of processes per node, use the -npernode procs option. For one process per node, use either -npernode 1 or -pernode.
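For example (assuming a program named my_prog):

```shell
mpiexec ./my_prog               # default: one process per requested core
mpiexec -npernode 4 ./my_prog   # four processes on each node
mpiexec -npernode 1 ./my_prog   # one process on each node
```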
If you run a hybrid OpenMPI/OpenMP program, you should turn off binding with --bind-to none. Otherwise you may find your program using only half the available cores, with multiple threads bound to the same cores while the others sit idle.
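A hybrid launch might look like the sketch below; the thread count and program name are illustrative:

```shell
# One MPI process per node; OpenMP threads use that node's cores.
export OMP_NUM_THREADS=28              # illustrative: cores per node
mpiexec -npernode 1 --bind-to none ./my_hybrid_prog
```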
Further Reading
- The Message Passing Interface (MPI) standard, http://www.mcs.anl.gov/research/projects/mpi/
- MPI Training Course