OpenMPI

MPI is a standard library interface for writing parallel programs that use a distributed-memory model. The Oakley, Ruby, and Owens clusters at OSC can use the OpenMPI implementation of the Message Passing Interface (MPI).

Availability & Restrictions

OpenMPI is available without restriction to all OSC users.

Installations are available for the Intel, PGI, and GNU compilers.

The following versions of OpenMPI are available on OSC systems:

Version     Oakley  Ruby  Owens  Compilers Supported
1.10        X*      X     X      gnu/4.8.5, gnu/5.2.0 (oakley), intel/15.0.3 (oakley, ruby), intel/16.0.3 (oakley, owens), pgi/15.4.0 (oakley, ruby), pgi/16.5.0 (oakley, owens)
1.10-hpcx                 X*     gnu/4.8.5, intel/16.0.3
1.10.7                    X      gnu/4.8.5, intel/16.0.3, pgi/16.5.0
1.4.5       X                    intel/15.0.3
2.0                 X*    X      gnu/4.8.5, gnu/6.3.0, intel/16.0.3, pgi/16.5.0
2.0-hpcx            X     X      gnu/4.8.5, gnu/6.3.0
2.0.3                     X      gnu/4.8.5, intel/16.0.3
2.1.2                     X      gnu/4.8.5, intel/16.0.3

* Current default version

Usage

Setup on OSC Clusters

To set up your environment for using the MPI libraries, you must load the appropriate module. On any OSC system, this is performed by:

module load openmpi

You will get the default version for the compiler you have loaded.
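
For example, to use a specific combination from the table above (here the GNU 6.3.0 compiler with OpenMPI 2.0, as an illustration; use module avail openmpi to see what is installed on your cluster), load the compiler module first and then the matching OpenMPI module:

module load gnu/6.3.0
module load openmpi/2.0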

Building With MPI

To build a program that uses MPI, you should use the compiler wrappers provided on the system. They accept the same options as the underlying compiler. The commands are shown in the following table:

Language    Command
C           mpicc
C++         mpicxx
FORTRAN 77  mpif77
Fortran 90  mpif90

For example, to build the code my_prog.c using the -O2 option, you would use:

mpicc -o my_prog -O2 my_prog.c
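
Similarly, a Fortran 90 source file (a hypothetical my_prog.f90) would be compiled and linked with the corresponding wrapper:

mpif90 -o my_prog -O2 my_prog.f90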

In rare cases, you may be unable to use the wrappers. In that case, you should use the environment variables set by the module.

Variable Use
$MPI_CFLAGS Use during your compilation step for C programs.
$MPI_CXXFLAGS Use during your compilation step for C++ programs.
$MPI_FFLAGS Use during your compilation step for Fortran 77 programs.
$MPI_F90FLAGS Use during your compilation step for Fortran 90 programs.
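
As a sketch of the compile-only step for a C program using the Intel compiler directly (my_prog.c is a hypothetical file name; the link step additionally needs the MPI library flags, which you can inspect with mpicc --showme):

icc $MPI_CFLAGS -c my_prog.c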

Batch Usage

Programs built with MPI can be run only in the batch environment at OSC. For information on starting MPI programs with the mpiexec command, see Job Scripts.

Be sure to load the same compiler and OpenMPI modules at execution time as at build time.
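
The following is a minimal job-script sketch, assuming a PBS-style batch system and the gnu/6.3.0 + openmpi/2.0 modules used in the build examples above; the ppn value (28 here) is an assumption and should match the core count per node of the cluster you are running on:

#PBS -N my_prog
#PBS -l nodes=2:ppn=28
#PBS -l walltime=00:30:00

cd $PBS_O_WORKDIR
module load gnu/6.3.0
module load openmpi/2.0
mpiexec ./my_prog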

MPIEXEC

The mpiexec and mpirun commands are not part of the MPI standard and differ slightly between MPI implementations.

OpenMPI uses a different mpiexec implementation than other MPI libraries available at OSC. Basic functionality is the same but some options and defaults differ. To see all available options use mpiexec -h (with the openmpi module loaded) or see Open MPI Documentation.

Two differences in particular are worth noting.

By default, mpiexec spawns one MPI process per CPU core requested in the batch job. To specify the number of processes per node, use the -npernode procs option. For one process per node, use either -npernode 1 or -pernode.
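
For example, in a multi-node job the following commands would launch eight processes per node and one process per node, respectively (my_prog is a hypothetical executable):

mpiexec -npernode 8 ./my_prog
mpiexec -pernode ./my_prog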

If you run a hybrid OpenMPI/OpenMP program, you should turn off binding with --bind-to none. Otherwise you may find your program using only half the available cores, leaving the others idle. This happens in particular with -npernode 1.
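
For example, a hybrid run with one MPI process per node and binding disabled might look like the following sketch, where the OpenMP thread count (28 here, an assumption) is set to the number of cores per node:

export OMP_NUM_THREADS=28
mpiexec -npernode 1 --bind-to none ./my_hybrid_prog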

Further Reading

Open MPI home page and documentation: https://www.open-mpi.org/
