MVAPICH2 is an implementation of the MPI (Message Passing Interface) standard for parallel processing using a distributed-memory model.

Availability and Restrictions


The following versions of MVAPICH2 are available on OSC systems:

Version  Owens  Pitzer  Ascend
2.3      X      X
2.3.1    X      X
2.3.2    X      X
2.3.3    X*     X*
2.3.5    X      X
2.3.6    X      X       X
2.3.7                   X*
* Current default version

You can use module spider mvapich2 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.


MVAPICH2 is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Network-Based Computing Laboratory (NBCL), The Ohio State University / Open source

Usage

To set up your environment for using the MPI libraries, you must load the appropriate module:

module load mvapich2

You will get the default version for the compiler you have loaded.

Note: If you are using the GNU compilers, be sure to swap the intel compiler module for the gnu module.
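
For example, a minimal sketch of switching to the GNU compiler stack before loading MVAPICH2 (assuming the intel and gnu module names used above):

module swap intel gnu

module load mvapich2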

Building With MPI

To build a program that uses MPI, you should use the compiler wrappers provided on the system. They accept the same options as the underlying compiler. The commands are shown in the following table.

Language    Command
C           mpicc
C++         mpicxx
FORTRAN 77  mpif77
Fortran 90  mpif90

For example, to build the code my_prog.c using the -O2 option, you would use:

mpicc -o my_prog -O2 my_prog.c
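
If you do not yet have a test code, the following is a minimal sketch of what a hypothetical my_prog.c might contain; it prints each process's rank and the total number of processes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    /* Initialize the MPI runtime */
    MPI_Init(&argc, &argv);

    /* Query this process's rank and the total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    /* Shut down the MPI runtime */
    MPI_Finalize();
    return 0;
}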

In rare cases you may be unable to use the wrappers. In that case you should use the environment variables set by the module.

Variable        Use
$MPI_CFLAGS     Use during your compilation step for C programs.
$MPI_CXXFLAGS   Use during your compilation step for C++ programs.
$MPI_FFLAGS     Use during your compilation step for Fortran 77 programs.
$MPI_F90FLAGS   Use during your compilation step for Fortran 90 programs.
$MPI_LIBS       Use when linking your program to the MPI libraries.

For example, to build the code my_prog.c without using the wrappers (shown here with the Intel C compiler; substitute gcc if you are using the gnu module), you would use:

icc -c $MPI_CFLAGS my_prog.c

icc -o my_prog my_prog.o $MPI_LIBS

Batch Usage

Programs built with MPI can only be run in the batch environment at OSC. For information on starting MPI programs using the srun or mpiexec command, see Batch Processing at OSC.

Be sure to load the same compiler and mvapich2 modules at execution time as at build time.
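
As an illustration, a minimal sketch of a Slurm job script follows; the account name PAS1234 and the node and task counts are placeholders you would adjust for your allocation and cluster:

#!/bin/bash
#SBATCH --job-name=my_prog
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=28
#SBATCH --time=00:10:00
#SBATCH --account=PAS1234

# Load the same compiler and MPI modules used at build time
module load intel
module load mvapich2

# Launch one MPI process per allocated task
srun ./my_prog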

Known Issues

Large MPI job startup failure

Updated: Nov 2019
Versions Affected: mvapich2/2.3 & mvapich2/2.3.1
We have found that large MPI jobs may hang at startup with mvapich2/2.3 and mvapich2/2.3.1 (with any compiler dependency) due to a known bug that was fixed in release 2.3.2. If you experience this issue, please switch to mvapich2/2.3.2.

Further Reading

MVAPICH home page: http://mvapich.cse.ohio-state.edu/