MPI is a standard library for performing parallel processing using a distributed-memory model. The Owens, Pitzer, Ascend, and Cardinal clusters at OSC can use the OpenMPI implementation of the Message Passing Interface (MPI).
Availability and Restrictions
Versions
Installations are available for the Intel, PGI, and GNU compilers. The following versions of OpenMPI are available on OSC systems:
Version | Owens | Pitzer | Ascend | Cardinal | Notes
---|---|---|---|---|---
1.10.7-hpcx | X | X | | |
1.10.7 | X | X | | |
2.1.6-hpcx | X | X | | |
2.1.6 | X | X | | |
3.1.4-hpcx | X | X | | |
3.1.4 | X | X | | |
3.1.6-hpcx | X | X | | |
3.1.6 | X | | | | HPC-X version**
4.0.3-hpcx | X* | X* | | |
4.0.3 | X | X | | |
4.0.7-hpcx | X | | | |
4.1.2-hpcx | X | X | | |
4.1.3 | | | X* | | HPC-X version**
4.1.4-hpcx | X | X | | |
4.1.5/4.1.5-hpcx | X | X | X | | HPC-X version**
4.1.6 | X | | | |
5.0.2 | X | | | |
5.0.2-hpcx | X | | | |
** The HPC-X versions are OpenMPI built with communication libraries from NVIDIA HPC-X for optimized performance.
You can use `module spider openmpi` to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
Access
OpenMPI is available to all OSC users. If you have any questions, please contact OSC Help.
Publisher/Vendor/Repository and License Type
https://www.open-mpi.org, Open source
Usage
Setup on OSC Clusters
To set up your environment for using the MPI libraries, you must load the appropriate module. On any OSC system, this is performed by:
module load openmpi
You will get the default version for the compiler you have loaded.
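For example, to use OpenMPI with the GNU compilers (the module names below are illustrative; check `module avail` for the exact names on your cluster):

```
# Load a compiler module first, then the matching OpenMPI build
module load gnu        # or intel / pgi
module load openmpi    # default OpenMPI version for the loaded compiler
module list            # verify what is loaded
```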
Building With MPI
To build a program that uses MPI, you should use the compiler wrappers provided on the system. They accept the same options as the underlying compiler. The commands are shown in the following table:
Language | Command
---|---
C | mpicc
C++ | mpicxx
Fortran 77 | mpif77
Fortran 90 | mpif90
For example, to build the code my_prog.c using the -O2 option, you would use:
mpicc -o my_prog -O2 my_prog.c
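If you want to see exactly what a wrapper invokes, OpenMPI's wrapper compilers accept a --showme option that prints the underlying compiler command and flags without compiling anything:

```
# Show the underlying compiler command and the flags added by the wrapper
mpicc --showme
mpif90 --showme
```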
In rare cases, you may be unable to use the wrappers. In that case, you should use the environment variables set by the module.
Variable | Use
---|---
$MPI_CFLAGS | Use during your compilation step for C programs.
$MPI_CXXFLAGS | Use during your compilation step for C++ programs.
$MPI_FFLAGS | Use during your compilation step for Fortran 77 programs.
$MPI_F90FLAGS | Use during your compilation step for Fortran 90 programs.
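As a sketch, a compile-only step without the wrapper might look like the following; it assumes $MPI_CFLAGS supplies the include paths needed at compile time (linking typically requires additional library flags not listed above):

```
# Compile (but do not link) a C source file using the module-provided flags
gcc -c $MPI_CFLAGS -O2 my_prog.c
```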
Batch Usage
Programs built with MPI can only run in the batch environment at OSC. For information on starting MPI programs with the `srun` command, see Job Scripts.
Be sure to load the same compiler and OpenMPI modules at execution time as at build time.
Run an MPI program
SRUN
We recommend the `srun` command as the default MPI launcher. Please refer to the Pitzer Programming Environment or Owens Programming Environment pages for details.
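A minimal job script might look like the sketch below; the project code, node and task counts, and module names are placeholders that you should adjust for your allocation and cluster:

```
#!/bin/bash
#SBATCH --job-name=mpi_test
#SBATCH --account=PAS1234        # replace with your project code
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40     # set to the cores per node on your cluster
#SBATCH --time=00:10:00

# Load the same compiler and OpenMPI modules used at build time
module load gnu
module load openmpi

# srun launches one MPI rank per task in the allocation
srun ./my_prog
```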
Known Issues
OpenMPI-HPCX 4.1.x hangs on writing files on a shared file system
Update: 03/06/2024
Version: All 4.1.x-hpcx versions
A job using openmpi/4.1.x-hpcx (or 4.1.x on Ascend) might hang while writing files on a shared file system. This is caused by a bug involving the default OMPIO I/O module and the UCX library. We have identified ORCA as one affected application. If you are experiencing this issue, please consider the following solutions:
- Change the I/O module to ROMIO by adding `export OMPI_MCA_io=romio321` to your job script (see the sketch below).
- Switch to OpenMPI 5. You can check for available OpenMPI 5 modules via `module spider openmpi/5`.
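As a sketch, the ROMIO workaround in a job script might look like this (the module version shown is just one of the affected builds):

```
module load openmpi/4.1.4-hpcx   # whichever affected 4.1.x-hpcx build you use
export OMPI_MCA_io=romio321      # select ROMIO instead of the default OMPIO
srun ./my_prog
```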
The use of MPI_THREAD_MULTIPLE with OpenMPI-HPCX 4.x is not supported
Update: 7/10/2023
Version: [Owens] openmpi/4.0.3-hpcx, openmpi/4.1.2-hpcx, openmpi/4.1.4-hpcx
[Ascend] openmpi/4.1.3
If a threaded code calls MPI_Init_thread with MPI_THREAD_MULTIPLE, it will fail because the UCX framework from the HPC-X package is built without multi-threading support. UCX is the default framework for OpenMPI 4.0 and above.
If you encounter this issue, you can now use `openmpi/4.0.7-hpcx` and `openmpi/4.1.5-hpcx` on Owens, and `openmpi/4.1.5` on Ascend. These versions are built with multi-threaded UCX.
Cannot use mpiexec/mpirun from OpenMPI in an interactive session
Update: 2/22/2022
Version: All
The `mpiexec` and `mpirun` commands are not part of the MPI standard and may differ slightly between MPI implementations. On February 22, 2022, OSC upgraded Slurm to version 21.08.5, and we discovered additional issues with `mpiexec` and `mpirun`. Therefore, we recommend using `srun` in all cases.
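For example, where an older job script used mpiexec, the equivalent launch inside a Slurm job is typically just:

```
# Instead of: mpiexec -n 4 ./my_prog
srun ./my_prog    # srun takes the task count from the job allocation
```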
If you need to use `mpiexec` and your job fails, please contact OSC Help for assistance.
Further Reading
- The Message Passing Interface (MPI) standard, http://www.mcs.anl.gov/research/projects/mpi/
- MPI Training Course