On September 22nd, OSC switched to Slurm for job scheduling and resource management on the Pitzer Cluster, along with the deployment of the new Pitzer hardware. We are in the process of updating the example job scripts for each software package. If a Slurm example is not yet available, please consult our general Slurm information page or contact OSC Help.

Intel MPI

Intel's implementation of the Message Passing Interface (MPI) library. See Intel Compilers for available compiler versions at OSC.

Availability and Restrictions

Versions

Intel MPI may be used as an alternative to, but not in conjunction with, the MVAPICH2 MPI libraries. The versions currently available at OSC are:

Version   Owens   Pitzer
5.1.3     X
2017.2    X
2017.4    X       X
2018.0    X
2018.3    X       X
2018.4            X
2019.3    X       X
2019.7    X*      X*

* Current Default Version

You can use module spider intelmpi to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

Intel MPI is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Intel, Commercial

Known Software Issues

A partial-node MPI job may fail to start using mpiexec

Update: October 2020
Versions: 2019.3, 2019.7

A partial-node MPI job may fail to start using mpiexec from intelmpi/2019.3 and intelmpi/2019.7 with error messages like

[mpiexec@o0439.ten.osc.edu] wait_proxies_to_terminate (../../../../../src/pm/i_hydra/mpiexec/intel/i_mpiexec.c:532): downstream from host o0439 was killed by signal 11 (Segmentation fault)
[mpiexec@o0439.ten.osc.edu] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:2114): assert (exitcodes != NULL) failed
/var/spool/torque/mom_priv/jobs/11510761.owens-batch.ten.osc.edu.SC: line 30: 11728 Segmentation fault  
/var/spool/slurmd/job00884/slurm_script: line 24:  3180 Segmentation fault      (core dumped)

If you are using SLURM, make sure the job requests its CPU allocation using #SBATCH --ntasks=N (see the example below) instead of

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=N
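
For example, a minimal partial-node job script using this approach might look like the following (a sketch; my-impi-application stands in for your own executable):

#SBATCH --job-name=partial-node-test
#SBATCH --ntasks=4
#SBATCH --time=0:30:00

module load intelmpi
mpiexec my-impi-application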

If you are using PBS, please use Intel MPI 2018 or intelmpi/2019.3 together with the libfabric/1.8.1 module.
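
For example, a PBS job script could set up its environment like this (a sketch; the exact module load order may differ):

module swap mvapich2 intelmpi/2019.3
module load libfabric/1.8.1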

Using mpiexec with SLURM

Update: October 2020
Versions: 2017.x, 2018.x, 2019.x
Intel MPI on the SLURM batch system is configured to support the PMI and Hydra process managers. We recommend using srun as the MPI program launcher. If you prefer to use mpiexec with SLURM, you might experience an MPI initialization error or see a warning:
MPI startup(): Warning: I_MPI_PMI_LIBRARY will be ignored since the hydra process manager was found
To resolve the issue, run unset I_MPI_PMI_LIBRARY in your batch job script before launching MPI programs.
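
For example, a SLURM batch script that uses mpiexec could include the following (a sketch; my-impi-application stands in for your own executable):

module load intelmpi
unset I_MPI_PMI_LIBRARY
mpiexec my-impi-application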

MPI-IO issues on home directories

Update: May 2020
Version: 2019.3
Certain MPI-IO operations with intelmpi/2019.3 may crash, fail, or proceed with errors on home directories. We do not expect the same issue on our GPFS file systems, such as the project and scratch spaces. The problem might be related to a known issue reported by the HDF5 group. Please read the section "Problem Reading A Collectively Written Dataset in Parallel" from HDF5 Known Issues for more detail.

Usage

Usage on Owens

Set-up on Owens

To configure your environment for the default version of Intel MPI, use module load intelmpi. To configure your environment for a specific version of Intel MPI, use module load intelmpi/version. For example, use module load intelmpi/5.1.3 to load Intel MPI version 5.1.3 on Owens.

You can use module spider intelmpi to view available modules on Owens.
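
For example (a sketch of the set-up commands only; they do not run any program):

module spider intelmpi      # list the Intel MPI modules available on Owens
module load intelmpi/5.1.3  # load a specific version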

Note: This module conflicts with the default-loaded MVAPICH2 installation, and Lmod will automatically replace it with the correct one when you run module load intelmpi.

Using Intel MPI

Software compiled against this module will use the libraries at runtime.

Building With Intel MPI

On Owens, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.

VARIABLE        USE
$MPI_CFLAGS     Use during your compilation step for C programs.
$MPI_CXXFLAGS   Use during your compilation step for C++ programs.
$MPI_FFLAGS     Use during your compilation step for Fortran programs.
$MPI_F90FLAGS   Use during your compilation step for Fortran 90 programs.
$MPI_LIBS       Use when linking your program to Intel MPI.

In general, for any application already set up to use mpicc (or similar), compilation should be fairly straightforward.
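
For example, a C program could be built against Intel MPI as follows (a sketch; hello.c and the use of the Intel compiler icc are placeholders, not part of the OSC documentation):

icc $MPI_CFLAGS -c hello.c       # compilation step, using the OSC-provided C flags
icc -o hello hello.o $MPI_LIBS   # link step against the Intel MPI libraries

Alternatively, the mpicc wrapper handles the include and link options itself:

mpicc -o hello hello.c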

Batch Usage on Owens

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Non-interactive Batch Job (Parallel Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Owens:
#PBS -N MyIntelMPIJob
#PBS -l nodes=4:ppn=28
#PBS -l walltime=5:00:00
module swap mvapich2 intelmpi
cd $PBS_O_WORKDIR
mpiexec my-impi-application

Usage on Pitzer

Set-up on Pitzer

To configure your environment for the default version of Intel MPI, use module load intelmpi.
Note: This module conflicts with the default-loaded MVAPICH2 installation, and Lmod will automatically replace it with the correct one when you run module load intelmpi.

Using Intel MPI

Software compiled against this module will use the libraries at runtime.

Building With Intel MPI

On Pitzer, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.

VARIABLE        USE
$MPI_CFLAGS     Use during your compilation step for C programs.
$MPI_CXXFLAGS   Use during your compilation step for C++ programs.
$MPI_FFLAGS     Use during your compilation step for Fortran programs.
$MPI_F90FLAGS   Use during your compilation step for Fortran 90 programs.
$MPI_LIBS       Use when linking your program to Intel MPI.

In general, for any application already set up to use mpicc (or similar), compilation should be fairly straightforward.
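
For example, a Fortran program could be built in the same way (a sketch; hello.f90 and the Intel compiler ifort are placeholders, not part of the OSC documentation):

ifort $MPI_FFLAGS -c hello.f90     # compilation step, using the OSC-provided Fortran flags
ifort -o hello hello.o $MPI_LIBS   # link step against the Intel MPI libraries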

Batch Usage on Pitzer

When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Non-interactive Batch Job (Parallel Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Pitzer:
#SBATCH --job-name MyIntelMPIJob
#SBATCH --nodes=2
#SBATCH --time=5:00:00

module load intelmpi
srun my-impi-application
