Intel MPI is Intel's implementation of the Message Passing Interface (MPI) library. See Intel Compilers for the compiler versions available at OSC.
Availability and Restrictions
Versions
Intel MPI may be used as an alternative to, but not in conjunction with, the MVAPICH MPI libraries. The versions currently available at OSC are:
Version | Pitzer | Ascend | Cardinal |
---|---|---|---|
2017.4 | X | | |
2018.3 | X | | |
2018.4 | X | | |
2019.3 | X | | |
2019.7 | X* | | |
2021.3 | X | | |
2021.4.0 | | | |
2021.5 | X | | |
2021.10.0 | X | X | |
2021.10 | X | | |
2021.11.0 | X | | |
2021.11 | X | | |
2021.12.1 | X | X | |
2021.14.2 | X | | |
* Current Default Version
You can use module spider intelmpi to view available modules on Pitzer, or module spider intel-oneapi-mpi on Cardinal or Ascend. Feel free to contact OSC Help if you need other versions for your work.
Access
Intel MPI is available to all OSC users. If you have any questions, please contact OSC Help.
Publisher/Vendor/Repository and License Type
Intel, Commercial
Usage
Usage on Pitzer
Set-up on Pitzer
To configure your environment for the default version of Intel MPI, use module load intelmpi.
Note: This module conflicts with the default-loaded MVAPICH installations; Lmod will automatically replace them with Intel MPI when you use module load intelmpi.
Using Intel MPI
Software compiled against this module will use the libraries at runtime.
Building With Intel MPI
On Pitzer, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.
VARIABLE | USE |
---|---|
$MPI_CFLAGS | Use during your compilation step for C programs. |
$MPI_CXXFLAGS | Use during your compilation step for C++ programs. |
$MPI_FFLAGS | Use during your compilation step for Fortran programs. |
$MPI_F90FLAGS | Use during your compilation step for Fortran 90 programs. |
$MPI_LIBS | Use when linking your program to Intel MPI. |
In general, for any application already set up to use mpicc, compilation should be fairly straightforward.
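As a minimal sketch, here is how compiling and linking might look for a hypothetical C source file, hello.c, using the variables above; the module names, version defaults, and file name are illustrative, so check module avail or module spider on Pitzer for what is actually installed:

```bash
# Load an Intel compiler and Intel MPI (module names are examples;
# verify with "module spider" on Pitzer)
module load intel
module load intelmpi

# Compile and link a hypothetical source file, hello.c, using the
# OSC-provided environment variables
icc $MPI_CFLAGS -c hello.c
icc -o hello hello.o $MPI_LIBS

# Alternatively, the mpicc wrapper supplies the MPI include and
# library flags itself
mpicc -o hello hello.c
```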
Batch Usage on Pitzer
When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more information.
Non-interactive Batch Job (Parallel Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Pitzer:

```bash
#!/bin/bash
#SBATCH --job-name MyIntelMPIJob
#SBATCH --nodes=2 --ntasks-per-node=48
#SBATCH --time=5:00:00
#SBATCH --account=<project-account>

module load intelmpi
srun my-impi-application
```
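Assuming the script above is saved under an illustrative name such as myjob.sh, it can be submitted and monitored with the usual Slurm commands:

```bash
# Submit the batch script (file name is an example)
sbatch myjob.sh

# Check the status of your jobs in the queue
squeue -u $USER
```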
Usage on Ascend
Set-up on Ascend
To configure your environment for the default version of Intel MPI, first use module spider intel-oneapi-mpi to check which module(s) must be loaded first. Use module load [module name and version] to load those prerequisites, then use module load intel-oneapi-mpi/[version] to load Intel MPI. A sketch of this sequence is shown after the note below.
Note: This module conflicts with the default-loaded MVAPICH installations; Lmod will automatically replace them with Intel MPI when you use module load intel-oneapi-mpi.
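As a sketch of that sequence, assuming version 2021.10.0 and whatever prerequisite module spider reports (the compiler module name below is only a placeholder):

```bash
# See which modules must be loaded before intel-oneapi-mpi/2021.10.0
module spider intel-oneapi-mpi/2021.10.0

# Load the prerequisite(s) reported above (placeholder name), then Intel MPI
module load intel-oneapi-compilers
module load intel-oneapi-mpi/2021.10.0

# Confirm what is loaded
module list
```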
Using Intel MPI
Software compiled against this module will use the libraries at runtime.
Building With Intel MPI
On Ascend, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.
VARIABLE | USE |
---|---|
$MPI_CFLAGS | Use during your compilation step for C programs. |
$MPI_CXXFLAGS | Use during your compilation step for C++ programs. |
$MPI_FFLAGS | Use during your compilation step for Fortran programs. |
$MPI_F90FLAGS | Use during your compilation step for Fortran 90 programs. |
$MPI_LIBS | Use when linking your program to Intel MPI. |
In general, for any application already set up to use mpicc, compilation should be fairly straightforward.
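As a complement to the C example in the Pitzer section, here is a minimal Fortran sketch using the same variables on Ascend; the source file name, compiler, and wrapper choice are illustrative, and the Intel MPI module is assumed to be loaded already:

```bash
# Compile and link a hypothetical Fortran 90 source file, hello.f90,
# using the OSC-provided environment variables
ifort $MPI_F90FLAGS -c hello.f90
ifort -o hello hello.o $MPI_LIBS

# Or let the MPI Fortran wrapper supply the flags
mpif90 -o hello hello.f90
```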
Batch Usage on Ascend
When you log into ascend.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more information.
Non-interactive Batch Job (Parallel Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Ascend:

```bash
#!/bin/bash
#SBATCH --job-name MyIntelMPIJob
#SBATCH --nodes=2 --ntasks-per-node=48
#SBATCH --time=5:00:00
#SBATCH --account=<project-account>

module load intel-oneapi-mpi/2021.10.0
srun my-impi-application
```
Known Issues
Further Reading
- Intel MPI page at Intel.com
- Vendor documentation