Intel MPI

Intel's implementation of the Message Passing Interface (MPI) library.

Availability and Restrictions

Intel MPI is available on the Oakley, Ruby, and Owens clusters. This library may be used as an alternative to, but not in conjunction with, the MVAPICH2 MPI libraries. The versions currently available at OSC are:

Version    Oakley  Ruby  Owens  Notes
4.0.3.008  X                    Default version on Oakley prior to 09/15/2015
4.1.0.024  X
4.1.1.036  X
4.1.2.040  X
4.1.3      X       X
4.1.3.049  X       X
5.0.1              X
5.0.1.035          X
5.0.3      X*      X*
5.1.3                    X*

* : Current default version

You can use module spider intelmpi to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

Intel MPI is available to all OSC users without restriction.

Usage

Usage on Oakley

Set-up on Oakley

To configure your environment for the default version of Intel MPI, use module load intelmpi. To configure your environment for a specific version of Intel MPI, use module load intelmpi/version. For example, use module load intelmpi/4.1.3.049 to load Intel MPI version 4.1.3.049 on Oakley.

You can use module spider intelmpi to view available modules on Oakley.

Note: This module conflicts with the default-loaded MVAPICH2 installation; Lmod will automatically replace it with the correct module when you use module load intelmpi.

Using Intel MPI

Software compiled against this module will use the Intel MPI libraries at runtime.
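If you want to confirm which MPI library a binary will actually load, a quick check with ldd works; my-impi-application here is simply the example program name used in the batch script later on this page:

# List the runtime library dependencies and filter for MPI entries;
# the reported paths should point into the Intel MPI installation.
ldd ./my-impi-application | grep -i mpi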

Building With Intel MPI

On Oakley, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.

Variable       Use
$MPI_CFLAGS    Use during your compilation step for C programs.
$MPI_CXXFLAGS  Use during your compilation step for C++ programs.
$MPI_FFLAGS    Use during your compilation step for Fortran programs.
$MPI_F90FLAGS  Use during your compilation step for Fortran 90 programs.
$MPI_LIBS      Use when linking your program to Intel MPI.

In general, for any application already set up to use mpicc (or similar), compilation should be fairly straightforward.
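As a minimal sketch, assuming the intelmpi module and the Intel compilers are loaded, and a hypothetical C source file hello.c:

# Using an MPI compiler wrapper, which supplies the MPI flags and libraries itself:
mpicc -o hello hello.c

# Or calling the Intel C compiler directly with the OSC-provided variables:
icc $MPI_CFLAGS -o hello hello.c $MPI_LIBS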

Batch Usage on Oakley

When you log into oakley.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more information. Batch jobs run on the compute nodes of the system, not on the login node, which makes them the right choice for large problems that need more resources.

Non-interactive Batch Job (Parallel Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Oakley:
#PBS -N MyIntelMPIJob
#PBS -l nodes=4:ppn=12
#PBS -l walltime=5:00:00
module swap mvapich2 intelmpi
cd $PBS_O_WORKDIR
mpiexec my-impi-application
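
Assuming the script above is saved as myjob.pbs (a filename chosen for illustration), it can be submitted with:

qsub myjob.pbs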

Usage on Ruby

Set-up on Ruby

To configure your environment for the default version of Intel MPI, use module load intelmpi. To configure your environment for a specific version of Intel MPI, use module load intelmpi/version. For example, use module load intelmpi/4.1.3.049 to load Intel MPI version 4.1.3.049 on Ruby.

You can use module spider intelmpi to view available modules on Ruby.

Note: This module conflicts with the default-loaded MVAPICH2 installation; Lmod will automatically replace it with the correct module when you use module load intelmpi.

Using Intel MPI

Software compiled against this module will use the Intel MPI libraries at runtime.

Building With Intel MPI

On Ruby, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.

Variable       Use
$MPI_CFLAGS    Use during your compilation step for C programs.
$MPI_CXXFLAGS  Use during your compilation step for C++ programs.
$MPI_FFLAGS    Use during your compilation step for Fortran programs.
$MPI_F90FLAGS  Use during your compilation step for Fortran 90 programs.
$MPI_LIBS      Use when linking your program to Intel MPI.

In general, for any application already set up to use mpicc (or similar), compilation should be fairly straightforward.
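A similar sketch for a Fortran program, assuming a hypothetical source file hello.f90; note that the exact wrapper name (mpif90 or mpiifort) depends on how the module is set up:

# Using the MPI Fortran wrapper:
mpif90 -o hello hello.f90

# Or calling the Intel Fortran compiler directly with the OSC-provided variables:
ifort $MPI_F90FLAGS -o hello hello.f90 $MPI_LIBS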

Batch Usage on Ruby

When you log into ruby.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more information.

Non-interactive Batch Job (Parallel Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Ruby:
#PBS -N MyIntelMPIJob
#PBS -l nodes=4:ppn=20
#PBS -l walltime=5:00:00
module swap mvapich2 intelmpi
cd $PBS_O_WORKDIR
mpiexec my-impi-application

Usage on Owens

Set-up on Owens

To configure your environment for the default version of Intel MPI, use module load intelmpi. To configure your environment for a specific version of Intel MPI, use module load intelmpi/version. For example, use module load intelmpi/5.1.3 to load Intel MPI version 5.1.3 on Owens.

You can use module spider intelmpi to view available modules on Owens.

Note: This module conflicts with the default-loaded MVAPICH2 installation; Lmod will automatically replace it with the correct module when you use module load intelmpi.

Using Intel MPI

Software compiled against this module will use the Intel MPI libraries at runtime.

Building With Intel MPI

On Owens, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.

Variable       Use
$MPI_CFLAGS    Use during your compilation step for C programs.
$MPI_CXXFLAGS  Use during your compilation step for C++ programs.
$MPI_FFLAGS    Use during your compilation step for Fortran programs.
$MPI_F90FLAGS  Use during your compilation step for Fortran 90 programs.
$MPI_LIBS      Use when linking your program to Intel MPI.

In general, for any application already set up to use mpicc (or similar), compilation should be fairly straightforward.
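A corresponding C++ sketch, assuming a hypothetical source file hello.cpp:

# Using the MPI C++ wrapper:
mpicxx -o hello hello.cpp

# Or calling the Intel C++ compiler directly with the OSC-provided variables:
icpc $MPI_CXXFLAGS -o hello hello.cpp $MPI_LIBS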

Batch Usage on Owens

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more information.

Non-interactive Batch Job (Parallel Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Owens:
#PBS -N MyIntelMPIJob
#PBS -l nodes=4:ppn=28
#PBS -l walltime=5:00:00
module swap mvapich2 intelmpi
cd $PBS_O_WORKDIR
mpiexec my-impi-application