Environment changes in Slurm migration

As we migrate to Slurm from Torque/Moab, there will be necessary software environment changes.

Decommissioning old MVAPICH2 versions

Old MVAPICH2 versions, including mvapich2/2.1, mvapich2/2.2, and their variants, are near the end of their life span and do not support Slurm well, so we will remove the following versions (a quick way to check your own job scripts follows the list):

  • mvapich2/2.1
  • mvapich2/2.2, 2.2rc1, 2.2ddn1.3, 2.2ddn1.4, 2.2-debug, 2.2-gpu
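
If you are not sure whether your job scripts still load one of these versions, a simple search over your script directory can help; ~/jobs below is an illustrative path:

grep -rE 'mvapich2/2\.(1|2)' ~/jobs    # flags any script still loading mvapich2/2.1 or a 2.2 variant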

As a result, the following dependent software will no longer be available.

Unavailable software    Possible replacement
amber/16                amber/18
darshan/3.1.4           darshan/3.1.6
darshan/3.1.5-pre1      darshan/3.1.6
expresso/5.2.1          expresso/6.3
expresso/6.1            expresso/6.3
expresso/6.1.2          expresso/6.3
fftw3/3.3.4             fftw3/3.3.5
gamess/18Aug2016R1      gamess/30Sep2019R2
gromacs/2016.4          gromacs/2018.2
gromacs/5.1.2           gromacs/2018.2
lammps/14May16          lammps/16Mar18
lammps/31Mar17          lammps/16Mar18
mumps/5.0.2             N/A (no current users)
namd/2.11               namd/2.13
nwchem/6.6              nwchem/6.8
pnetcdf/1.7.0           pnetcdf/1.10.0
siesta-par/4.0          siesta-par/4.0.2

If you use any of the software listed above, we strongly recommend testing it during the early user period. The possible replacements listed are the versions closest to the unavailable ones; however, if possible, we recommend using the most recent version available. You can find the available versions with module spider {software name}. If you have any questions, please contact OSC Help.
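
For example, to find which GROMACS versions are installed (gromacs here is purely an illustration), run:

module spider gromacs            # list all installed GROMACS versions
module spider gromacs/2018.2     # show how to load this specific version, including its compiler and MPI dependencies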

Miscellaneous cleanup on MPIs

We are cleaning up miscellaneous MPI versions for which a newer, compatible version is available. Because a compatible version exists, you should be able to switch your applications to it without issues.

Removed MPI versions                                   Compatible MPI versions
mvapich2/2.3b, 2.3rc1, 2.3rc2, 2.3                     mvapich2/2.3.3
mvapich2/2.3b-gpu, 2.3rc1-gpu, 2.3rc2-gpu, 2.3-gpu,    mvapich2-gdr/2.3.4
  2.3.1-gpu; mvapich2-gdr/2.3.1, 2.3.2, 2.3.3
openmpi/1.10.5, openmpi/1.10                           openmpi/1.10.7, openmpi/1.10.7-hpcx
openmpi/2.0, openmpi/2.0.3, openmpi/2.1.2              openmpi/2.1.6, openmpi/2.1.6-hpcx
openmpi/4.0.2, openmpi/4.0.2-hpcx                      openmpi/4.0.3, openmpi/4.0.3-hpcx
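
As a sketch of what switching looks like in a job script, assuming a hypothetical MPI application my_app.c:

module load openmpi/4.0.3      # compatible replacement for the removed openmpi/4.0.2
mpicc -O2 -o my_app my_app.c   # rebuild against the loaded MPI (my_app.c is hypothetical)
srun ./my_app                  # launch under Slurm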

Software flag usage update for Licensed Software

We have software flags that are required in job scripts for licensed software, such as ansys, abaqus, or schrodinger. With the Slurm migration, we updated the syntax and added extra software flags. It is very important that everyone follows the procedure below: if you do not use the software flags properly, jobs submitted by others can be affected.

We require software flags only for software and software features in high demand, in order to prevent job failures due to insufficient licenses. When you use a software flag, Slurm records it against its license pool, so that other jobs launch only when enough licenses are available. This functions correctly only when everyone uses the software flags.

During the early user period, until Dec 15, 2020, the software flag system may not work correctly because licenses will be drawn from two separate Owens systems during the test period. However, we recommend testing your job scripts with the new software flags so that they work without issues after the Slurm migration.

The new syntax for software flags is

#SBATCH -L {software flag}@osc:N

where N is the number of licenses requested. If you need more than one software flag, you can use

#SBATCH -L {software flag1}@osc:N,{software flag2}@osc:M

For example, if you need 2 abaqus and 2 abaqusextended license features, you can use

#SBATCH -L abaqus@osc:2,abaqusextended@osc:2
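
Put together, a minimal job script using these flags might look like the sketch below; the job name, resource requests, and the input file my_model.inp are hypothetical:

#!/bin/bash
#SBATCH --job-name=abaqus_test                  # hypothetical job name
#SBATCH --nodes=1
#SBATCH --time=01:00:00
#SBATCH -L abaqus@osc:2,abaqusextended@osc:2    # request the license features

module load abaqus
abaqus job=my_model input=my_model.inp interactive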

The full list of software and their associated software flags is in the table below.

Software             Software flags
abaqus               abaqus, abaquscae
ansys                ansys, ansyspar
comsol               comsolscript
schrodinger          epik, glide, ligprep, macromodel, qikprop
starccm              starccm, starccmpar
stata                stata
usearch              usearch
ls-dyna, mpp-dyna    lsdyna
 
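
Once jobs request licenses this way, you can check the counts Slurm is tracking with standard Slurm tooling; the exact license names shown depend on the site configuration:

scontrol show licenses    # shows Total, Used, and Free counts for each tracked license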