Using Software on Pitzer RHEL 7

While OSC has upgraded the Pitzer cluster to RHEL 9, you may encounter difficulties when migrating jobs from RHEL 7 to the new system. To help you continue your research, we provide a containerized RHEL 7 environment on Pitzer RHEL 9. This container replicates the original RHEL 7 system and software environment used on Pitzer.

Note: This containerized RHEL 7 environment is a temporary solution and may be terminated at any time without prior notice.

Reusing Job Scripts

Assume you have an existing job script that previously worked on Pitzer RHEL 7 (e.g., my_rhel7_job.sh):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4

my_rhel7_program

To run this script within the RHEL 7 container on Pitzer RHEL 9, prepare a new job script that uses the container wrapper, such as my_rhel7_job_in_container.sh:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4

/apps/share/tools/rhel7_wrapper.sh ./my_rhel7_job.sh

Then submit the job with:

sbatch my_rhel7_job_in_container.sh

Note: If you need to compile software with Intel compilers older than version 19, which require a license, please contact oschelp@osc.edu. This includes the default Intel compiler module.

Running an MPI program

We have disabled Slurm support inside the container due to certain technical issues. Therefore, any Slurm-specific commands in your job script (such as srun or sbcast) will not work. You should replace them with alternatives such as mpirun/mpiexec and cp, respectively.
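For example, a file-staging step that previously used sbcast can be rewritten as a plain copy. A minimal sketch (the file name is only a placeholder; $TMPDIR refers to the job's node-local scratch directory if your scheduler sets it):

# Before (RHEL 7 job script): stage an input file with sbcast
# sbcast my_input.dat $TMPDIR/my_input.dat

# After (inside the container): use a regular copy instead
cp my_input.dat $TMPDIR/my_input.dat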

Please note that MVAPICH2 is built only with Slurm support, so there is no native mpirun/mpiexec command available for it inside the container. Instead, you can use Intel-MPI or OpenMPI, which provide their own mpiexec commands.
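If your original script loaded MVAPICH2, switch to one of these MPI stacks before launching your program. A minimal sketch, assuming typical OSC module names (run module avail inside the container to confirm what is installed):

# Replace the MVAPICH2 module with Intel-MPI (module names are assumptions)
module rm mvapich2
module load intelmpi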

Below are example replacements for srun:

# OpenMPI 
mpiexec --bind-to none <your_program>

# Intel-MPI
mpiexec -launcher ssh <your_program>

Note: Multi-node MPI jobs are not supported in this containerized environment.
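Putting this together, an inner job script adapted for the container might look like the following sketch (the module name and program name are assumptions; adjust them to match your build):

#!/bin/bash
# Adapted from my_rhel7_job.sh: the Slurm directives stay in the outer
# wrapper script (my_rhel7_job_in_container.sh), which requests the resources.

# Load an MPI stack that provides its own launcher inside the container
# (module name is an assumption; check module avail)
module load openmpi

# Use mpiexec in place of srun
mpiexec --bind-to none ./my_rhel7_program

Submit it through the wrapper job script exactly as shown in the Reusing Job Scripts section above.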

GPU support and extra bind path

If your job requires GPUs, append the --nv option:

/apps/share/tools/rhel7_wrapper.sh --nv ./my_rhel7_job.sh
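For instance, a wrapper job script for a single-GPU run might look like the sketch below (the GPU request uses standard Slurm syntax; adjust the counts to your needs):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=1

# --nv exposes the host GPU driver and devices inside the RHEL 7 container
/apps/share/tools/rhel7_wrapper.sh --nv ./my_rhel7_job.sh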

By default, you can access your own home directory. If you need access to another user's home directory that has been shared with you, use the --bind option:

/apps/share/tools/rhel7_wrapper.sh --bind /someone/else/home ./my_rhel7_job.sh

Working Interactively in the RHEL 7 Environment

In some cases, you may need to recompile your program within the RHEL 7 environment. You can either do this in a batch job using the wrapper script shown above, or launch an interactive container shell after starting an interactive job:

/apps/share/tools/rhel7_shell.sh
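If you are not already inside an interactive job, request one first. A minimal sketch using salloc (the account name and resource requests are placeholders; adjust them to your project):

# Request an interactive allocation on a compute node
salloc --nodes=1 --ntasks-per-node=4 --time=1:00:00 --account=PAS1234

# Once the allocation starts, launch the RHEL 7 container shell
/apps/share/tools/rhel7_shell.sh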

Once inside the container shell, initialize the RHEL 7 environment by running:

source /etc/profile.d/lmod.sh
source /etc/profile.d/z00_StdEnv.sh
module rm xalt

Or, as a shortcut:

. /apps/share/tools/init_rhel7.sh

You can verify that the RHEL 7 environment is properly set up by running:

module list
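From here you can rebuild your code against the RHEL 7 software stack; a small sketch (the compiler module and file names are only illustrative):

# Load a compiler from the RHEL 7 module tree (check module avail for versions)
module load gnu

# Recompile the program that your job script runs
gcc -O2 -o my_rhel7_program my_rhel7_program.c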

The options --nv and --bind are also available in the rhel7_shell.sh script.
