Shell and initialization
Your batch script executes in a shell on a compute node. The environment is identical to what you get when you connect to a login node, except that you have access to all the resources requested by your job. By default, the script is executed using the same shell that you get when you log in (bash, tcsh, etc.). The appropriate “dot-files” (e.g., .cshrc) will be executed, the same as when you log in. (For information on overriding the default shell, see the Job Scripts section.)
Execution begins in your home directory, regardless of what directory your script resides in or where you submitted the job from. You can use the cd command to change to a different directory. The environment variable $PBS_O_WORKDIR makes it easy to return to the directory from which you submitted the job:
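cd $PBS_O_WORKDIR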
There are dozens of software packages available on OSC’s systems, many of them with multiple versions. You control what software is available in your environment by loading the module for the software you need. Each module sets certain environment variables required by the software.
If you are running software that was installed by OSC, you should check the software documentation page to find out what modules to load.
Several modules are automatically loaded for you when you log in or start a batch script. These default modules include:
- modules required by the batch system
- the Intel compiler suite
- an MPI package compatible with the default compiler (for parallel computing)
The module command has a number of subcommands. The most useful of these are documented here. For more details, type “module help”.
Certain modules are incompatible with each other and should never be loaded at the same time. Examples are different versions of the same software or multiple installations of a library built with different compilers.
Note to those who build or install their own software: Be sure to load the same modules when you run your software that you had loaded when you built it, including the compiler module.
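For example, if you compiled your code after swapping to the pgi compiler, your job script should repeat that swap before running the program (a.out here is a placeholder for your own executable):
module swap intel pgi
./a.out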
Each module has both a name and a software version number. When more than one version is available for the same name, one of them is designated as the default. For example, the following modules are available for the Intel compilers on Oakley: (Note: The versions shown are out of date but the concept is the same.)
- intel/12.1.0 (default)
If you specify just the name, it refers to the default version or the currently loaded version, depending on the context. If you want a different version, you must give the entire string. Examples are given below.
You can have only one compiler module loaded at a time: intel, pgi, or gnu. The intel module is loaded initially; to change to pgi or gnu, do a module swap (see example below).
Some software libraries have multiple installations built for use with different compilers. The module system will load the one compatible with the compiler you have loaded. If you swap compilers, all the compiler-dependent modules will also be swapped.
Special note to gnu compiler users: While the gnu compilers are always in your path, you should load the gnu compiler module to ensure you are linking to the correct library versions.
To list the modules you have loaded:
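module list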
To see all modules that are compatible with your currently loaded modules:
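module avail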
To see compatible modules whose names start with fftw:
module avail fftw
To see all possible modules:
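module spider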
To see all possible modules whose names start with fftw:
module spider fftw
To load the fftw3 module that is compatible with your current compiler:
module load fftw3
To unload the fftw3 module:
module unload fftw3
To load the default version of the abaqus module (not compiler-dependent):
module load abaqus
To load a different version of the abaqus module:
module load abaqus/6.8-4
To unload whatever abaqus module you have loaded:
module unload abaqus
To swap the intel compilers for the pgi compilers (unloads intel, loads pgi):
module swap intel pgi
To swap the default version of the intel compilers for a different version:
module swap intel intel/12.1.4.211
To display help information for the mkl module:
module help mkl
To display the commands run by the mkl module:
module show mkl
To use a locally installed module, first import the module directory:
module use [/path/to/modulefiles]
And then load the module:
module load localmodule
PBS environment variables
Your batch execution environment has all the environment variables that your login environment has, plus several that are set by the batch system. This section gives examples of how to use some of them. For more information, see “man qsub”.
Several directories may be useful in your job.
The absolute path of the directory your job was submitted from is $PBS_O_WORKDIR. Recall that your job always starts in your home directory. To get back to your submission directory:
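cd $PBS_O_WORKDIR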
Each job has a temporary directory, $TMPDIR, on the local disk of each node assigned to it. Access to this directory is much faster than access to your home or project directory. The files in this directory are not visible from all the nodes in a parallel job; each node has its own directory. The batch system creates this directory when your job starts and deletes it when your job ends. To copy the file input.dat to $TMPDIR on your job's first node:
cp input.dat $TMPDIR
To copy the file input.dat to $TMPDIR on all of your job's nodes:
pbsdcp input.dat $TMPDIR
Each job has a temporary directory, $PFSDIR, on the parallel file system. This is a single directory shared by all the nodes a job is running on. Access is faster than access to your home or project directory but not as fast as $TMPDIR. The batch system creates this directory when your job starts and deletes it when your job ends. To copy the file output.dat from this directory to the directory you submitted your job from:
cp $PFSDIR/output.dat $PBS_O_WORKDIR
The $HOME environment variable refers to your home directory. It is not set by the batch system but is useful in some job scripts. It is better to use $HOME than to hardcode the path to your home directory. To access a file named input.dat in your home directory:
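cat $HOME/input.dat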
Several environment variables provide information about your job that may be useful.
A list of the nodes and cores assigned to your job is in the file $PBS_NODEFILE. To display this file:
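cat $PBS_NODEFILE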
For GPU jobs, a list of the GPUs assigned to your job is in the file $PBS_GPUFILE. To display this file:
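cat $PBS_GPUFILE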
If you use a job array, each job in the array gets its identifier within the array in the variable $PBS_ARRAYID. To pass a file name parameterized by the array ID into your application:
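./myprog input_${PBS_ARRAYID}.dat   # myprog is a placeholder for your own program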
To display the numeric job identifier assigned by the batch system:
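echo $PBS_JOBID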
To display the job name:
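echo $PBS_JOBNAME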
Use fast storage
If your job does a lot of file-based input and output, your choice of file system can make a huge difference in the performance of the job.
Shared file systems
Your home and project directories are located on shared file systems, providing long-term storage that is accessible from all OSC systems. Shared file systems are relatively slow. They cannot handle heavy loads such as those generated by large parallel jobs or many simultaneous serial jobs. You should minimize the I/O your jobs do on the shared file systems. It is usually best to copy your input data to fast temporary storage, run your program there, and copy your results back to your home or project directory.
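A minimal sketch of this copy-in, compute, copy-out pattern (input.dat, output.dat, and myprog are placeholders for your own files and program):
cd $TMPDIR
cp $PBS_O_WORKDIR/myprog $PBS_O_WORKDIR/input.dat .
./myprog input.dat > output.dat
cp output.dat $PBS_O_WORKDIR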
Batch-managed directories
Batch-managed directories are temporary directories that exist only for the duration of a job. They exist on two types of storage: disks local to the compute nodes and a parallel file system.
A big advantage of batch-managed directories is that the batch system deletes them when a job ends, preventing clutter on the disk.
A disadvantage of batch-managed directories is that you can't access them after your job ends. Be sure to include commands in your script to copy any files you need to long-term storage. To avoid losing your files if your job ends abnormally, for example by hitting its walltime limit, include a trap command in your script. (Note: the trap command does not work in csh and tcsh.) The following example creates a subdirectory in $PBS_O_WORKDIR and copies everything from $TMPDIR into it in case of abnormal termination.
trap "cd $PBS_O_WORKDIR;mkdir $PBS_JOBID;cp -R $TMPDIR/* $PBS_JOBID" TERM
If a node your job is running on crashes, the trap command may not be executed. It may be possible to recover your batch-managed directories in this case. Contact OSC Help for assistance.
Local disk space
The fastest storage is on a disk local to the node your job is running on, accessed through the environment variable $TMPDIR. The main drawback to local storage is that each node of a parallel job has its own directory and cannot access the files on other nodes. See also “Considerations for Parallel Jobs”.
Local disk space should be used only through the batch-managed directory created for your job. Please do not use /tmp directly, because your files won't be cleaned up properly.
Parallel file system
The parallel file system is faster than the shared file systems for large-scale I/O and can handle a much higher load. You should use it when your files must be accessible by all the nodes in your job and also when your files are too large for the local disk.
The parallel file system is efficient for reading and writing data in large blocks. It should not be used for I/O involving many small accesses.
The parallel file system is typically used through the batch-managed directory created for your job. The path for this directory is in the environment variable $PFSDIR. You may also create a directory for yourself in /fs/lustre and use it the way you would use any other directory. You should name the directory with either your user name or your project ID. This directory will not be backed up; files are subject to deletion after some number of months (see policies for details).
Note: You should not copy your executable files to $PFSDIR. They should be run from your home or project directories or from $TMPDIR.