On September 22nd, OSC switched to Slurm for job scheduling and resource management on the Pitzer Cluster, along with the deployment of the new Pitzer hardware. We are in the process of updating the example job scripts for each software package. If a Slurm example is not yet available, please consult our general Slurm information page or contact OSC Help.

Gaussian

Gaussian is the most popular general-purpose electronic structure program. Recent versions can perform density functional theory, Hartree-Fock, Møller-Plesset, coupled-cluster, and configuration interaction calculations, among others. Geometry optimizations, vibrational frequencies, magnetic properties, and solution modeling are available. It performs well as black-box software on closed-shell ground-state systems.

Availability and Restrictions

Versions

Gaussian is available on the Pitzer and Owens clusters. These versions are currently available at OSC (S means single-node serial/parallel and C means CUDA, i.e., GPU enabled):

Version  Owens  Pitzer
g09e01   S
g16a03   S      S
g16b01   SC     S
g16c01   SC*    SC*

* Current default version

You can use module spider gaussian to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of Gaussian for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Publisher/Vendor/Repository and License Type

Gaussian, commercial

Usage

Usage on Owens

Set-up on Owens

To load the default version of the Gaussian module, which initializes your environment for Gaussian, use module load gaussian. To select a particular software version, use module load gaussian/version. For example, use module load gaussian/g09e01 to load Gaussian version g09e01 on Owens.

Using Gaussian

To execute Gaussian, simply run the Gaussian binary (g16 or g09) with the input file on the command line:

g16 < input.com

When the input file is redirected as above ( < ), the output goes to standard output. In this form the output can be inspected with viewers or editors while the job is running in a batch queue, because the batch output file, which captures standard output, is written to the directory from which the job was submitted. Alternatively, Gaussian can be invoked without file redirection:

g16 input.com

in which case the output file will be named 'input.log' and written to whatever the working directory was when the command started. In this form the output may not be accessible while the job is running in a batch queue, for example if the working directory was the node-local $TMPDIR.

Batch Usage on Owens

When you log into owens.osc.edu you are logged into a login node. To gain access to the multiple processors in the computing environment, you must submit your computations to the batch system for execution. Batch jobs can request multiple processors and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more information.

Interactive Batch Session
For an interactive batch session on Owens, one can run the following command:
qsub -I -l nodes=1:ppn=28 -l walltime=1:00:00
which gives you 28 cores (-l nodes=1:ppn=28) with 1 hour (-l walltime=1:00:00). You may adjust the numbers per your need.
Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and Gaussian input files are available here:

/users/appl/srb/workshops/compchem/gaussian/

This simple batch script demonstrates the important points:

#PBS -N GaussianJob
#PBS -l nodes=1:ppn=28
#PBS -l walltime=1:00:00
# PBS_O_WORKDIR refers to the directory from which the job was submitted.
cd $PBS_O_WORKDIR
# Copy the input to node-local scratch; use TMPDIR for best performance.
cp input.com $TMPDIR
cd $TMPDIR
module load gaussian
g16 < input.com
# Copy the checkpoint file back to the submission directory.
cp -p *.chk $PBS_O_WORKDIR
Note: OSC does not have a functional distributed parallel version (LINDA) of Gaussian. Parallelism of Gaussian at OSC is only via shared memory. Consequently, do not request more than one node for Gaussian jobs on OSC's clusters.
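Since parallelism is shared-memory only, the core count is set inside the Gaussian input file with Link 0 commands. A minimal sketch for one 28-core Owens node, reusing the methane geometry shown later on this page; the method, memory amount, and checkpoint file name are illustrative, and %NProcShared should match the cores requested in your batch script:

```
%NProcShared=28
%Mem=8GB
%Chk=methane.chk
#P B3LYP/6-31G(d) Opt

methane optimization, shared-memory parallel

0,1
C        0.000000        0.000000        0.000000
H        0.000000        0.000000        1.089000
H        1.026719        0.000000       -0.363000
H       -0.513360       -0.889165       -0.363000
H       -0.513360        0.889165       -0.363000
```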

Usage on Pitzer

Set-up on Pitzer

To load the default version of the Gaussian module, which initializes your environment for Gaussian, use module load gaussian. To select a particular software version, use module load gaussian/version.

Using Gaussian

To execute Gaussian, simply run the Gaussian binary (g16 or g09) with the input file on the command line:

g16 < input.com

When the input file is redirected as above ( < ), the output goes to standard output. In this form the output can be inspected with viewers or editors while the job is running in a batch queue, because the batch output file, which captures standard output, is written to the directory from which the job was submitted. Alternatively, Gaussian can be invoked without file redirection:

g16 input.com

in which case the output file will be named 'input.log' and written to whatever the working directory was when the command started. In this form the output may not be accessible while the job is running in a batch queue, for example if the working directory was the node-local $TMPDIR.

Batch Usage on Pitzer

When you log into pitzer.osc.edu you are logged into a login node. To gain access to the multiple processors in the computing environment, you must submit your computations to the batch system for execution. Batch jobs can request multiple processors and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more information.

Interactive Batch Session
For an interactive batch session on Pitzer, one can run the following command:
sinteractive -N 1 -n 40 -t 1:00:00
which gives you 40 cores (-n 40) with 1 hour (-t 1:00:00). You may adjust the numbers per your need.
Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and Gaussian input files are available here:

/users/appl/srb/workshops/compchem/gaussian/

This simple batch script demonstrates the important points:

#!/bin/bash
#SBATCH --job-name=GaussianJob
#SBATCH --nodes=1 --ntasks-per-node=40
#SBATCH --time=1:00:00
# SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
cd $SLURM_SUBMIT_DIR
# Copy the input to node-local scratch; use TMPDIR for best performance.
cp input.com $TMPDIR
cd $TMPDIR
module load gaussian
g16 input.com
# Copy the output and checkpoint files back to the submission directory.
cp -p input.log *.chk $SLURM_SUBMIT_DIR

Running Gaussian jobs with GPU

Gaussian jobs can utilize the P100 GPUs of Owens. GPUs are not helpful for small jobs but are effective for larger molecules when computing DFT energies, gradients, and frequencies (for both ground and excited states). They are not used effectively by post-SCF calculations such as MP2 or CCSD.

In the sample input files below, the %CPU directive lists the CPU cores the job will use, and %GPUCPU=0=0 assigns GPU 0 to CPU core 0, which acts as that GPU's controlling core.

A sample batch script for GPU on Owens is as follows:

#PBS -S /bin/tcsh
#PBS -N methane 
#PBS -o methane.log
#PBS -l nodes=1:ppn=28:gpus=1:default
set echo
cd $TMPDIR
set INPUT=methane.com
# PBS_O_WORKDIR refers to the directory from which the job was submitted.
cp $PBS_O_WORKDIR/$INPUT .
module load gaussian/g16b01
g16 < ./$INPUT
ls -al
cp -p *.chk $PBS_O_WORKDIR

 

A sample input file for GPU on Owens is as follows:

%nproc=28
%mem=8gb
%CPU=0-27
%GPUCPU=0=0
%chk=methane.chk
#b3lyp/6-31G(d) opt

methane B3LYP/6-31G(d) opt freq

0,1
C        0.000000        0.000000        0.000000
H        0.000000        0.000000        1.089000
H        1.026719        0.000000       -0.363000
H       -0.513360       -0.889165       -0.363000
H       -0.513360        0.889165       -0.363000

A sample batch script for GPU on Pitzer is as follows:

#!/bin/tcsh
#SBATCH --job-name=methane
#SBATCH --output=methane.log
#SBATCH --nodes=1 --ntasks-per-node=48 --gpus=1 --gpu_cmode=shared
set echo
cd $TMPDIR
set INPUT=methane.com
# SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
cp $SLURM_SUBMIT_DIR/$INPUT .
module load gaussian/g16b01
g16 < ./$INPUT
ls -al
cp -p *.chk $SLURM_SUBMIT_DIR

 

A sample input file for GPU on Pitzer is as follows:

%nproc=48
%mem=8gb
%CPU=0-47
%GPUCPU=0=0
%chk=methane.chk
#b3lyp/6-31G(d) opt

methane B3LYP/6-31G(d) opt freq

0,1
C        0.000000        0.000000        0.000000
H        0.000000        0.000000        1.089000
H        1.026719        0.000000       -0.363000
H       -0.513360       -0.889165       -0.363000
H       -0.513360        0.889165       -0.363000
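On nodes that have more than one GPU, the %GPUCPU directive takes a list of GPUs and a matching list of their controlling CPU cores, with one dedicated core per GPU drawn from the %CPU list. A sketch of the Link 0 section for a hypothetical two-GPU node; the indices are illustrative and should be adjusted to the GPUs and cores actually allocated to the job:

```
%mem=8gb
%CPU=0-47
%GPUCPU=0,1=0,1
%chk=methane.chk
```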

 

Further Reading

Gaussian home page: https://gaussian.com/