GROMACS

GROMACS is a versatile package of molecular dynamics simulation programs. It is primarily designed for biochemical molecules, but it has also been used on non-biological systems. GROMACS generally scales well on OSC platforms. Starting with version 4.6, GROMACS includes GPU acceleration.

Availability and Restrictions

Versions

GROMACS is available on the Owens and Pitzer clusters. Both single and double precision executables are installed. The versions currently available at OSC are:

Version  Owens  Pitzer  Notes
5.1.2    SPC            Default version on Owens prior to 09/04/2018
2016.4   SPC
2018.2   SPC*   SPC*
* Current default version; S = serial single-node executables; P = parallel multinode executables; C = CUDA (GPU) executables

You can use module spider gromacs to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

GROMACS is available to all OSC users without restriction.

Publisher/Vendor/Repository and License Type

http://www.gromacs.org/ Open source

Usage 

Usage on Owens

Set-up

To load the module for the default version of GROMACS, which initializes your environment for the GROMACS application, use module load gromacs. To select a particular software version, use module load gromacs/version. For example, use module load gromacs/5.1.2 to load GROMACS version 5.1.2. Use module help gromacs/5.1.2 to view details such as compiler prerequisites, additional modules required for specific executables, and the suffixes of executables. Some versions require specific prerequisite modules; those details can be obtained with module spider gromacs/version.
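
For example, a session that loads a specific version and inspects it might look like the following (the version shown is just one of those listed above):

module load gromacs/2018.2     # load a specific version
module help gromacs/2018.2     # executable suffixes, prerequisites, etc.
module spider gromacs/2018.2   # prerequisite modules, if any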

Using GROMACS

To execute a serial GROMACS version 5 (or later) program interactively, simply run it on the command line, e.g.:

gmx pdb2gmx

Parallel multinode GROMACS version 5 (or later) programs should be run in a batch environment with mpiexec, e.g.:

mpiexec gmx_mpi_d mdrun

Note that '_mpi' indicates a parallel executable, '_d' indicates a program built with double precision, and '_gpu' denotes a GPU executable built with CUDA. See the module help output for specific versions for more details on executable naming conventions.
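
Combining these suffixes, a GROMACS module typically provides executables along the following lines (the exact set varies by version, so check the module help output for the module you load):

gmx         # serial, single precision
gmx_d       # serial, double precision
gmx_mpi     # parallel (MPI), single precision
gmx_mpi_d   # parallel (MPI), double precision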

Batch Usage

When you log into Owens, you are actually connected to a login node. To access the compute nodes, you must submit a job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session
For an interactive batch session on Owens, one can run the following command:
qsub -I -l nodes=1:ppn=28 -l walltime=1:00:00
which gives you 28 cores (-l nodes=1:ppn=28) with 1 hour (-l walltime=1:00:00). You may adjust the numbers per your need.
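
Once the interactive job starts on a compute node, GROMACS can be used directly on the command line, for example:

module load gromacs
gmx pdb2gmx -h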
Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial, CUDA (GPU), or parallel run. You can create the batch script using any text editor in a working directory on the system of your choice. Sample batch scripts and input files for all types of hardware resources are available here:

~srb/workshops/compchem/gromacs/
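
To experiment with these, you can copy them into a directory of your own first, for example (the destination name is arbitrary):

cp -r ~srb/workshops/compchem/gromacs gromacs-examples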

This simple batch script demonstrates some important points:

# GROMACS Tutorial for Solvation Study of Spider Toxin Peptide
# see fwspider_tutor.pdf
#PBS -N fwsinvacuo.owens
#PBS -l nodes=2:ppn=28
module load gromacs
# PBS_O_WORKDIR refers to the directory from which the job was submitted.
cd $PBS_O_WORKDIR
# Copy the input files to node-local scratch on every allocated node.
pbsdcp -p 1OMB.pdb em.mdp $TMPDIR
# Use TMPDIR for best performance.
cd $TMPDIR
# Generate a GROMACS topology and coordinate file from the PDB structure.
gmx pdb2gmx -ignh -ff gromos43a1 -f 1OMB.pdb -o fws.gro -p fws.top -water none
# Define a box around the molecule (writes out.gro by default), then center it.
gmx editconf -f fws.gro -d 0.7
gmx editconf -f out.gro -o fws_ctr.gro -center 2.0715 1.6745 1.914
# Preprocess the topology and parameters into a run input (.tpr) file.
gmx grompp -f em.mdp -c fws_ctr.gro -p fws.top -o fws_em.tpr
# Run the energy minimization in parallel across the allocated nodes.
mpiexec gmx_mpi mdrun -s fws_em.tpr -o fws_em.trr -c fws_ctr.gro -g em.log -e em.edr
cat em.log
# Copy results back to the submission directory.
cp -p * $PBS_O_WORKDIR/
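
Assuming the script above is saved as, say, fws_em.job, it can be submitted from the directory containing the input files (1OMB.pdb and em.mdp) with:

qsub fws_em.job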

Usage on Pitzer

Set-up

To load the module for the default version of GROMACS, which initializes your environment for the GROMACS application, use module load gromacs.

Using GROMACS

To execute a serial GROMACS version 5 (or later) program interactively, simply run it on the command line, e.g.:

gmx pdb2gmx

Parallel multinode GROMACS version 5 (or later) programs should be run in a batch environment with mpiexec, e.g.:

mpiexec gmx_mpi_d mdrun

Note that '_mpi' indicates a parallel executable, '_d' indicates a program built with double precision, and '_gpu' denotes a GPU executable built with CUDA. See the module help output for specific versions for more details on executable naming conventions.

Batch Usage

When you log into Pitzer, you are actually connected to a login node. To access the compute nodes, you must submit a job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session
For an interactive batch session on Pitzer, one can run the following command:
qsub -I -l nodes=1:ppn=40 -l walltime=1:00:00
which gives you 40 cores (-l nodes=1:ppn=40) with 1 hour (-l walltime=1:00:00). You may adjust the numbers per your need.
Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial, CUDA (GPU), or parallel run. You can create the batch script using any text editor in a working directory on the system of your choice. Sample batch scripts and input files for all types of hardware resources are available here:

~srb/workshops/compchem/gromacs/

This simple batch script demonstrates some important points:

# GROMACS Tutorial for Solvation Study of Spider Toxin Peptide
# see fwspider_tutor.pdf
#PBS -N fwsinvacuo.pitzer
#PBS -l nodes=2:ppn=40
module load gromacs
# PBS_O_WORKDIR refers to the directory from which the job was submitted.
cd $PBS_O_WORKDIR
# Copy the input files to node-local scratch on every allocated node.
pbsdcp -p 1OMB.pdb em.mdp $TMPDIR
# Use TMPDIR for best performance.
cd $TMPDIR
# Generate a GROMACS topology and coordinate file from the PDB structure.
gmx pdb2gmx -ignh -ff gromos43a1 -f 1OMB.pdb -o fws.gro -p fws.top -water none
# Define a box around the molecule (writes out.gro by default), then center it.
gmx editconf -f fws.gro -d 0.7
gmx editconf -f out.gro -o fws_ctr.gro -center 2.0715 1.6745 1.914
# Preprocess the topology and parameters into a run input (.tpr) file.
gmx grompp -f em.mdp -c fws_ctr.gro -p fws.top -o fws_em.tpr
# Run the energy minimization in parallel; -ntomp 1 uses one OpenMP thread per MPI rank.
mpiexec gmx_mpi mdrun -ntomp 1 -s fws_em.tpr -o fws_em.trr -c fws_ctr.gro -g em.log -e em.edr
cat em.log
# Copy results back to the submission directory.
cp -p * $PBS_O_WORKDIR/
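
The mdrun line above uses -ntomp 1, i.e. a pure-MPI layout with one OpenMP thread per rank across the allocated cores. If you want to experiment with a hybrid MPI/OpenMP layout instead, a sketch along the following lines may be used; the ranks-per-node flag and the 20x2 split per node are illustrative assumptions, so tune them (and check your MPI's mpiexec options) for your own runs:

export OMP_NUM_THREADS=2
mpiexec -ppn 20 gmx_mpi mdrun -ntomp 2 -s fws_em.tpr -o fws_em.trr -c fws_ctr.gro -g em.log -e em.edr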

Further Reading

GROMACS homepage and documentation: http://www.gromacs.org/