NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD generally scales well on OSC platforms and offers a variety of modeling techniques. It is file-compatible with AMBER, CHARMM, and X-PLOR.

Availability and Restrictions

Versions

The following versions of NAMD are available:

Version  Owens  Pitzer
2.11     X
2.12     X
2.13b2          X
2.13     X*     X*
* Current default version
* IMPORTANT NOTE: You need to load the correct compiler and MPI modules before you use NAMD. To find out which modules you need, use module spider namd/{version}.

You can use  module spider namd  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
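For example, to list the available NAMD modules and then see the compiler and MPI prerequisites for a specific version (2.13 is used here purely as an illustration):

# List available NAMD modules, then show the prerequisites for one version
module spider namd
module spider namd/2.13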

Access

NAMD is available to all OSC users for academic purposes.

Publisher/Vendor/Repository and License Type

TCBG, University of Illinois / Open source (academic)

Usage

Set-up

To load the NAMD software on the system, use the command module load namd/version, where version is the version of NAMD you require. To load the default version of NAMD, use module load namd.
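For example, the following loads NAMD 2.13 together with the compiler and MPI modules used in the batch scripts below; verify the exact prerequisites for your version with module spider first:

# Load the prerequisite compiler and MPI modules, then NAMD itself
module load intel/18.0.4
module load mvapich2/2.3.6
module load namd/2.13
module list    # confirm which modules are loaded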

Using NAMD

NAMD is rarely executed interactively because preparation for simulations is typically performed with external tools, such as VMD.

Batch Usage

Sample batch scripts and input files are available here:

~srb/workshops/compchem/namd/
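For example, you can copy the samples into a directory of your own and work from there (the destination name here is arbitrary):

# Copy the sample batch scripts and inputs to a directory you own
cp -r ~srb/workshops/compchem/namd ~/namd-examples
cd ~/namd-examples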

The simple batch script for Owens below demonstrates some important points. It requests 56 processors (2 nodes with 28 tasks per node) and 2 hours of walltime. If the job runs longer than 2 hours, it will be terminated.

#!/bin/bash
#SBATCH --job-name apoa1 
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH --time=2:00:00
#SBATCH --account=<project-account>

module load intel/18.0.4
module load mvapich2/2.3.6
module load namd
# SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
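# the following loop assumes you have the necessary .namd, .pdb, .psf, and .xplor files
# in the directory you are submitting the job from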
for FILE in *
do
    sbcast -p $FILE $TMPDIR/$FILE
done
# Use TMPDIR for best performance.
cd $TMPDIR
run_namd apoa1.namd
sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output

Or equivalently, on Pitzer:

#!/bin/bash
#SBATCH --job-name apoa1
#SBATCH --nodes=2 --ntasks-per-node=48
#SBATCH --time=2:00:00
#SBATCH --account=<project-account>

module load intel/18.0.4
module load mvapich2/2.3.6
module load namd
# SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
# the following loop assumes you have the necessary .namd, .pdb, .psf, and .xplor files
# in the directory you are submitting the job from 
for FILE in *
do
    sbcast -p $FILE $TMPDIR/$FILE
done
# Use TMPDIR for best performance.
cd $TMPDIR
run_namd apoa1.namd
sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output

NOTE: --ntasks-per-node should be at most 28 on Owens and at most 48 on Pitzer.
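Once you have saved one of these scripts to a file (the file name below is just an example), submit it with sbatch and check its status with squeue:

# Submit the batch script and monitor the job queue
sbatch namd_apoa1.sh
squeue -u $USER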

GPU support

GPU support is available with NAMD 2.12 on the Owens cluster. This installation temporarily uses pre-compiled binaries due to installation issues. For more details, please see the corresponding example script:

~srb/workshops/compchem/namd/apoa1.namd212nativecuda.owens.pbs  # for Owens
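If you want a rough starting point before consulting that script, the sketch below shows what a single-node GPU batch job on Owens might look like. The GPU request line, the module version, and the namd2 launch line are assumptions for illustration only; take the actual resource request and launch command from the example script above.

#!/bin/bash
#SBATCH --job-name apoa1-gpu
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --gpus-per-node=1    # assumed GPU request; confirm against current OSC batch documentation
#SBATCH --time=2:00:00
#SBATCH --account=<project-account>

# prerequisite compiler/MPI modules may be required; check module spider namd/2.12
module load namd/2.12

# Stage inputs to TMPDIR for best performance, as in the CPU scripts above
for FILE in *
do
    sbcast -p $FILE $TMPDIR/$FILE
done
cd $TMPDIR
# Hypothetical launch line for a pre-compiled CUDA binary; copy the real command
# from ~srb/workshops/compchem/namd/apoa1.namd212nativecuda.owens.pbs
namd2 +p$SLURM_NTASKS apoa1.namd > apoa1.log
sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output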

Further Reading

NAMD home page: http://www.ks.uiuc.edu/Research/namd/
