LAMMPS

The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code designed for high-performance simulation of large atomistic systems. LAMMPS generally scales well on OSC platforms, provides a variety of modeling techniques, and offers GPU-accelerated computation.

Availability and Restrictions

LAMMPS is available on the Glenn, Oakley, and Ruby clusters. The versions currently available at OSC are:

Version  Glenn  Oakley  Ruby  Notes
Oct06    X
Jul07    X
Jan08    X
Apr08    X
Sep09    X*
Mar10    X
Jun10    X
Aug10    X
Oct10    X
Mar11    X
Jun11    X
Jan12    X
Feb12           X             Default version on Oakley prior to 09/15/2015
May12           X
Feb14           X
5Sep14          X*      X*
7Dec15   X      X       X

*: Current default version

You can use module avail lammps to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
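A typical module session for checking and loading LAMMPS might look like the following sketch (the exact module names and output vary by cluster):

```shell
# List all LAMMPS builds available on this cluster
module avail lammps

# Load the default build, then confirm it is active
module load lammps
module list
```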

Access

LAMMPS is available to all OSC users without restriction.

Usage

Usage on Glenn

Set-up on Glenn

To load the default version of the LAMMPS module and set up your environment, use module load lammps. To select a particular software version, use module load lammps-version. For example, use module load lammps-2Jun10 to load LAMMPS version Jun10 on Glenn.

Using LAMMPS

Once a module is loaded, LAMMPS can be run with the following command:
lammps < input.file

To see information on the packages and executables for a particular installation, run the module help command, for example on Glenn:

module help lammps

Batch Usage on Glenn

When you log into glenn.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time, up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node; batch execution is preferable for large problems, since more resources can be used.
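The submit-and-monitor workflow described above can be sketched with the basic Torque commands (the script name job.pbs and the job ID are placeholders for illustration):

```shell
# Submit the job script; qsub prints a job ID on success
qsub job.pbs

# Check the status of your queued and running jobs
qstat -u $USER

# Cancel a job if necessary (use the ID reported by qsub)
qdel 123456
```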

Interactive Batch Session

For an interactive batch session one can run the following command:

qsub -I -l nodes=1:ppn=8 -l walltime=00:20:00

which requests one whole node with 8 cores ( -l nodes=1:ppn=8 ) and a walltime of 20 minutes ( -l walltime=00:20:00 ). You may adjust the numbers per your need.

Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

/nfs/10/srb/workshops/compchem/lammps/

Below is a sample batch script for the Glenn Cluster. It asks for 16 processors (2 nodes with 8 cores each) and 10 hours of walltime. If the job exceeds 10 hours, it will be terminated.

#PBS -N chain
#PBS -l nodes=2:ppn=8
#PBS -l walltime=10:00:00
#PBS -S /bin/bash
#PBS -j oe
module load lammps
cd $PBS_O_WORKDIR
# Copy the input file to fast local scratch space on all nodes
pbsdcp chain.in $TMPDIR
cd $TMPDIR
# Run LAMMPS, reading commands from chain.in
lammps < chain.in
# Gather results from all nodes back to the submit directory
pbsdcp -g '*' $PBS_O_WORKDIR
Non-interactive Batch Job (GPU usage)

LAMMPS can run on GPUs on Glenn. This example shows how to load and run a GPU-enabled version to speed up certain pair styles. It uses one node and two GPUs for the computation. See the sample PBS script for details:

#PBS -N lammpsTest
#PBS -l nodes=1:ppn=8,feature=gpu
#PBS -l walltime=00:10:00
#PBS -S /bin/bash
#PBS -j oe
module switch mvapich2-1.4-gnu
module load fftw2-2.1.5-double-mvapich2-1.4-gnu
module load lammps-25Mar11
cd $PBS_O_WORKDIR
cp lj-gpu.in $TMPDIR
cd $TMPDIR
# One MPI process per GPU (2 GPUs per node)
mpiexec -np 2 lmp_osc < lj-gpu.in > out
cp $TMPDIR/* $PBS_O_WORKDIR

Here is a sample input with the necessary modifications to use a GPU pair style; it uses both GPUs. Please refer to the LAMMPS documentation for other pair styles that can be used in such simulations.

newton off
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 20 0 20 0 20
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut/gpu 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 0 all gpu force/neigh 0 1 1.0
fix 1 all nve
timestep 0.003
thermo 100
run 100
Please note that you cannot run more than two processes per node. More than two will cause the application to hang, since there are only two GPUs per node. The LAMMPS input and the PBS script must agree on the number of GPUs being used.
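As a sketch of how the two sides must line up, using the script and input shown above on a two-GPU Glenn node:

```shell
# PBS script side: launch exactly one MPI process per GPU.
# Glenn GPU nodes have two GPUs, so -np is at most 2.
mpiexec -np 2 lmp_osc < lj-gpu.in > out

# LAMMPS input side: the gpu fix names the matching device range,
#   fix 0 all gpu force/neigh 0 1 1.0
# where 0 and 1 are the first and last GPU IDs -- two devices,
# matching the two MPI processes above.
```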

Usage on Oakley

Set-up on Oakley

To load the default version of the LAMMPS module and set up your environment, use module load lammps. To select a particular software version, use module load lammps/version. For example, use module load lammps/12Feb12 to load LAMMPS version Feb12 on Oakley.

Using LAMMPS

Once a module is loaded, LAMMPS can be run with the following command:
lammps < input.file

To see information on the packages and executables for a particular installation, run the module help command, for example on Oakley:

module help lammps

Batch Usage on Oakley

When you log into oakley.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time, up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node; batch execution is preferable for large problems, since more resources can be used.

Interactive Batch Session

For an interactive batch session one can run the following command:

qsub -I -l nodes=1:ppn=1 -l walltime=00:20:00

which requests one core ( -l nodes=1:ppn=1 ) and a walltime of 20 minutes ( -l walltime=00:20:00 ). You may adjust the numbers per your need.

Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

/nfs/10/srb/workshops/compchem/lammps/

Below is a sample batch script for the Oakley Cluster. It asks for 24 processors (2 nodes with 12 cores each) and 10 hours of walltime. If the job exceeds 10 hours, it will be terminated.

#PBS -N chain  
#PBS -l nodes=2:ppn=12  
#PBS -l walltime=10:00:00  
#PBS -S /bin/bash  
#PBS -j oe  
module load lammps  
cd $PBS_O_WORKDIR  
pbsdcp chain.in $TMPDIR  
cd $TMPDIR  
lammps < chain.in  
pbsdcp -g '*' $PBS_O_WORKDIR

Usage on Ruby

Set-up on Ruby

To load the default version of the LAMMPS module and set up your environment, use module load lammps. To select a particular software version, use module load lammps/version. For example, use module load lammps/5Sep14 to load LAMMPS version 5Sep14 on Ruby.

Using LAMMPS

Once a module is loaded, LAMMPS can be run with the following command:
lammps < input.file

To see information on the packages and executables for a particular installation, run the module help command, for example on Ruby:

module help lammps

Batch Usage on Ruby

When you log into ruby.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time, up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session

For an interactive batch session one can run the following command:

qsub -I -l nodes=1:ppn=20:gpus=1 -l walltime=00:20:00 

which requests one whole node with 20 cores ( -l nodes=1:ppn=20 ) and one GPU ( gpus=1 ), for a walltime of 20 minutes ( -l walltime=00:20:00 ). You may adjust the numbers per your need.

Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

/nfs/10/srb/workshops/compchem/lammps/

Below is a sample batch script for the Ruby Cluster. It asks for 40 processors (2 nodes with 20 cores each) and 10 hours of walltime. If the job exceeds 10 hours, it will be terminated.

#PBS -N chain  
#PBS -l nodes=2:ppn=20  
#PBS -l walltime=10:00:00  
#PBS -S /bin/bash  
#PBS -j oe  
module load lammps  
cd $PBS_O_WORKDIR  
pbsdcp chain.in $TMPDIR  
cd $TMPDIR  
lammps < chain.in  
pbsdcp -g '*' $PBS_O_WORKDIR

Further Reading

LAMMPS home page: http://lammps.sandia.gov
