On September 22nd, OSC switched to Slurm for job scheduling and resource management on the Pitzer Cluster, along with the deployment of the new Pitzer hardware. We are in the process of updating the example job scripts for each software package. If a Slurm example is not yet available, please consult our general Slurm information page or contact OSC Help.

LAMMPS

The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code designed for high-performance simulation of large atomistic systems. LAMMPS generally scales well on OSC platforms, provides a variety of modeling techniques, and offers GPU-accelerated computation.

Availability and Restrictions

Versions

LAMMPS is available on all clusters. The following versions are currently installed at OSC:

Version  Ruby  Owens  Pitzer
5Sep14   P
7Dec15   PC
14May16  PC*   P
31Mar17        PC
16Mar18  PC    PC
22Aug18        PC     PC
5Jun19         PC     PC
3Mar20         PC*    PC*
* Current default version; S = serial executables; P = parallel; C = CUDA
* IMPORTANT NOTE: You must load the correct compiler and MPI modules before you can load LAMMPS. To determine which modules you need, use module spider lammps/{version}. Some LAMMPS versions are available with multiple compiler and MPI versions; in general, we recommend using the latest versions. (In particular, mvapich2/2.3.2 is recommended over 2.3.1 and 2.3; see the known issue.)

You can use module spider lammps to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
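As a sketch, finding a version and loading it together with its prerequisites might look like the following; the compiler and MPI module names shown (intel/18.0.3, mvapich2/2.3.2) are illustrative examples, and you should use whatever module spider actually reports on your cluster:

```shell
# List all LAMMPS modules available on the current cluster
module spider lammps

# Show which compiler/MPI modules a specific version requires
module spider lammps/22Aug18

# Load the prerequisites reported above, then LAMMPS itself
# (intel/18.0.3 and mvapich2/2.3.2 are example module names)
module load intel/18.0.3 mvapich2/2.3.2
module load lammps/22Aug18
```

These commands only work on the cluster login/compute nodes where the Lmod module system is available.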

Access

LAMMPS is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Sandia National Laboratories; open source

Usage

Usage on Ruby

Set-up

To load the default version of the LAMMPS module and set up your environment, use module load lammps. To select a particular software version, use module load lammps/version. For example, use module load lammps/5Sep14 to load LAMMPS version 5Sep14.

Using LAMMPS

Once a module is loaded, LAMMPS can be run with the following command:
lammps < input.file

To see information on the packages and executables for a particular installation, run the module help command, for example:

module help lammps

Batch Usage

When you log into ruby.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session

For an interactive batch session one can run the following command:

qsub -I -l nodes=1:ppn=20:gpus=1 -l walltime=00:20:00 

which requests one whole node with 20 cores (-l nodes=1:ppn=20) and one GPU (gpus=1), for a walltime of 20 minutes (-l walltime=00:20:00). You may adjust the numbers per your need.

Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

~srb/workshops/compchem/lammps/

Below is a sample batch script. It asks for 40 processors and 10 hours of walltime. If the job runs beyond 10 hours, it will be terminated.

#PBS -N chain  
#PBS -l nodes=2:ppn=20  
#PBS -l walltime=10:00:00  
#PBS -S /bin/bash  
#PBS -j oe  
module load lammps  
cd $PBS_O_WORKDIR  
pbsdcp chain.in $TMPDIR  
cd $TMPDIR  
lammps < chain.in  
pbsdcp -g '*' $PBS_O_WORKDIR
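Assuming the script above is saved as chain.pbs (the filename is arbitrary), it can be submitted and monitored from a login node:

```shell
qsub chain.pbs     # submit the job; prints the assigned job ID
qstat -u $USER     # check the status of your queued and running jobs
```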

Usage on Owens

Set-up

To load the default version of the LAMMPS module and set up your environment, use module load lammps. To select a particular software version, use module load lammps/version. For example, use module load lammps/14May16 to load LAMMPS version 14May16.

Using LAMMPS

Once a module is loaded, LAMMPS can be run with the following command:
lammps < input.file

To see information on the packages and executables for a particular installation, run the module help command, for example:

module help lammps

Batch Usage

By connecting to owens.osc.edu you are logged into one of the login nodes, which have computing resource limits. To gain access to the cluster's main computational resources, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session

For an interactive batch session one can run the following command:

qsub -I -l nodes=1:ppn=28:gpus=1 -l walltime=00:20:00 

which requests one whole node with 28 cores (-l nodes=1:ppn=28) and one GPU (gpus=1), for a walltime of 20 minutes (-l walltime=00:20:00). You may adjust the numbers per your need.

Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

~srb/workshops/compchem/lammps/

Below is a sample batch script. It asks for 56 processors and 10 hours of walltime. If the job runs beyond 10 hours, it will be terminated.

#PBS -N chain  
#PBS -l nodes=2:ppn=28  
#PBS -l walltime=10:00:00  
#PBS -S /bin/bash  
#PBS -j oe  
module load lammps  
cd $PBS_O_WORKDIR  
pbsdcp chain.in $TMPDIR  
cd $TMPDIR  
lammps < chain.in  
pbsdcp -g '*' $PBS_O_WORKDIR

Usage on Pitzer

Set-up

To load the default version of the LAMMPS module and set up your environment, use module load lammps. To select a particular software version, use module load lammps/version.

Using LAMMPS

Once a module is loaded, LAMMPS can be run with the following command:
lammps < input.file

To see information on the packages and executables for a particular installation, run the module help command, for example:

module help lammps

Batch Usage

To access a cluster's main computational resources, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session

For an interactive batch session one can run the following command:

qsub -I -l nodes=1:ppn=40:gpus=1 -l walltime=00:20:00

which requests one whole node with 40 cores (-l nodes=1:ppn=40) and one GPU (gpus=1), for a walltime of 20 minutes (-l walltime=00:20:00). You may adjust the numbers per your need.
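Because Pitzer has moved to Slurm (see the note at the top of this page), a roughly equivalent interactive request under Slurm might look like the following; the flags are standard Slurm, but check OSC's Slurm documentation for site-specific conventions:

```shell
# Request one whole node with one GPU for 20 minutes (Slurm sketch);
# --exclusive asks for the full node regardless of its core count
salloc --nodes=1 --exclusive --gpus-per-node=1 --time=00:20:00
```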

Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

~srb/workshops/compchem/lammps/

Below is a sample batch script. It asks for 80 processors (two 40-core Pitzer nodes) and 10 hours of walltime. If the job runs beyond 10 hours, it will be terminated.

#PBS -N chain  
#PBS -l nodes=2:ppn=40  
#PBS -l walltime=10:00:00  
#PBS -S /bin/bash  
#PBS -j oe  
module load lammps  
cd $PBS_O_WORKDIR  
pbsdcp chain.in $TMPDIR  
cd $TMPDIR  
lammps < chain.in  
pbsdcp -g '*' $PBS_O_WORKDIR
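Since Pitzer now runs Slurm, the PBS script above translates roughly as follows. The #SBATCH directives are standard Slurm, but treat this as a sketch: the per-node task count should match the node type you request, and OSC's Slurm migration documentation should be consulted for the recommended replacements for pbsdcp.

```shell
#!/bin/bash
#SBATCH --job-name=chain
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40   # 40 cores per original Pitzer node; adjust as needed
#SBATCH --time=10:00:00

module load lammps
cd $SLURM_SUBMIT_DIR
sbcast chain.in $TMPDIR/chain.in   # stage the input onto each node's local disk
cd $TMPDIR
lammps < chain.in
# Copy results back from the batch node's $TMPDIR; pbsdcp -g has no exact
# one-line Slurm equivalent, so see OSC's docs for multi-node output gathering.
cp -p * $SLURM_SUBMIT_DIR
```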

Further Reading
