VASP

The Vienna Ab initio Simulation Package (VASP) is a suite for quantum-mechanical molecular dynamics (MD) simulations and electronic structure calculations.

Availability and Restrictions

Access

Due to licensing considerations, OSC does not provide general access to this software.

However, we are available to assist with the configuration of individual research-group installations on all our clusters. See the VASP FAQ page for information regarding licensing.

Usage

Using VASP

See the VASP documentation page for tutorial and workshop materials.

Building and Running VASP

If you have a VASP license, you may build and run VASP on any OSC cluster. The instructions given here are for VASP 5.4.1; newer versions should be similar.

Most VASP users at OSC run VASP with MPI and without multithreading. If you need assistance with a different configuration, please contact oschelp@osc.edu.

You can build and run VASP using either IntelMPI or MVAPICH2; performance is similar for the two MPI families, and instructions are given for both. The IntelMPI build is simpler and more standard. MVAPICH2 is the default MPI installation at OSC; however, VASP failed with some earlier MVAPICH2 versions, so building with MVAPICH2 2.3.2 or newer is recommended.
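To see which versions are installed on the cluster you are using, you can query the module system (OSC uses Lmod; the versions listed will vary by cluster):

module spider mvapich2    # list available MVAPICH2 versions
module spider intelmpi    # list available IntelMPI versions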

These build instructions assume that you have already unpacked the VASP distribution, patched it if necessary, and are working in the vasp directory. They also assume that you have the default module environment loaded at the start.
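For reference, the assumed starting state might look like the following sketch (the archive name is illustrative; use the files provided with your own VASP license):

tar xf vasp.5.4.1.tar.gz    # unpack the distribution (and apply any patches)
cd vasp.5.4.1
module list                 # confirm the default module environment is loaded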

Building with IntelMPI

1. Copy arch/makefile.include.linux_intel and rename it makefile.include.

2. Edit makefile.include to replace the two lines

OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
$(MKLROOT)/interfaces/fftw3xf/libfftw3xf_intel.a

with one line

OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o

3. Make sure the FCL line is

FCL = mpiifort -mkl=sequential

4. Load modules and build the code. (Using the latest IntelMPI may yield the best performance; as of October 2019, the corresponding modules are intel/19.0.5 and intelmpi/2019.3.)

module load intelmpi
make

5. Add the modules used for the build, e.g., module load intelmpi, to your job script.
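Putting steps 4 and 5 together, a minimal job-script sketch for an IntelMPI build might look like this (the job name, resource requests, and executable path are illustrative; ppn=28 corresponds to Owens):

#PBS -N vasp_test
#PBS -l nodes=2:ppn=28
#PBS -l walltime=1:00:00

cd $PBS_O_WORKDIR
module load intelmpi
mpiexec path_to_vasp/vasp_std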

Building with MVAPICH2

1. Copy arch/makefile.include.linux_intel and rename it makefile.include.

2. Edit makefile.include to replace mpiifort with mpif90

FC         = mpif90
FCL        = mpif90 -mkl=sequential

3. Replace the BLACS, SCALAPACK, OBJECTS, INCS, and LLIBS lines with

BLACS      =
SCALAPACK  = $(SCALAPACK_LIBS)

OBJECTS    = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
INCS       = $(FFTW3_FFLAGS)

LLIBS      = $(SCALAPACK) $(FFTW3_LIBS_MPI) $(LAPACK) $(BLAS)

4. Load modules and build the code. (Using the latest MVAPICH2 is recommended; as of October 2019, this means loading the mvapich2/2.3.2 module as well.)

module load mvapich2/2.3.2
module load scalapack
module load fftw3
make

5. Add the modules used for the build, e.g., module load fftw3, to your job script.
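The corresponding job-script sketch for an MVAPICH2 build loads the same modules used at build time (again, the resource requests and executable path are illustrative):

#PBS -N vasp_test
#PBS -l nodes=2:ppn=28
#PBS -l walltime=1:00:00

cd $PBS_O_WORKDIR
module load mvapich2/2.3.2
module load scalapack
module load fftw3
mpiexec path_to_vasp/vasp_std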

Building for GPUs

The "GPU Stuff" section in arch/makefile.include.linux_intel_cuda is generic.  It can be updated for OSC clusters using the environment variables defined by a cuda module.  The OSC_CUDA_ARCH environment variables defined by cuda modules on all clusters show the specific CUDA compute capabilities.  Below we have combined them as of October 2019 so that the resulting executable will run on any OSC cluster.  In addition to the instructions above, here are the specific CUDA changes and the commands for building a gpu executable.

Edits:

CUDA_ROOT         = $(CUDA_HOME)
GENCODE_ARCH      = -gencode=arch=compute_35,code=\"sm_35,compute_35\" \
                    -gencode=arch=compute_60,code=\"sm_60,compute_60\" \
                    -gencode=arch=compute_70,code=\"sm_70,compute_70\"

Commands:

module load cuda
make gpu

See this VASP Manual page and this NVIDIA page for reference.
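To see the compute capabilities supported on the cluster you are building on, you can inspect the variable after loading the cuda module:

module load cuda
echo $OSC_CUDA_ARCH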

Running VASP generally

Be sure to load the appropriate modules in your job script based on your build configuration, as indicated above. If you have built with -mkl=sequential, you should be able to run VASP as follows:

mpiexec path_to_vasp/vasp_std

If you have a problem with too many threads, you may need to add this line (or equivalent) near the top of your script:

export OMP_NUM_THREADS=1
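For example, the relevant lines of a job script for an IntelMPI build would be ordered as follows (a sketch; substitute your own executable path):

module load intelmpi
export OMP_NUM_THREADS=1
mpiexec path_to_vasp/vasp_std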

Running VASP with GPUs

See this VASP Manual page and this NVIDIA page for feature restrictions, input requirements, and performance-tuning examples. To achieve maximum performance, benchmarking of your particular calculation is essential.

If you encounter a CUDA error running a GPU enabled executable, such as:

CUDA Error in cuda_mem.cu, line 44: all CUDA-capable devices are busy or unavailableFailed to register pinned memory!

then you may need to use the default compute mode, which can be done by adding this line (or equivalent) near the top of your script, e.g., for Owens:

#PBS -l nodes=1:ppn=28:gpus=1:default
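A complete GPU job-script sketch for Owens might then look like this (the executable name vasp_gpu is typical of the VASP 5.4.1 GPU port; resource requests and paths are illustrative):

#PBS -N vasp_gpu_test
#PBS -l nodes=1:ppn=28:gpus=1:default
#PBS -l walltime=1:00:00

cd $PBS_O_WORKDIR
module load cuda
module load intelmpi
mpiexec path_to_vasp/vasp_gpu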
