Search Documentation

Storage at OSC consists of servers, data storage subsystems, and networks that together provide a range of storage services to the OSC HPC systems. The current configuration consists of:

Cardinal
LS-DYNA was fully migrated from the Owens cluster to the Cardinal cluster on December 17, 2024.

LS-DYNA is a general purpose finite element code for simulating complex structural problems, specializing in nonlinear, transient dynamic problems using explicit integration. LS-DYNA is one of the codes developed at Livermore Software Technology Corporation (LSTC).

Ascend, Cardinal, Pitzer

The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code designed for high-performance simulation of large atomistic systems.  LAMMPS generally scales well on OSC platforms, provides a variety of modeling techniques, and offers GPU accelerated computation.
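As a sketch of how a LAMMPS run might be submitted through Slurm at OSC (the module name, binary name, input file, and project code below are illustrative assumptions, not taken from this page):

```shell
#!/bin/bash
# Illustrative Slurm job script for LAMMPS; names are placeholders
#SBATCH --job-name=lammps-example
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --time=1:00:00
#SBATCH --account=PAS1234    # placeholder; replace with your project code

module load lammps           # assumed module name; verify with `module spider lammps`
srun lmp -in in.melt         # `lmp` binary and input file are placeholders
```

The exact module version and executable name vary by cluster, so check the per-cluster LAMMPS page before submitting.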

Ascend, Cardinal, Pitzer

Intel provides compilers for both C/C++ and Fortran.
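A minimal sketch of compiling with the Intel toolchain after loading its module (the module name and compiler drivers shown are assumptions; the classic drivers icc/icpc/ifort may apply instead of the newer oneAPI ones):

```shell
# Illustrative compile commands; verify module names with `module spider intel`
module load intel
icx  -O2 -o hello_c   hello.c     # C   (classic driver: icc)
icpx -O2 -o hello_cpp hello.cpp   # C++ (classic driver: icpc)
ifx  -O2 -o hello_f   hello.f90   # Fortran (classic driver: ifort)
```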

Ascend, Cardinal, Pitzer
OSC has recently switched schedulers from PBS to Slurm.
Please see the Slurm migration pages for information about how to convert commands.
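As a quick sketch, the most common PBS commands map to Slurm equivalents like this (see the migration pages for the full mapping):

```shell
# PBS command          ->  Slurm equivalent
# qsub job.sh          ->  sbatch job.sh       # submit a job script
# qstat -u $USER       ->  squeue -u $USER     # list your queued/running jobs
# qdel <jobid>         ->  scancel <jobid>     # cancel a job
```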

Batch processing

Efficiently using computing resources at OSC requires using the batch processing system. In batch processing, you submit a job script that describes the resources you need and the commands to run; the scheduler then runs the job when those resources become available.
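To make the idea concrete, here is a minimal Slurm batch script sketch (the project code is a placeholder, not taken from this page):

```shell
#!/bin/bash
# Minimal illustrative Slurm batch script
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=0:10:00
#SBATCH --account=PAS1234    # placeholder; replace with your project code

echo "Running on $(hostname)"
```

You would submit this with `sbatch example.sh` and monitor it with `squeue -u $USER`.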

Ascend, Cardinal, Pitzer

Shell and initialization

Ascend, Cardinal, Pitzer

HDF5 is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids.
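A sketch of a typical workflow for building against HDF5 and inspecting a file's groups and datasets at OSC (the module name, source file, and output file are illustrative assumptions; `h5cc` and `h5dump` are standard HDF5 tools):

```shell
# Illustrative HDF5 workflow; verify module names with `module spider hdf5`
module load hdf5
h5cc -o write_h5 write_h5.c   # h5cc is the HDF5 C compiler wrapper
./write_h5                    # assumed to create data.h5
h5dump data.h5                # prints the file's groups and datasets
```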

Ascend, Cardinal, Pitzer

GROMACS is a versatile package of molecular dynamics simulation programs. It is primarily designed for biochemical molecules, but it has also been used on non-biological systems. GROMACS generally scales well on OSC platforms. Starting with version 4.6, GROMACS includes GPU acceleration.
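As an illustrative sketch of a GROMACS run under Slurm (module name, input files, MPI binary name, and project code are all assumptions, not taken from this page):

```shell
#!/bin/bash
# Illustrative GROMACS Slurm script; names are placeholders
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48
#SBATCH --time=2:00:00
#SBATCH --account=PAS1234    # placeholder; replace with your project code

module load gromacs          # assumed module name; verify with `module spider gromacs`
gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr   # preprocess inputs
srun gmx_mpi mdrun -s topol.tpr                              # MPI binary name may differ
```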

Cardinal, Pitzer

The only way to access significant resources on the HPC machines is through the batch system.
