Search Documentation

Cardinal
We have prepared a "Getting Started with Cardinal" course on the ScarletCanvas platform. This course offers essential guidance for migrating jobs from other clusters to the Cardinal cluster at the Ohio Supercomputer Center (OSC).
Cardinal

The Cardinal cluster is now running on Red Hat Enterprise Linux (RHEL) 9, introducing several software-related changes compared to the RHEL 7 environment used on the Pitzer cluster. These updates provide access to modern tools and libraries but may also require adjustments to your workflows. Key software changes and available software are outlined in the following sections.

Cardinal

Overview of the High Bandwidth Memory on Cardinal's Dense compute nodes

Cardinal

Compilers

The Cardinal cluster supports C, C++, and Fortran programming languages. The available compiler suites include Intel, oneAPI, and GCC. By default, the Intel development toolchain is loaded. The table below lists the compiler commands and recommended options for compiling serial programs. For more details and best practices, please refer to our compilation guide.

Cardinal

These are the public key fingerprints for Cardinal:

cardinal: ssh_host_rsa_key.pub = 73:f2:07:6c:76:b4:68:49:86:ed:ef:a3:55:90:58:1b
cardinal: ssh_host_ed25519_key.pub = 93:76:68:f0:be:f1:4a:89:30:e2:86:27:1e:64:9c:09
cardinal: ssh_host_ecdsa_key.pub = e0:83:14:8f:d4:c3:c5:6c:c6:b6:0a:f7:df:bc:e9:2e

PyTorch Fully Sharded Data Parallel (FSDP) is used to speed up model training by parallelizing training data as well as sharding model parameters, optimizer states, and gradients across multiple PyTorch instances.
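
As an illustration of the idea, here is a minimal FSDP training sketch in Python. It assumes a torchrun launch with one process per GPU and an NCCL backend; the model, batch sizes, and script name are placeholders rather than an OSC-provided example.

import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; FSDP shards its parameters, gradients,
    # and optimizer state across all ranks.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).cuda()
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    # Each rank would normally load its own shard of the training data.
    inputs = torch.randn(32, 1024, device="cuda")
    targets = torch.randint(0, 10, (32,), device="cuda")
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

A script like this would typically be launched with something along the lines of torchrun --nproc_per_node=4 fsdp_example.py (the file name is hypothetical).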

 

Cardinal
The PBS compatibility layer is disabled on Cardinal, so PBS batch scripts will not work on Cardinal, although they still work on the Pitzer cluster. You also need to use the sbatch command (instead of qsub) to submit jobs. Refer to the Slurm migration page to learn how to use Slurm.

Memory limit

Ascend, Cardinal, Pitzer

MVAPICH is a standard library for performing parallel processing using a distributed-memory model. 
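As a rough illustration of the distributed-memory model, the sketch below uses the mpi4py Python bindings (an assumption; the snippet above does not mention them) to sum a value across MPI ranks. The underlying MPI library, such as MVAPICH, handles the inter-process communication.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank contributes its own value; allreduce sums them across all ranks.
local_value = rank + 1
total = comm.allreduce(local_value, op=MPI.SUM)

if rank == 0:
    print(f"Sum across {size} ranks: {total}")

Such a script is typically started with an MPI launcher, for example mpiexec -n 4 python allreduce_example.py or the scheduler's equivalent (the file name is hypothetical).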

PyTorch Distributed Data Parallel (DDP) is used to speed up model training by parallelizing training data across multiple identical model instances.
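
For comparison with the FSDP example above, here is a minimal DDP sketch in Python, again assuming a torchrun launch with one process per GPU; each rank keeps a full replica of the model and DDP averages gradients across ranks. The model and data are placeholders.

import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; every rank holds an identical copy.
    model = nn.Linear(1024, 10).cuda()
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Each rank trains on its own portion of the data; gradients are
    # averaged across ranks during backward().
    inputs = torch.randn(32, 1024, device="cuda")
    targets = torch.randint(0, 10, (32,), device="cuda")
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()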

 

Cardinal

AutoDock is a suite of automated docking tools. It is designed to predict how small molecules, such as substrates or drug candidates, bind to a receptor of known 3D structure. AutoDock has applications in X-ray crystallography, structure-based drug design, lead optimization, and related areas.
