CCAPP Dedicated Compute

The CCAPP condo is available on the Cardinal cluster beginning Monday, November 18, 2024. The CCAPP condo on the Pitzer cluster will be terminated on December 30, 2024.

Dedicated compute services at OSC (also referred to as the Condo model) involve users purchasing one or more compute nodes for the shared cluster while OSC provides the infrastructure, maintenance, and services. The CCAPP condo on the Cardinal cluster is owned by the Center for Cosmology and AstroParticle Physics (CCAPP) at OSU.

Hardware

6 dense compute nodes, each with:

  • 2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 2.0 GHz) processors 

  • 128 GB HBM2e and 512 GB DDR5 memory 

  • 1.6 TB NVMe local storage 

  • NDR200 Infiniband

  • 96 usable cores with 6 GB memory per core

Connecting

The CCAPP condo is accessible only to users under project account PCON0003. Condo users are guaranteed access to their nodes within 4 hours of submitting a job to their respective queue on the cluster, provided that resources are available.

Before you can use the condo, you need to log in to Cardinal at OSC by connecting to the following hostname:

cardinal.osc.edu

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@cardinal.osc.edu

From there, you can run programs interactively (only for short jobs and test jobs) or through batch requests. You get access to the condo by adding --account=PCON0003 to your request. You can also run programs outside the condo by adding --account=PAS1005 (or your project code, if it is different) to your request, which gives you access to the standard non-condo Cardinal compute nodes. For more information on the Cardinal cluster, please refer to the Cardinal Documentation page.
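
For example, a short interactive test session on the condo could be requested with Slurm's salloc; the core count and walltime below are placeholders, so adjust them to your needs:

salloc --account=PCON0003 --nodes=1 --ntasks-per-node=4 --time=00:30:00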

The PBS compatibility layer is disabled on Cardinal, so PBS batch scripts will NOT work on Cardinal (they do still work on the Pitzer cluster). In addition, you need to use the sbatch command (instead of qsub) to submit jobs. Refer to the Slurm migration page to understand how to use Slurm.

For example, specify your project code by:

#SBATCH --account=PCON0003

Jobs may request partial nodes, including both serial (nodes=1) and multi-node (nodes>1) jobs.
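
For example, a serial job that needs only part of a node might request a few cores as follows (the core count here is illustrative):

#SBATCH --nodes=1 --ntasks-per-node=4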

To request 2 full Cardinal nodes:

#SBATCH --nodes=2 --ntasks-per-node=96
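
Putting these pieces together, a minimal condo batch script might look like the following sketch; the job name, walltime, and the my_program executable are placeholders, not part of any OSC template:

#!/bin/bash
#SBATCH --account=PCON0003
#SBATCH --job-name=condo_test
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=96
#SBATCH --time=01:00:00

# Launch the program across the allocated nodes; my_program is a placeholder
srun ./my_program

Once saved, the script is submitted with sbatch, for example: sbatch condo_test.sh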

File Systems

CCAPP's condo accesses the same OSC mass storage environment as our other clusters. Therefore, condo users have the same home directory as on other clusters. Full details of the storage environment are available in our storage environment guide.

Software Environment

Users on the condo nodes have access to all software packages installed on the Cardinal cluster.

The Cardinal cluster runs on Red Hat Enterprise Linux (RHEL) 9, introducing several software-related changes compared to the RHEL 7 environment used on the Pitzer cluster. These updates provide access to modern tools and libraries but may also require adjustments to your workflows. Please refer to the Cardinal Software Environment page for key software changes and available software.

Cardinal uses the same module system as the other clusters. 

Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded, and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.
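
For example (gcc is used here only as an illustrative package name):

module avail          # list modules available to load
module load gcc       # add a package to your environment
module list           # show currently loaded modules
module spider gcc     # search all modules, including those hidden by dependencies or conflicts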

You can keep up to date on the software packages that have been made available on Cardinal by viewing the Software by System page and selecting the Cardinal system.

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.

Getting Support

Here are the presentations on Introduction to OSC Services, Projects & Condos at OSC and How to Bundle Jobs at OSC. These presentations were given in September 2017.
