The condo model is an arrangement in which participants (condo owners) purchase one or more compute nodes for the shared cluster, while OSC provides all infrastructure, maintenance, and services. The CCAPP Condo on the Ruby cluster is owned by the Center for Cosmology and AstroParticle Physics (CCAPP) at OSU. Prof. Annika Peter has been heavily involved in specifying its requirements.
Detailed system specifications:
21 total nodes
20 cores per node
64 GB of memory per node
1 TB of local disk space
Intel Xeon E5 2670 V2 CPUs
HP SL250 Nodes
FDR IB Interconnect
CCAPP Condo is accessible only to users under project account PCON0003. Condo users are guaranteed access to their nodes within 4 hours of submitting a job to their respective queue on the cluster, provided that resources are available.
Before getting access to the condo, you need to log in to Ruby at OSC by connecting to the following hostname:
You can either use an SSH client application or execute ssh on the command line in a terminal window.
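As a minimal sketch, the command-line form looks like the following; the username is a placeholder, and the hostname shown is an assumption (use the hostname given above):

```shell
# Connect to the Ruby login node over SSH.
# "your_osc_username" is a placeholder; the hostname is an assumption.
ssh your_osc_username@ruby.osc.edu
```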
From there, you can run programs interactively (only for small test jobs) or through batch requests. You get access to the condo by adding
-A PCON0003 to your request. You can also run programs outside the condo by adding
-A PAS1005 (or your project code, if it is different) to your request, which gives you access to the "normal" Ruby compute nodes (those outside the condos), some of which have Intel Xeon Phi 5110p accelerators or NVIDIA Tesla K40 GPUs. For more information on the Ruby cluster, please refer to the Ruby Documentation Page.
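As an illustrative sketch, a batch script charging the condo account might look like the following. The job name, resource amounts, and executable are assumptions; only the -A PCON0003 account line comes from the text above:

```shell
#!/bin/bash
# Illustrative PBS batch script for the CCAPP condo.
# Job name, node/walltime requests, and executable are placeholders.
#PBS -N my_condo_job
#PBS -l nodes=1:ppn=20
#PBS -l walltime=1:00:00
#PBS -A PCON0003

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
./my_program
```

Submit the script with qsub, e.g. `qsub my_condo_job.pbs`.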
For example, you can request 2 CCAPP condo nodes for 2 hours with the following command:
qsub -l nodes=2:ppn=20 -l walltime=2:00:00 -A PCON0003
CCAPP's condo accesses the same OSC mass storage environment as our other clusters. Therefore, condo users have the same home directory as on the Ruby and Oakley clusters. Full details of the storage environment are available in our storage environment guide.
Users on the condo nodes have access to all software packages installed on the Ruby cluster. By default, the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 are loaded. Use
module load <package> to add a software package to your environment. Use
module list to see which modules are currently loaded and
module avail to see which modules are available to load. To search for modules that may not be visible due to dependencies or conflicts, use
module spider.
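Putting the commands above together, a typical session might look like the following; the package name hdf5 is only an illustrative assumption, not a package the text guarantees is installed:

```shell
# See which modules are currently loaded (compiler, MPI, etc.).
module list

# Browse all modules available to load.
module avail

# Search for a module that may be hidden by dependencies or conflicts.
# "hdf5" is an example package name, not confirmed by the text.
module spider hdf5

# Add the package to your environment.
module load hdf5
```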
You can keep informed of the software packages that have been made available on Ruby by viewing the Software by System page and selecting the Ruby system.
Using OSC Resources
For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.