Honscheid Condo on Pitzer Cluster

In the condo model, participants (condo owners) lease one or more compute nodes on the shared cluster while OSC provides all infrastructure, as well as maintenance and services. The Honscheid Condo on the Pitzer cluster is owned by Klaus Honscheid from OSU Physics.

Hardware

Detailed system specifications:

  • 4 total nodes

    • 40 cores per node

    • 192 GB of memory per node

    • 1 TB of local disk space

  • Intel Xeon 6148 CPUs

  • Dell PowerEdge C6420 Nodes

  • EDR IB Interconnect

    • Low latency

    • High throughput

    • High quality-of-service

Connecting

The Honscheid Condo is only accessible by users under project account PCON0008. Condo users have priority access to their hardware; their jobs will preempt running jobs submitted by non-PCON0008 users.

Before getting access to the condo, you need to log in to Pitzer at OSC by connecting to the following hostname:

pitzer.osc.edu

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@pitzer.osc.edu

From there, you can run programs interactively (only for small test jobs) or through batch requests. You get access to the condo by adding -A PCON0008 to your request. You can also run programs outside the condo by adding -A PAS1330 (or your project code, if it is different) to your request, which gives you access to the "normal" Pitzer compute nodes (those not part of a condo). For more information on the Pitzer cluster, please refer to the Pitzer Documentation Page.

For example, the following command requests 2 condo nodes for 2 hours:

qsub -l nodes=2:ppn=40 -l walltime=2:00:00 -A PCON0008
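The same resource request can also be placed in a batch script and submitted with qsub. Below is a minimal sketch using PBS-style directives matching the command above; the script name, the `cd` step, and `./my_program` are placeholders, not part of the Honscheid Condo setup:

```shell
#!/bin/bash
# Hypothetical batch script requesting 2 condo nodes (40 cores each) for 2 hours
# on project account PCON0008.
#PBS -l nodes=2:ppn=40
#PBS -l walltime=2:00:00
#PBS -A PCON0008

cd $PBS_O_WORKDIR   # run from the directory the job was submitted from
./my_program        # placeholder for your own executable
```

You would then submit the script with, e.g., `qsub myjob.sh`.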


File Systems

Condo users access the same OSC mass storage environment as our other users. Therefore, condo users have the same home directory as on the Ruby and Owens clusters. Full details of the storage environment are available in our storage environment guide.

Software Environment

Users on the condo nodes have access to all software packages installed on the Pitzer cluster. By default, you will have the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 loaded. Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.
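The module commands above can be combined into a short session like the following. This is an illustrative sketch; `openmpi` is just an example search term, and `<package>` stands for whichever package you need:

```shell
module list               # show modules currently loaded in your environment
module avail              # list modules available to load right now
module spider openmpi     # search all modules, including ones hidden by
                          # dependencies or conflicts (openmpi is an example)
module load <package>     # replace <package> with the package you need
```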

You can keep informed of the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.

Getting Support

Contact OSC Help or OSU Physics support if you have any other questions or need assistance.
