"Condo model" refers to an arrangment when a client leases one or more compute nodes for the shared cluster while OSC provides all infrastructure, as well as maintenance and services. BMI's Condo on the Owens Cluster is leased by the Biomedical Informatics Institute at The Ohio State University.
On Owens, the condo specifications are:
- 32 nodes
- 28 cores per node
- 128 GB of memory per node
- 1 TB of local disk space
- Intel Xeon E5-2680 v4 CPUs
- Dell PowerEdge C6320 nodes
- EDR InfiniBand interconnect
In addition, 114 TB of high-performance storage has been leased as part of the condo.
BMI's condo is available only to approved users. Condo users are guaranteed access to their nodes; idle condo nodes may be made available to general users for jobs shorter than 4 hours.
To access the condo, you need to log in to Owens at OSC by connecting to the following hostname:
You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:
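For example, a command-line connection looks like the following sketch, where `<username>` and `<owens-hostname>` are placeholders for your OSC username and the Owens login hostname:

```shell
# Replace the placeholders with your OSC username and the Owens login hostname.
ssh <username>@<owens-hostname>
```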
From there, you can run programs interactively (for small test jobs only) or through batch requests. For more information on the Owens Cluster, please refer to the Owens Documentation Page.
For example, the following command requests 2 condo nodes for 2 hours:
qsub -l nodes=2:ppn=28 -l walltime=2:00:00 -A PCON0005
You can also run jobs outside the condo under your academic project code. For example, the following command requests 1 node on the Owens cluster for 2 hours, where "PAS????" is replaced with an academic project code you are eligible to charge against:
qsub -l nodes=1:ppn=28 -l walltime=2:00:00 -A PAS????
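The same resource requests can also be placed in a batch script and submitted with qsub; a minimal sketch, in which the job name, loaded module, and executable are placeholders for your own workload:

```shell
#PBS -N condo_example          # job name (placeholder)
#PBS -l nodes=1:ppn=28         # one full condo node
#PBS -l walltime=2:00:00       # two-hour limit
#PBS -A PCON0005               # charge against the condo project code

cd $PBS_O_WORKDIR              # start in the directory the job was submitted from
module load intel              # load your software environment (placeholder)
./my_program                   # placeholder executable
```

Saving the script as, say, job.pbs, you would submit it with qsub job.pbs.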
Owens also allows single-core scheduling. For more information, please refer to the general Owens documentation.
BMI's condo uses the same OSC mass storage environment as our other clusters. Therefore, condo users have the same home directory as on the Owens, Ruby, and Oakley clusters. Large amounts of project storage are available through our Project storage service. Full details of the storage environment are available in our storage environment guide.
Where to Find Your Data
We have migrated all data from the Bucki cluster to OSC's storage environment. You can locate your data as follows:
| Bucki Path | OSC Path |
| --- | --- |

| Owens Path | OSC Path | Note |
| --- | --- | --- |
Users on the condo nodes have access to all software packages installed on the Oakley Cluster or Owens Cluster. By default, you will have the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 loaded. Use module load <package> to add a software package to your environment, module list to see what modules are currently loaded, and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.
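Put together, a typical module session looks like the following sketch; "python" here is only an example package name:

```shell
module avail           # see the modules that are available to load
module load python     # add a software package to your environment
module list            # see what modules are currently loaded
module spider python   # search for modules hidden by dependencies or conflicts
```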
You can stay informed about the software packages available on Oakley by viewing the Software by System page and selecting the Oakley system.
You can see what packages are available on Owens by viewing the Software by System page and selecting the Owens system. During the Owens Early Access period, we are documenting software on the Owens Early User Information page.
You can also see the list of previously installed bioinformatics software on the bioinformatics & biology software page.
Using OSC Resources
For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.
Contact OSC Help if you have any other questions, or need other assistance.