BMI Condo on the Owens Cluster

"Condo model" refers to an arrangment when a client leases one or more compute nodes for the shared cluster while OSC provides all infrastructure, as well as maintenance and services. BMI's Condo on the Owens Cluster is leased by the Biomedical Informatics Institute at The Ohio State University. 

Hardware

On Owens, the condo specifications are:

  • 32 nodes
    • 28 cores per node
    • 128 GB of memory per node
    • 1 TB of local disk space
  • Intel Xeon E5-2680 v4 CPUs
  • Dell PowerEdge C6320 nodes
  • EDR IB Interconnect

In addition, 114 TB of high-performance storage has been leased as part of the condo.

Connecting

BMI's condo is available only to approved users. Condo users are guaranteed access to their nodes; idle condo nodes may be made available to general users for jobs of less than 4 hours in duration.

To access the condo, you need to log in to Owens at OSC by connecting to the following hostname:

owens.osc.edu

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@owens.osc.edu

From there, you can run programs interactively (only for small and test jobs) or through batch requests. For more information on the Owens Cluster, please refer to the Owens Documentation Page.
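
For instance, a short interactive session on one condo node can be requested with qsub's interactive flag (a sketch; adjust the walltime, node count, and project code to your needs):

qsub -I -l nodes=1:ppn=28 -l walltime=1:00:00 -A PCON0005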

 

BMI Project Codes

These are the project codes for users authorized to charge to the BMI condo. Some of these PIs may have additional projects. Please contact OSC Help if additional users should be authorized to use the BMI condo, or if you need assistance with other administrative tasks related to these projects or user accounts.

PROJECT CODE   PRINCIPAL INVESTIGATOR   TITLE          USE
PCON0005       Philip Payne             BMI Condo      Academic research on condo node
PAS1100        Albert Lai               Startup        SUG allocated academic research
PAS1207        Kevin Coombes            Startup        SUG allocated academic research
PAS1204        Ewy Mathe                BMI Transfer   SUG allocated academic research
PAS0414        Metin Gurcan             Startup        SUG allocated academic research
PAS1208        Jeffrey Parvin           Startup        SUG allocated academic research
PAS1203        Kun Huang                BMI Transfer   SUG allocated academic research
PAS1206        Kimerly Powell           BMI Transfer   SUG allocated academic research
PAS1029        David Liebner            BMI Transfer   SUG allocated academic research
PAS1071        Philip Payne             Startup        SUG allocated academic research
PAS1209        Umit Catalyurek          BMI Transfer   SUG allocated academic research
PAS1265        James Chen               Startup        SUG allocated academic research

Running Jobs

For example, to request 2 condo nodes for 2 hours, use the following command:

qsub -l nodes=2:ppn=28 -l walltime=2:00:00 -A PCON0005

You can also run jobs outside the condo with your academic project code. For example, to request 1 node on the Owens cluster for 2 hours, use the following command, where "PAS????" is replaced with an academic project code you are eligible to charge against:

qsub -l nodes=1:ppn=28 -l walltime=2:00:00 -A PAS????

Owens also allows single-core scheduling. For more information, please refer to the general Owens documentation.
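
Most condo work is submitted as a batch script rather than typed interactively. The following is a minimal sketch of such a script, requesting one condo node for one hour; the job name, the module, and the commands in the body are illustrative placeholders for your own workflow:

#!/bin/bash
#PBS -N bmi_condo_example
#PBS -l nodes=1:ppn=28
#PBS -l walltime=1:00:00
#PBS -A PCON0005

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# Load an installed package (SAMtools appears in the Software list below) and run it
module load samtools
samtools --version

If this is saved as, say, condo_job.pbs, submit it with qsub condo_job.pbs.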

File Systems

BMI's condo uses the same OSC mass storage environment as our other clusters. Therefore, condo users have the same home directory as on the Owens, Ruby, and Oakley clusters. Large amounts of project storage are available on our Project storage service. Full details of the storage environment are available in our storage environment guide.

Where to Find Your Data

We have migrated all data from the Bucki cluster to OSC's storage environment. You can locate your data at /fs/project/PCON0005.

Bucki Path     OSC Path
/home          /fs/project/PCON0005/home
/nas1-data1    /fs/project/PCON0005/nas1-data1
/nas2-data1    /fs/project/PCON0005/nas2-data1
/nas2-data2    /fs/project/PCON0005/nas2-data2
/nas3          /fs/project/PCON0005/nas

Owens Path     OSC Path                           Note
/nas1          /fs/project/PCON0005/owens/nas1    /home directory
/nas2          /fs/project/PCON0005/owens/nas2

Please do not move your data around unnecessarily, and please do not move it to your home directories! The location of this data is the best-performing option available to you.
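
As a quick sanity check, assuming your account has been added to project PCON0005, you can list the migrated directories directly from a login node:

ls /fs/project/PCON0005
ls /fs/project/PCON0005/home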

Software Environment

Users on the condo nodes have access to all software packages installed on the Oakley Cluster or Owens Cluster. By default, you will have the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 loaded. Use module load <package> to add a software package to your environment. Use module list to see which modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.
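
For example, a typical session that searches for, loads, and verifies a package might look like the following (the package name here is illustrative; substitute any module reported by module avail or module spider):

module spider samtools   # search for the package, including versions hidden by dependencies
module load samtools     # add it to your environment
module list              # confirm which modules are now loaded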

You can stay informed about the software packages available on Oakley by viewing the Software by System page and selecting the Oakley system.

You can see what packages are available on Owens by viewing the Software by System page and selecting the Owens system. During the Owens Early Access period, we are documenting software on the Owens Early User Information page.

If you have a software requirement that is not currently available, please contact OSC Help. BMI condo support includes software stack support. Please let us know that you are a BMI user, and we'll work with you on ensuring your needs are met.

Software

You may check the list of previously installed bioinformatics software on the bioinformatics & biology software page.

Software           Installed version
bedtools           2.25.0
bam2fastq          1.1.0
BamTools           2.2.2
bcftools           1.3.1
Bowtie1            1.1.2
BWA                0.7.13
eXpress            1.5.1
FASTX-Toolkit      0.0.14
GATK               3.5
GMAP               5/25/2016
HOMER              4.8
MIRA               4.0.2
miRDeep2           2.0.0.8
MuTect             1.1.4
Picard             2.3.0
RNA-SeQc           1.1.8
SAMtools           1.3.1
SnpEff             4.2
SRA Toolkit        2.6.3
STAR               2.5.2a
STAR-Fusion        0.7.0
Subread            1.5.0-p2
Trimmomatic        0.36
VarScan            2.4.1
VCFtools           0.1.14

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.

Getting Support

Contact OSC Help if you have any other questions, or need other assistance. 
