Ruby is unavailable for general access. Please see the Request Access section below to request access.

Ruby, named after the Ohio native actress Ruby Dee, is the Ohio Supercomputer Center's newest cluster. An HP-built, Intel® Xeon® processor-based supercomputer, Ruby provides almost the same total computing power (~144 TF) as our former flagship system Oakley on less than half the number of nodes (240 nodes). Ruby also features two distinct sets of hardware accelerators: 20 nodes are outfitted with NVIDIA® Tesla K40 GPUs and another 20 nodes feature Intel® Xeon® Phi coprocessors.


Detailed system specifications:

  • 4800 total cores
    • 20 cores/node  & 64 gigabytes of memory/node
  • Intel Xeon E5-2670 v2 (Ivy Bridge) CPUs
  • HP SL250 Nodes
  • 20 Intel Xeon Phi 5110p coprocessors
  • 20 NVIDIA Tesla K40 GPUs
  • 2 NVIDIA Tesla K20X GPUs 
    • Both installed in a single "debug" queue node
  • 1 TB of local disk space in '/tmp'
  • FDR IB Interconnect
    • Low latency
    • High throughput
    • High quality-of-service.
  • Theoretical system peak performance
    • 96 teraflops
  • NVIDIA GPU performance
    • 28.6 additional teraflops
  • Intel Xeon Phi performance
    • 20 additional teraflops
  • Total peak performance
    • ~144 teraflops

Ruby has one huge memory node.

  • 32 cores (Intel Xeon E5-4640 CPUs)
  • 1 TB of memory
  • 483 GB of local disk space in '/tmp'

Ruby is configured with two login nodes.

  • Intel Xeon E5-2670 (Sandy Bridge) CPUs
  • 16 cores/node & 128 gigabytes of memory/node


To log in to Ruby at OSC, ssh to the following hostname: 

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@<hostname>

From there, you have access to the compilers and other software development tools. You can run programs interactively or through batch requests. See the following sections for details.

File Systems

Ruby accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the Oakley cluster. Full details of the storage environment are available in our storage environment guide.

Software Environment

The module system on Ruby is the same as on the Oakley system. Use module load <package> to add a software package to your environment. Use module list to see which modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider. By default, you will have the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 loaded.
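
As a quick reference, the commands described above can be used as follows ("<package>" is a placeholder for an actual module name):

  module list               # show the modules currently loaded
  module avail              # list the modules available to load
  module spider <package>   # search all modules, including ones hidden by dependencies or conflicts
  module load <package>     # add a software package to your environment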

You can keep up to date on the software packages that have been made available on Ruby by viewing the Software by System page and selecting the Ruby system.

Understanding the Xeon Phi

Guidance on what the Phis are, how they can be utilized, and other general information can be found on our Ruby Phi FAQ.

Compiling for the Xeon Phis

For information on compiling for and running software on our Phi coprocessors, see our Phi Compiling Guide.

Batch Specifics

We have recently updated qsub to provide more information to clients about the job they just submitted, including both informational (NOTE) and ERROR messages. To better understand these messages, please visit the messages from qsub page.

Refer to the documentation for our batch environment to understand how to use PBS on OSC hardware. Some specifics you will need to know to create well-formed batch scripts:

  • Compute nodes on Ruby have 20 cores/processors per node (ppn).  
  • If you need more than 64 GB of RAM per node you may run on Ruby's huge memory node ("hugemem").  This node has four Intel Xeon E5-4640 CPUs (8 cores/CPU) for a total of 32 cores.  The node also has 1TB of RAM.  You can schedule this node by adding the following directive to your batch script: #PBS -l nodes=1:ppn=32 .  This node is only for serial jobs, and can only have one job running on it at a time, so you must request the entire node to be scheduled on it.  In addition, there is a walltime limit of 48 hours for jobs on this node.
  • 20 nodes on Ruby are equipped with a single NVIDIA Tesla K40 GPU.  These nodes can be requested by adding gpus=1 to your nodes request, like so: #PBS -l nodes=1:ppn=20:gpus=1 (see the sketch after this list).
    • By default a GPU is set to the Exclusive Process and Thread compute mode at the beginning of each job.  To request that the GPU be set to the Default compute mode, add default to your nodes request, like so: #PBS -l nodes=1:ppn=20:gpus=1:default .
  • Ruby has 5 debug nodes which are specifically configured for short (< 1 hour) debugging type work.  These nodes have a walltime limit of 1 hour.  These nodes are equipped with E5-2670 v1 CPUs with 16 cores per node. 
    • To schedule a debug node:
      #PBS -l nodes=1:ppn=16 -q debug
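
As an illustration of the GPU request described above, a minimal batch script might look like the following sketch. The program and file names are placeholders; substitute your own GPU-enabled executable.

  #PBS -l walltime=1:00:00
  #PBS -l nodes=1:ppn=20:gpus=1:default
  #PBS -N my_gpu_job
  #PBS -j oe

  # Run from the node-local temporary directory
  cd $TMPDIR
  cp $HOME/science/my_gpu_program .
  ./my_gpu_program > my_results
  cp my_results $HOME/science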

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.



Technical Specifications

The following are technical specifications for Ruby.  We hope these may be of use to the advanced user.

  Ruby System (2014)

  • Number of Nodes: 240 nodes
  • Number of CPU Sockets: 480 (2 sockets/node)
  • Number of CPU Cores: 4800 (20 cores/node)
  • Cores per Node: 20 cores/node
  • Local Disk Space per Node: ~800 GB in /tmp, SATA
  • Compute CPU Specifications: Intel Xeon E5-2670 v2
    • 2.5 GHz
    • 10 cores per processor
  • Computer Server Specifications:
    • 200 HP SL230
    • 40 HP SL250 (for accelerator nodes)
  • Accelerator Specifications:
    • 20 NVIDIA Tesla K40
      • 1.43 TF peak double-precision performance
      • 1 GK110B GPU
      • 2880 CUDA cores
      • 12 GB memory
    • 20 Intel Xeon Phi 5110p
      • 1.011 TF peak performance
      • 60 cores
      • 1.053 GHz
      • 8 GB memory
  • Number of Accelerator Nodes: 40 total
    • 20 Xeon Phi equipped nodes
    • 20 NVIDIA Tesla K40 equipped nodes
  • Total Memory: ~16 TB
  • Memory per Node: 64 GB
  • Memory per Core: 3.2 GB
  • Interconnect: FDR/EN InfiniBand (56 Gbps)
  • Login Specifications: 2 Intel Xeon E5-2670
    • 2.6 GHz
    • 16 cores
    • 132 GB memory
  • Special Nodes: Huge Memory (1)
    • Dell PowerEdge R820 Server
    • 4 Intel Xeon E5-4640 CPUs (2.4 GHz)
    • 32 cores (8 cores/CPU)
    • 1 TB memory



Programming Environment


C, C++ and Fortran are supported on the Ruby cluster. Intel, PGI and GNU compiler suites are available. The Intel development tool chain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below. See also our compilation guide.

Language     Intel                        PGI                     GNU
C            icc -O2 -xHost hello.c       pgcc -fast hello.c      gcc -O2 -march=native hello.c
Fortran 90   ifort -O2 -xHost hello.f90   pgf90 -fast hello.f90   gfortran -O2 -march=native hello.f90

Parallel Programming


The system uses the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed Infiniband interconnect. MPI is a standard library for performing parallel processing using a distributed-memory model. For more information on building your MPI codes, please visit the MPI Library documentation.
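
With the default Intel and mvapich2 modules loaded, MPI codes are normally built with the MPI wrapper compilers. A minimal sketch (file names are placeholders):

  mpicc  -O2 -xHost mpi_hello.c    # C
  mpicxx -O2 -xHost mpi_hello.cpp  # C++
  mpif90 -O2 -xHost mpi_hello.f90  # Fortran 90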

Ruby uses a different version of mpiexec than Oakley. This is necessary because of changes in Torque. All OSC systems use the mpiexec command, but the underlying code on Ruby is mpiexec.hydra while the code on Oakley was developed at OSC. They are largely compatible, but a few differences should be noted.

Caution: There are many variations on mpiexec and mpiexec.hydra. Information found on non-OSC websites may not be applicable to our installation.
Note: Oakley has been updated to use the same mpiexec as Ruby.

The table below shows some commonly used options. Use mpiexec -help for more information.

Oakley (OSC mpiexec)              Ruby (mpiexec.hydra)              Notes
mpiexec                           mpiexec                           Same command on both systems
mpiexec a.out                     mpiexec ./a.out                   Program must be in path on Ruby, not necessary on Oakley
-pernode                          -ppn 1                            One process per node
-npernode procs                   -ppn procs                        procs processes per node
-n totalprocs / -np totalprocs    -n totalprocs / -np totalprocs    At most totalprocs processes (same on both systems)
-comm none                        (omit)                            Omit for simple cases. If using $MPIEXEC_RANK, consider using pbsdsh with $PBS_VNODENUM.
-comm anything_else               (omit)                            Omit. Ignored on Oakley, will fail on Ruby.
(not available)                   -prepend-rank                     Prepend rank to output
-help                             -help                             Get a list of available options

mpiexec will normally spawn one MPI process per CPU core requested in a batch job. The -pernode option is not supported by mpiexec on Ruby; instead, use -ppn 1 as shown in the table above.
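
For example, inside a batch job the following invocations cover the common cases (a.out stands for your own executable):

  mpiexec ./a.out            # one MPI process per core requested by the job
  mpiexec -ppn 1 ./a.out     # one MPI process per node
  mpiexec -n 4 ./a.out       # at most 4 MPI processes in total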


The Intel, PGI and GNU compilers understand the OpenMP set of directives, which give the programmer finer control over parallelization. For more information on building OpenMP codes on OSC systems, please visit the OpenMP documentation.
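
As a sketch, the serial compiler invocations shown earlier become OpenMP builds by adding each compiler's OpenMP flag (hello.c is a placeholder source file):

  icc -O2 -xHost -openmp hello.c           # Intel
  pgcc -fast -mp hello.c                   # PGI
  gcc -O2 -march=native -fopenmp hello.c   # GNU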

GPU and Phi Programming

To request a GPU node on Ruby, use nodes=1:ppn=20:gpus=1. For GPU programming with CUDA, please refer to the CUDA documentation. Also refer to each software package's page to check whether it is GPU-enabled.

To request a Xeon Phi (MIC) node on Ruby, use nodes=1:ppn=20:mics=1. For Phi programming, please refer to the Ruby Phi FAQ and Phi Compiling Guide.
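
As a sketch, the corresponding batch directives are simply the nodes requests described above; each line below shows a separate case, and everything else in the script is unchanged from an ordinary 20-core job:

  #PBS -l nodes=1:ppn=20:gpus=1   # request one GPU node
  #PBS -l nodes=1:ppn=20:mics=1   # request one Xeon Phi (MIC) node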


Executing Programs

Batch Requests

Batch requests are handled by the TORQUE resource manager and Moab Scheduler as on the Oakley system. Use the qsub command to submit a batch request, qstat to view the status of your requests, and qdel to delete unwanted requests. For more information, see the manual pages for each command.
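
For example (the script name is a placeholder):

  qsub my_job.job          # submit the batch script my_job.job
  qstat -u <username>      # check the status of your jobs
  qdel <jobid>             # delete a job that is no longer wanted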

There are some changes for Ruby; they are listed here:

  • Ruby nodes have 20 cores per node, and 64 GB of memory per node. This is less memory per core than on Oakley.
  • Ruby will be allocated on the basis of whole nodes even for jobs using less than 20 cores.
  • The amount of local disk space available on a node is approximately 800 GB.
  • MPI Parallel Programs should be run with mpiexec, as on Oakley, but the underlying program is mpiexec.hydra instead of OSC's mpiexec. Type mpiexec --help for information on the command line options.

Example Serial Job

This particular example uses OpenMP.

  #PBS -l walltime=1:00:00
  #PBS -l nodes=1:ppn=20
  #PBS -N my_job
  #PBS -j oe

  # Build and run in the node-local temporary directory
  cd $TMPDIR
  cp $HOME/science/my_program.f .
  ifort -O2 -openmp my_program.f
  # Use all 20 cores of the node for OpenMP threads
  export OMP_NUM_THREADS=20
  ./a.out > my_results
  cp my_results $HOME/science

Please remember that jobs on Ruby must use a complete node.

Example Parallel Job

    #PBS -l walltime=1:00:00
    #PBS -l nodes=4:ppn=20
    #PBS -N my_job
    #PBS -j oe

    cd $HOME/science
    mpif90 -O3 mpiprogram.f
    cp a.out $TMPDIR
    cd $TMPDIR
    mpiexec ./a.out > my_results
    cp my_results $HOME/science



Queues and Reservations

Here are the queues available on Ruby. Please note that you will be routed to the appropriate queue based on your walltime and job size request.

Name       Nodes available                 Max walltime    Max job size    Notes
           Available minus reservations    168 hours       1 node
           Available minus reservations    96 hours        40 nodes
Hugemem    1                               48 hours        1 node          32 cores with 1 TB RAM
Debug      5                               1 hour          2 nodes         16 cores with 128 GB RAM; for small interactive and test jobs; use "-q debug" to request it

"Available minus reservations" means all nodes in the cluster currently operational (this will fluctuate slightly), less the reservations. To access one of the restricted queues, please contact OSC Help. Generally, access will only be granted to these queues if performance of the job cannot be improved, and job size cannot be reduced by splitting or checkpointing the job.

Occasionally, reservations will be created for specific projects.

Approximately half of the Ruby nodes are part of client condo reservations. Only jobs of short duration are eligible to run on these nodes, and only when they are not in use by the condo clients. As a result, your job(s) may have to wait for eligible resources to become available even though much of the cluster appears to be idle.

Batch Limit Rules

Full Node Charging Policy

On Ruby, we always allocate whole nodes to jobs and charge for the whole node. If a job requests less than a full node (nodes=1:ppn<20), the job execution environment is what is requested (the job only has access to the number of cores specified by the ppn request) with 64 GB of RAM; however, the job will be allocated a whole node and charged for the whole node. A job that requests nodes>1 will be assigned entire nodes with 64 GB/node and charged for the entire nodes regardless of the ppn request. A job that requests the huge-memory node (nodes=1:ppn=32) will be allocated the entire huge-memory node with 1 TB of RAM and charged for the whole node (32 cores worth of RU).

To manage and monitor your memory usage, please refer to Out-of-Memory (OOM) or Excessive Memory Usage.

Queue Default

Please keep in mind that if you submit a job with no node specification, the default is nodes=1:ppn=20, while if you submit a job with no ppn specified, the default is nodes=N:ppn=1.
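
For example, each of the following requests is filled in with the default shown in the comment (these are two separate cases, not one script):

  #PBS -l walltime=1:00:00   # no nodes request: treated as nodes=1:ppn=20
  #PBS -l nodes=2            # no ppn given: treated as nodes=2:ppn=1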

Debug Node

Ruby has 5 debug nodes which are specifically configured for short (< 1 hour) debugging type work. These nodes have a walltime limit of 1 hour. These nodes are equipped with E5-2670 v1 CPUs with 16 cores per node. To schedule a debug node, use nodes=1:ppn=16 -q debug.

GPU and Intel Xeon Phi (MIC) Node

On Ruby, 20 nodes are equipped with NVIDIA Tesla K40 GPUs (one GPU per node). These nodes can be requested by adding gpus=1 to your nodes request (nodes=1:ppn=20:gpus=1). Another 20 nodes are equipped with Intel Xeon Phi (MIC) accelerators (one MIC per node). These nodes can be requested by adding mics=1 to your nodes request (nodes=1:ppn=20:mics=1).

Walltime Limit

Here are the queues available on Ruby:

Max walltime    Max job size    Notes
168 hours       1 node
96 hours        40 nodes
48 hours        1 node          32 cores with 1 TB RAM
1 hour          6 nodes         16 cores with 128 GB RAM

Job Limit

An individual user can have up to 40 concurrently running jobs and/or up to 800 processors/cores in use. All the users in a particular group/project can collectively have up to 80 concurrently running jobs and/or up to 1600 processors/cores in use if the system is busy. The debug queue allows 1 job at a time per user. For condo users, please contact OSC Help for more instructions.



Citation

For more information about citing OSC resources, please see our citation documentation.

To cite Ruby, please use the following Archival Resource Key:

ark:/19495/hpc93fc8

Please adjust this citation to fit the citation style guidelines required.

Ohio Supercomputer Center. 2015. Ruby Supercomputer. Columbus, OH: Ohio Supercomputer Center.

Here is the citation in BibTeX format:

@misc{Ruby2015,
  ark = {ark:/19495/hpc93fc8},
  url = {},
  year = {2015},
  author = {Ohio Supercomputer Center},
  title = {Ruby Supercomputer}
}

And in EndNote format:

%0 Generic
%T Ruby Supercomputer
%A Ohio Supercomputer Center
%R ark:/19495/hpc93fc8
%D 2015


Request Access

Projects that would like to use the Ruby cluster will need to request access. This is because of the particulars of the Ruby environment, which include its size, MICs, GPUs, and scheduling policies.


Access to Ruby is granted on a case-by-case basis because:

  • It is a smaller machine than Oakley, and thus has limited space for users
    • Oakley has 694 nodes, while Ruby only has 240 nodes.
  • Its CPUs are less general, and therefore more consideration is required to get optimal performance
  • Scheduling is done on a per-node basis, and therefore jobs must scale to this level at a bare minimum 
  • Additional consideration is required to get full performance out of its MICs and GPUs

Good Ruby Workload Characteristics

Those interested in using Ruby should check that their work is well suited for it by using the following list.  Ideal workloads will exhibit one or more of the following characteristics:

  • Work scales well to large core counts
    • No single core jobs
    • Scales well past 2 nodes on Oakley
  • Needs access to Ruby specific hardware (MICs or GPUs)
  • Memory bound work
  • Software:
    • Supports MICs or GPUs
    • Takes advantage of:
      • Long vector length
      • Higher core count
      • Improved memory bandwidth

Applying for Access

Those who would like to be considered for Ruby access should send the following in an email to OSC Help:

  • Name
  • Project ID
  • Plan for using Ruby
  • Evidence of workload being well suited for Ruby