| | Ruby Cluster (2014) | Oakley Cluster (2012) | Glenn Cluster (2009) |
|---|---|---|---|
| Theoretical Peak Performance | 96 TF (CPU) + 28.6 TF (GPU) + 20 TF (Xeon Phi) ≈ 144 TF | 88.6 TF (CPU) + 65.5 TF (GPU) ≈ 154 TF | + 6 TF (GPU) ≈ 40 TF |
| # of Nodes / Sockets / Cores | 240 / 480 / 4,800 | 692 / 1,384 / 8,304 | 426 / 856 / 3,408 |
| Cores per Node | 20 cores/node | 12 cores/node | 8 cores/node |
| Local Disk Space per Node | ~800 GB in /tmp | ~800 GB in /tmp | ~400 GB in /tmp |
| Compute CPU Specs | Intel Xeon E5-2670 v2 (2.5 GHz, 10 cores per processor) | Intel Xeon X5650 (2.67 GHz, 6 cores per processor) | AMD Opteron 2380 (2.5 GHz, 4 cores per processor) |
| Compute Server Specs | 200 HP SL230; 40 HP SL250 (for NVIDIA GPU / Intel Xeon Phi) | HP SL390 G7 | IBM x3455 |
| # / Kind of GPUs / Accelerators | 20 NVIDIA Tesla K40 (1.43 TF peak double precision, 2,880 CUDA cores, 12 GB memory); 20 Intel Xeon Phi 5110P (1.011 TF peak, 60 cores @ 1.053 GHz, 8 GB memory) | 128 NVIDIA M2070 (515 GFLOPS peak double precision, 448 CUDA cores, 6 GB memory) | 18 NVIDIA Quadro Plex 2200 S4, each with 4 Quadro FX 5800 GPUs (240 CUDA cores and 4 GB memory per GPU) |
| # of GPU / Accelerator Nodes | 40 total (20 of each type) | 64 nodes (2 GPUs/node) | 36 nodes (2 GPUs/node) |
| Total Memory | ~16 TB | ~33 TB | ~10 TB |
| Memory per Node / per Core | 64 GB / 3.2 GB | 48 GB / 4 GB | 24 GB / 3 GB |
| Interconnect | FDR/EN InfiniBand (56 Gbps) | QDR InfiniBand (40 Gbps) | DDR InfiniBand (20 Gbps) |
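The CPU peak figures in the table follow from cores × clock × double-precision FLOPs per cycle. As a sketch of that arithmetic, the FLOPs-per-cycle values below are assumptions based on the CPU generations listed (8 for the AVX-capable Xeon E5 v2, 4 for the SSE-era Xeon X5650 and Opteron 2380):

```python
# Theoretical peak (CPU) = cores x clock (GHz) x double-precision FLOPs/cycle.
# FLOPs/cycle values are assumptions inferred from the CPU generations above.
def peak_tflops(cores, ghz, flops_per_cycle):
    return cores * ghz * flops_per_cycle / 1000.0  # GFLOPS -> TFLOPS

ruby   = peak_tflops(4800, 2.5, 8)   # ~96 TF, matching the table
oakley = peak_tflops(8304, 2.67, 4)  # ~88.7 TF
glenn  = peak_tflops(3408, 2.5, 4)   # ~34 TF; plus ~6 TF of GPU gives ~40 TF
```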


This is an exciting time to be a systems engineer at the Ohio Supercomputer Center. The Center is currently running three mid-sized High Performance Computing (HPC) clusters: the just-launched HP/Intel Xeon Phi Ruby Cluster, the HP/Intel Xeon Oakley Cluster and the IBM/AMD Opteron Glenn Cluster, as well as a storage environment with several petabytes of total capacity across a variety of file systems. If that weren’t enough, we’re preparing for the acquisition in 2015 of a new system that will exceed the peak performance of all the existing systems combined. To complement the new system, we will also upgrade and expand OSC’s storage and networking environments.

The deployments in 2015 will represent some of the largest increases in performance in the Center’s history.


There’s obviously a bit of a difference between the hardware we bought five years ago and the hardware we’re buying today: the seven racks of the new Ruby Cluster house 240 nodes and provide a total peak performance of over 140 teraflops, while the remaining section of the Glenn Cluster spreads 426 nodes across 18 racks with a peak performance of about 40 teraflops. We look at Ruby as a transitional system, featuring newer hardware than Oakley and additional capacity, and as a stepping-stone for researchers to prepare for the next large system at OSC.

As for the 2015 system, our target is somewhere in the range of 1 petaflop for peak performance. That’s a big jump; we’re not just doubling or tripling performance, we’re looking at something close to an order of magnitude higher than any individual cluster on the floor at OSC. That’s going to really change the nature of the problems our systems can handle and the amount of throughput they can deliver. Since there is a good chance Glenn will have to be turned off in preparation for the 2015 system, the Ruby Cluster becomes even more important: it will provide additional computational capacity while we physically remove the Glenn Cluster, and before the new system is available. The software environment on Ruby will also be a good stepping-stone to the next system.
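As a rough sanity check on that "order of magnitude" claim, the jump from the table's peak figures to a 1 PF target works out to roughly 6–7x over Ruby or Oakley and about 25x over Glenn:

```python
# Scale of the ~1 PF 2015 target relative to the current clusters,
# using the approximate peak TF figures from the table above.
target_tf = 1000.0  # ~1 petaflop
clusters = {"Ruby": 144.0, "Oakley": 154.0, "Glenn": 40.0}
ratios = {name: target_tf / tf for name, tf in clusters.items()}
# Ruby ~6.9x, Oakley ~6.5x, Glenn 25x -- close to an order of magnitude
# over any single system on the floor
```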


We’ve just completed a migration of data from 14 individual file servers and file systems into a single 1.1 petabyte file system. This architecture prepares us for future growth; we expect to increase this new file system to 4-5 PB of capacity and 40-50 gigabytes per second of throughput to support the new system. Other improvements will be made to our storage environment based on the needs of our user community, including expansions to our backup systems to accommodate the additional storage, new high-performance scratch file systems for working data, and new storage for user home directories.
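As a back-of-the-envelope illustration of what those throughput targets mean, here is how long a full sweep of the file system would take at the quoted aggregate bandwidths (decimal units, ignoring metadata overhead and contention):

```python
# Hours to read an entire file system of a given capacity (PB)
# at a given aggregate bandwidth (GB/s), using decimal units.
def sweep_hours(capacity_pb, bandwidth_gb_s):
    return capacity_pb * 1e6 / bandwidth_gb_s / 3600  # PB -> GB, s -> h

current = sweep_hours(1.1, 40)  # today's 1.1 PB at the 40 GB/s target: ~7.6 h
future  = sweep_hours(5.0, 50)  # upper end of the plan, 5 PB at 50 GB/s: ~28 h
```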


We’re going to have a couple of areas of improvement in our network. First, we’ll replace the core router that OSC uses to connect to the Internet through OARnet and upgrade our OARnet connection to 40 gigabits per second, with a backup router providing a redundant 10 gigabits-per-second connection. We’ll also upgrade our peer connection to OSU to 40 gigabits per second. Both of these upgrades are designed for an eventual move to 100 Gbps connections to match what’s already available on the OARnet backbone. Finally, we’ve just upgraded the InfiniBand spine switches for our high-speed interconnect fabric to allow for additional capacity, not only for the Ruby Cluster but also for the next large system.
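To put those link speeds in perspective, here is a rough transfer-time calculation at line rate (ignoring protocol overhead) for a hypothetical 10 TB dataset moved over the WAN links discussed above:

```python
# Hours to move a dataset of a given size (decimal TB) over a link
# of a given speed (Gbps), assuming full line rate with no overhead.
def transfer_hours(dataset_tb, link_gbps):
    bits = dataset_tb * 1e12 * 8           # TB -> bits
    return bits / (link_gbps * 1e9) / 3600

# 10 Gbps backup link, 40 Gbps OARnet/OSU links, 100 Gbps future backbone speed
for gbps in (10, 40, 100):
    print(f"{gbps:3d} Gbps: {transfer_hours(10, gbps):.2f} h for a 10 TB dataset")
```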


Ruby Cluster: Named for acclaimed actress, author and activist

The name of the Ohio Supercomputer Center’s newest system pays tribute to actress, writer and civil rights activist Ruby Dee. Recent OSC systems have been named after Ohioans known as pioneers in diverse careers: the Glenn Cluster for astronaut and statesman John Glenn; the ARMSTRONG research portal for astronaut Neil Armstrong; the Csuri Advanced GPU environment for computer artist Charles “Chuck” Csuri; and the Oakley Cluster for legendary sharpshooter and social advocate Annie Oakley.

Born in Cleveland, Ohio, Ruby Ann Wallace grew up in Harlem. She graduated in 1945 from Hunter College with degrees in French and Spanish. Dee was married for a short time to Frank Dee Brown, before marrying actor Ossie Davis in 1948 and raising a family of three children together.

Dee debuted on Broadway in the late 1940s and then built a film career spanning several generations. Dee received numerous acting awards, including an Oscar nomination in 2008 for playing Mama Lucas in American Gangster.

Dee and Davis were very active in social and racial equality issues, first within the entertainment industry and later throughout the Civil Rights movement of the 1960s.

In 1998, Dee and Davis celebrated their 50th wedding anniversary with the publication of a Grammy-winning dual autobiography. Dee and Davis remained married until his death in 2005, and Dee died in June of 2014.