Photo: Image of the Glenn supercomputer

The Ohio Supercomputer Center's IBM Cluster 1350, named "Glenn", features AMD Opteron multi-core technologies. The system offers a peak performance of more than 54 trillion floating point operations per second and a variety of memory and processor configurations. The current Glenn Phase II components were installed and deployed in 2009, while the earlier phase of Glenn – now decommissioned – had been installed and deployed in 2007.

2014/07/12: 222 nodes of Glenn have been removed.
In preparation for the delivery and installation of 240 nodes for Ruby, we have removed 222 nodes of Glenn. This reduction in available compute capacity is required to ensure we stay within power limits at the SOCC and can physically reconfigure the space before the new hardware arrives.


The current hardware configuration consists of the following:

  • 436 System x3455 compute nodes
    • Dual socket, quad core 2.5 GHz Opterons
    • 24 GB RAM
    • 393 GB local disk space in /tmp
  • 2 System x3755 login nodes
    • Quad socket, quad core 2.4 GHz Opterons
    • 64 GB RAM
  • Voltaire 20 Gbps PCI Express adapters

There are 36 GPU-capable nodes on Glenn, connected to 18 Quadro Plex S4's for a total of 72 CUDA-enabled graphics devices. Each node has access to two Quadro FX 5800-level graphics cards.

  • Each Quadro Plex S4 has these specs:
    • 4 Quadro FX 5800 GPUs per unit
    • 240 CUDA cores per GPU
    • 4 GB memory per GPU
  • Each of the 36 GPU-capable compute nodes contains:
    • Dual socket, quad core 2.5 GHz Opterons
    • 24 GB RAM
    • 393 GB local disk space in /tmp
    • 20 Gb/s InfiniBand ConnectX host channel adapter (HCA)

How to Connect

To connect to Glenn, ssh to glenn.osc.edu.
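
For example, from a terminal with an SSH client (where username is a placeholder for your OSC HPC username):

    ssh username@glenn.osc.edu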

Batch Specifics

Refer to the documentation for our batch environment to understand how to use PBS on OSC hardware. Some specifics you will need to know to create well-formed batch scripts are listed below, followed by an example script:

  • All compute nodes on Glenn have 8 cores/processors per node (ppn). Parallel jobs must request ppn=8.
  • If you need more than 24 GB of RAM per node, you will need to run your job on Oakley.
  • GPU jobs must request whole nodes (ppn=8) and are allocated two GPUs each.
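
For illustration, here is a minimal sketch of a PBS script that satisfies the ppn=8 requirement. The job name, walltime, and executable name (my_mpi_program) are placeholders, and the exact MPI launch command depends on the modules you have loaded:

    #!/bin/bash
    #PBS -N example_job
    #PBS -l nodes=2:ppn=8
    #PBS -l walltime=01:00:00
    #PBS -j oe

    # Run from the directory the job was submitted from
    cd $PBS_O_WORKDIR

    # 2 nodes x 8 cores per node = 16 MPI ranks
    mpiexec ./my_mpi_program

The script would be submitted with qsub (e.g., qsub myjob.pbs). A GPU job requests whole nodes in the same way (nodes=1:ppn=8) and, per the note above, is allocated two GPUs; see OSC's GPU documentation for the node property used to target the GPU-capable nodes.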

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.


HPC Changelog

There are no changelog entries for this system.