Glenn

Photo: The Glenn supercomputer

The Ohio Supercomputer Center's IBM Cluster 1350, named "Glenn", features AMD Opteron multi-core technology. The system offers a peak performance of more than 54 trillion floating-point operations per second and a variety of memory and processor configurations. The current Glenn Phase II components were installed and deployed in 2009; the earlier phase of Glenn, now decommissioned, was installed and deployed in 2007.

2014/01/22: The eight Glenn large memory nodes have been removed.
The eight Glenn large memory nodes have been removed, to be reused as upgraded login nodes. All compute nodes on Oakley can match the 4 GB of RAM/core that was available in these nodes; if you need more than 48 GB of RAM in a single node, you can access one of the 8 nodes on Oakley with 192 GB of RAM, or the single node with 1 TB of RAM. For information about how to request those resources, please see http://www.osc.edu/supercomputing/computing/oakley, or contact OSC Help for assistance.

Hardware

The current hardware configuration consists of the following:

  • 658 System x3455 compute nodes
    • Dual socket, quad core 2.5 GHz Opterons
    • 24 GB RAM
    • 393 GB local disk space in /tmp
  • 4 System x3755 login nodes
    • Quad socket, dual core 2.6 GHz Opterons
    • 8 GB RAM
  • Voltaire 20 Gbps PCI Express adapters

There are 36 GPU-capable nodes on Glenn, connected to 18 Quadro Plex S4's for a total of 72 CUDA-enabled graphics devices. Each node has access to two Quadro FX 5800-level graphics cards.

  • Each Quadro Plex S4 contains:
    • 4 Quadro FX 5800 GPUs
    • 240 cores per GPU
    • 4 GB of memory per GPU
  • The 36 compute nodes in Glenn contain:
    • Dual socket, quad core 2.5 GHz Opterons
    • 24 GB RAM
    • 393 GB local disk space in /tmp
    • 20 Gb/s InfiniBand ConnectX host channel adapter (HCA)

How to Connect

To connect to Glenn, ssh to glenn.osc.edu.
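
For example, from a terminal (username is a placeholder for your own OSC username):

    # Connect to a Glenn login node via SSH
    ssh username@glenn.osc.edu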

Batch Specifics

Refer to the documentation for our batch environment to understand how to use PBS on OSC hardware. Some specifics you will need to know to create well-formed batch scripts (a sample script follows this list):

  • All compute nodes on Glenn have 8 cores per node; parallel jobs must request ppn=8 (processors per node).
  • If you need more than 24 GB of RAM per node, you will need to run your job on Oakley.
  • GPU jobs must request whole nodes (ppn=8) and are allocated two GPUs each.
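
As a minimal sketch, the following PBS script requests two whole nodes (ppn=8) for one hour of walltime; the job name and executable (my_program) are placeholder assumptions, and the mpiexec launch assumes an MPI application:

    #PBS -N example_job             # job name (placeholder)
    #PBS -l nodes=2:ppn=8           # two whole nodes; ppn=8 is required for parallel jobs on Glenn
    #PBS -l walltime=01:00:00       # one hour of walltime
    #PBS -j oe                      # combine stdout and stderr into one file

    cd $PBS_O_WORKDIR               # run from the directory the job was submitted from
    mpiexec ./my_program            # assumed MPI executable; replace with your application

Submit the script with qsub; the requested walltime and node count determine which queue the job is routed to (see Queues and Reservations below).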

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.
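
As a brief illustration of the module system mentioned above, the commands below list, load, and report software modules; the module name fftw3 is only an assumed example:

    module avail          # list the software modules available on Glenn
    module load fftw3     # load a module (name assumed for illustration)
    module list           # show the modules currently loaded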

Queues and Reservations

Here are the queues available on Glenn. Please note that you will be routed to the appropriate queue based on your walltime and job size request.

Name        Nodes available                Max walltime   Max job size   Notes
Serial      Available minus reservations   168 hours      1 node
Longserial  Available minus reservations   336 hours      1 node         Restricted access
Parallel    Available minus reservations   96 hours       256 nodes
Dedicated   Entire cluster                 48 hours       965 nodes      Restricted access

"Available minus reservations" means all nodes in the cluster currently operational (this will fluctuate slightly), less the reservations listed below. To access one of the restricted queues, please contact OSC Help. Generally, access will only be granted to these queues if performance of the job cannot be improved, and job size cannot be reduced by splitting or checkpointing the job.

In addition, there are a few standing reservations.

Name   Times               Nodes available   Max walltime   Max job size   Notes
Debug  8 AM-6 PM weekdays  16                1 hour         16 nodes       For small interactive and test jobs
GPU    All times           32                336 hours      32 nodes       See note below

Small jobs from the serial and parallel queues that do not require GPUs will backfill on the GPU reservation.

Occasionally, reservations will be created for specific projects that will not be reflected in these tables.
