High Performance Computing Systems

Please follow HPC Notices on Twitter; they are specifically designed to keep OSC clients up to date on system downtimes, outages, maintenance, and software updates.

You can also see a full list of current system notices here.

OSC's High Performance Computing Systems


Oakley, the Ohio Supercomputer Center HP Intel Xeon Cluster

OSC’s newest system, an HP-built, Intel® Xeon® processor-based supercomputer dubbed the Oakley Cluster, features more cores (8,328) on half as many nodes (694) as the center’s former flagship system, the IBM Opteron 1350 Glenn Cluster. The Oakley Cluster can achieve 88 teraflops (tech-speak for performing 88 trillion floating point operations per second) or, with acceleration from 128 NVIDIA® Tesla graphics processing units (GPUs), a total peak performance of just over 154 teraflops.

Photo: OSC Oakley HP Intel Xeon Cluster

Detailed system specifications:

  • 8,328 total cores
    • 12 cores/node and 48 GB of memory/node
  • Intel Xeon X5650 CPUs
  • HP SL390 G7 Nodes
  • 128 NVIDIA Tesla M2070 GPUs
  • 873 GB of local disk space in /tmp
  • QDR InfiniBand interconnect
    • Low latency
    • High throughput
    • High quality-of-service
  • Theoretical system peak performance
    • 88.6 TF
  • GPU acceleration
    • Additional 65.5 TF
  • Total peak performance
    • 154.1 TF
  • Memory Increase
    • Increases memory from 2.5 GB per core to 4.0 GB per core.
  • Storage Expansion
    • Adds 600 TB of DataDirect Networks Lustre storage for a total of nearly two PB of available disk storage.
  • System Efficiency
    • 1.5x the performance of the former flagship system at just 60 percent of that system's power consumption
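The peak figures above are consistent with a simple cores × clock × FLOPs-per-cycle estimate. As a sketch (assuming the X5650's 2.66 GHz base clock and 4 double-precision FLOPs per core per cycle):

```shell
# CPU peak: 8,328 cores x 2.66 GHz x 4 DP FLOPs/cycle ~= 88.6 TF
awk 'BEGIN { printf "CPU peak: %.1f TF\n", 8328 * 2.66e9 * 4 / 1e12 }'

# Adding the 65.5 TF GPU contribution gives the quoted total:
awk 'BEGIN { printf "Total:    %.1f TF\n", 88.6 + 65.5 }'
```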

Using Oakley, the HP Intel Xeon Cluster

Glenn, the Ohio Supercomputer Center IBM Cluster 1350

The Ohio Supercomputer Center's IBM Cluster 1350, named "Glenn", features AMD Opteron multi-core technologies. The system offers a peak performance of more than 54 trillion floating point operations per second and a variety of memory and processor configurations. The current Glenn Phase II components were installed and deployed in 2009, while the earlier phase of Glenn, now decommissioned, was installed and deployed in 2007.

The current hardware configuration consists of the following:

Photo: OSC Glenn IBM 1350 Cluster

  • 658 System x3455 compute nodes
    • Dual socket, quad core 2.5 GHz Opterons
    • 24 GB RAM
    • 393 GB local disk space in /tmp
  • 4 System x3755 login nodes
    • Quad socket, dual core 2.6 GHz Opterons
    • 8 GB RAM
  • Voltaire 20 Gbps PCI Express adapters

There are 36 GPU-capable nodes on Glenn, connected to 18 Quadro Plex S4 units for a total of 72 CUDA-enabled graphics devices. Each node has access to two Quadro FX 5800-level graphics cards.

  • Each Quadro Plex S4 contains:
    • 4 Quadro FX 5800 GPUs
    • 240 cores per GPU
    • 4 GB memory per card
  • The 36 compute nodes in Glenn contain:
    • Dual socket, quad core 2.5 GHz Opterons
    • 24 GB RAM
    • 393 GB of local disk space in /tmp
    • 20 Gb/s InfiniBand ConnectX host channel adapter (HCA)

Using Glenn, the IBM 1350 Opteron Cluster at OSC

Ohio Supercomputer Center Mass Storage Environment

The Mass Storage Environment (MSE) at OSC consists of servers, data storage
subsystems, and networks providing a number of storage services to OSC HPC
systems. The current configuration consists of:

Photo: OSC Mass Storage System


  • Cisco MDS9509 Storage Directors with Fibre Channel and iSCSI blades
  • IBM FastT-900 storage servers
  • Hitachi AMS1000 storage
  • DataDirect Networks 9900 storage
  • Local disk storage on each compute node
  • One IBM 3584 tape robot:
    • Four Fibre Channel links
    • Eight LTO tape drives
    • 400 TB (raw capacity) of LTO tapes
  • The servers for the home directories are 18 x3650s with the following:
    • 16 GB memory
    • 2 Intel Xeon E5440 2.83GHz quad-core CPUs
    • Two 1 Gb/s Ethernet network interfaces
    • 4 Gb/s Fibre Channel host bus adapter


Home Directory Service

The MSE provides common home directories across all OSC HPC systems. Each userid has a quota of 500 GB of storage and 1,000,000 files for its home directory tree.
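As a quick way to see how much of that quota a home directory tree is using, standard Unix commands suffice (no OSC-specific tooling is assumed here):

```shell
# Total size of the home directory tree (counts against the 500 GB quota)
du -sh "$HOME"

# Number of files in the tree (counts against the 1,000,000-file limit)
find "$HOME" -type f | wc -l
```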

For projects that require more than 500 GB storage and/or more than 1,000,000 files, additional storage space is available. Principal Investigators should contact the OSC consultation service about procedures for making additional storage available in the "project" space outside the home directory.

Note: To transfer files to/from OSC systems, use scp or sftp between your system and any of the HPC login nodes. Since home directories and "project" space are shared across all systems, you can use any system to transfer files.
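From a local machine, such a transfer might look like the following; the login-node hostname, username, and paths are placeholders, not actual OSC addresses:

```shell
# Copy a local file into your home directory on the cluster
scp input.dat username@login.example.edu:~/input.dat

# Copy results back from the cluster to the local machine
scp username@login.example.edu:~/results.tar.gz .

# sftp provides an interactive session for browsing and transferring files
sftp username@login.example.edu
```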

The local storage on each compute node is used for scratch space (/tmp) and spool space for stdout and stderr from batch jobs. Some nodes have larger scratch space; see the usage sections for each cluster for additional information.
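A common pattern is to stage data through that node-local scratch space inside a batch job. The sketch below is illustrative only: the directory naming is a convention, not a requirement, and scheduler directives are omitted since they differ by cluster (see each cluster's usage section):

```shell
#!/bin/bash
# Illustrative pattern for using node-local scratch space in /tmp.

SCRATCH=/tmp/${USER:-job}.$$      # per-job scratch directory on local disk
mkdir -p "$SCRATCH"
cd "$SCRATCH"

# Stage input from the shared home directory onto fast local disk, run the
# application against the local copies, then copy results back, e.g.:
#   cp "$HOME/project/input.dat" .
#   ./my_solver input.dat > output.dat    # hypothetical application
#   cp output.dat "$HOME/project/"

cd /
rm -rf "$SCRATCH"                 # always clean up local scratch on exit
```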

Parallel File System Service

The MSE provides a parallel file system for use as high-performance shared scratch space using Lustre. The current capacity of the Lustre subsystem is about 600 TB.

Backup Service

All files in the home directories and in the "project" space are backed up daily. Two copies of files in the home directories are written to tape in the tape library. A single copy of files in the "project" space is written to tape. Files on the local scratch (/tmp) and on the parallel file system are not backed up.

Usage Guidelines

For more detailed information about the available file systems, and some guidance about how to best utilize them, please see our storage environment guide.