In March 2017, OSC unveiled the most powerful system in the history of the Center, the Dell/Intel Xeon Owens Cluster. The name pays tribute to renowned Olympic sprinter, beacon for racial equality and youth advocate James C. “Jesse” Owens.
The Owens Cluster, which increased the center’s total computing capacity by a factor of five, provides clients with a peak performance of 1.6 petaflops, tech-speak for the ability to perform 1.6 quadrillion calculations per second. The system is powered by Dell PowerEdge servers featuring the latest family of Intel® Xeon® processors, and it includes storage components manufactured by DDN and interconnects provided by Mellanox.
In 2016, OSC staff members also tackled the installation of an entirely new storage infrastructure and a renovation of the data center suite. The Center now offers clients nearly 5.5 petabytes of disk storage, as well as new NetApp software and hardware for home directory storage. Engineers also installed Plexiglas containment walls around a couple of our clusters to improve cooling efficiency, laid new raised-floor tiles and built a viewing gallery, among a host of other improvements. At the same time, OSC migrated infrastructure services to new hardware and updated software versions. These components, though not directly accessible to OSC’s users, are fundamental to the operation of the HPC systems.
Additionally, the center provides researchers with more than 115 different software packages, about 15 of which are licensed commercial packages. Researchers can bring their own licensed software, open-source packages or in-house developed applications, as well. Among the most-used software codes this past year were VASP for atomic-scale materials modeling, OpenFOAM for computational fluid dynamics, LAMMPS for molecular dynamics simulation and Python for scientific programming, scripting and data analytics.
High Performance Computing & Storage
In 2016, more than 1,350 researchers across Ohio depended upon several key OSC systems:
- Dell/Intel Xeon Owens Cluster
  - 23,392 compute cores and 160 GPU accelerators provide a total peak performance of 1,600 teraflops of computing power
- HP/Intel Xeon Ruby Cluster
  - 4,800 compute cores and 20 GPU accelerators provide a total peak performance of 144 teraflops of computing power
- HP/Intel Xeon Oakley Cluster
  - 8,304 compute cores and 128 GPU accelerators provide a total peak performance of 154 teraflops of computing power
- DDN, IBM and NetApp Spectrum Scale (GPFS) Mass Storage Environment
  - Contains more than five petabytes of disk storage
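Theoretical peak figures like those above are typically derived as cores × clock speed × floating-point operations per cycle, summed over CPUs and accelerators. The short sketch below shows that arithmetic for the CPU side of a cluster; the 2.4 GHz clock and 16 FLOPs-per-cycle values are illustrative assumptions for this example, not published specifications of the Owens hardware.

```python
# Back-of-envelope peak-FLOPS estimate: cores x clock x FLOPs per cycle.
# The clock speed and FLOPs-per-cycle values used below are illustrative
# assumptions, not actual specifications of any OSC system.

def peak_teraflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak performance in teraflops (1 teraflop = 1e12 FLOPS)."""
    return cores * clock_ghz * 1e9 * flops_per_cycle / 1e12

# Hypothetical example using the Owens CPU core count with an assumed
# 2.4 GHz clock and 16 double-precision FLOPs per cycle per core:
cpu_estimate = peak_teraflops(23_392, 2.4, 16)
print(round(cpu_estimate), "teraflops (CPU portion only)")
```

Under these assumed numbers the CPU cores alone account for several hundred teraflops; the GPU accelerators contribute the remainder of a system-wide peak figure.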