COLUMBUS, Ohio, August 22, 2006 — The Ohio Supercomputer Center (OSC) was awarded $520,000 last week from the National Science Foundation (NSF) to help develop faster and smarter storage systems to improve and increase the processing power of high performance computing (HPC).
The three-year grant is part of the High-End Computing University Research Activity (HECURA) program, a national effort that promotes and funds research and education projects involving storage and retrieval of data in large-scale computing systems.
If successful, the research will result in computers that can use data and produce results faster. Consequently, problems that hinge on the use of massive amounts of data, such as energy exploration, environmental modeling, and patient safety, will be easier to solve.
Headed by Dr. Pete Wyckoff, OSC research scientist, the study will develop software tools that delegate more tasks to storage devices, thereby increasing the effective processing capacity of supercomputers. Dennis Dalessandro and Ananth Devulapalli, OSC networking researchers, and several graduate and undergraduate students will also focus on improvements in performance, scalability, functionality and management.
Wyckoff said that continued improvements in modern HPC systems now make it possible to address previously intractable applications. However, observation- and simulation-driven applications require high-throughput input/output (I/O) capabilities, large data storage capacities, and tools for efficiently finding, processing, organizing and moving data.
As the main provider of computational resources to Ohio’s higher education community, OSC has access to a broad set of applications and can draw from a large pool of institutions throughout the state to analyze and characterize storage usage patterns. This approach helps create new ways of thinking about performance, scalability, security, data transport and management, caching, and a host of other issues surrounding the relationship between CPUs and their storage peripherals.
While processing speeds and disk densities in HPC continue to improve, the most fundamental advances come from changing the ways central processing units interact with storage peripherals.
“The next major step is to write specialized software and algorithms that can be used for re-designing, simulating, benchmarking, and measuring performance of newly developed storage devices,” said Wyckoff.
“The problem with storage devices is that they have very little insight into what applications are doing with the data, and hence, cannot make the most effective decisions about how to store and retrieve data. The CPU tells a storage device everything to do at a very low level,” Wyckoff added. “What we’re trying to do with this research is raise storage up to be a first-class member of the computing infrastructure, so that storage devices will be intelligent, and the CPUs can treat them as peers instead of subordinates. This will make the entire system run faster and better.”
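The contrast Wyckoff describes can be sketched in a few lines of code. The example below is a hypothetical illustration, not OSC's actual software: a traditional block device that only answers low-level read commands (so the CPU must pull every block across the link and filter it itself) versus an "intelligent" device that accepts a higher-level query and returns only the matching records.

```python
# Hypothetical sketch of CPU-driven vs. delegated storage access.
# Class names and the query interface are illustrative assumptions.

class BlockDevice:
    """Traditional storage: no insight into what the data means."""
    def __init__(self, records):
        self.blocks = list(records)  # one record per block, for simplicity

    def read_block(self, i):
        # The CPU must issue a low-level command for every block it wants.
        return self.blocks[i]

class IntelligentDevice(BlockDevice):
    """Storage that can run a simple query on behalf of the CPU."""
    def query(self, predicate):
        # The device filters locally; only matching records cross the link.
        return [r for r in self.blocks if predicate(r)]

records = range(100)

# Low-level path: the CPU reads all 100 blocks, then filters them itself.
dev = BlockDevice(records)
host_filtered = [dev.read_block(i) for i in range(100) if dev.read_block(i) % 10 == 0]

# Delegated path: the device filters, so only 10 records are transferred.
smart = IntelligentDevice(records)
device_filtered = smart.query(lambda r: r % 10 == 0)

assert host_filtered == device_filtered  # same answer, far less data moved
```

In the delegated path the storage device acts as a peer that understands a request, rather than a subordinate executing block-by-block commands, which is the shift in the CPU/storage relationship the research aims for.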