In the near future, the world's fastest supercomputers will incorporate millions of processing elements, a substantial increase in scale over the high-performance computing systems in use at leading research centers today. The rates at which users can access data storage devices, such as hard disks, however, are not growing nearly as quickly.
In fact, the overall ability of file systems to move data in and out of these high-performance computers is not keeping pace with the increases in raw compute power. Even the commercial file systems used on the largest cluster computers, designed to compete in the broader business market, are being stretched to meet the demands of the most powerful systems.
Research scientists at the Ohio Supercomputer Center are part of a team investigating this issue for the U.S. Department of Energy, which owns and operates several of the world's most powerful supercomputers.
“A comprehensive software solution is needed to bridge the gap between processing trends and I/O systems so that leadership-class machines can most efficiently leverage the available storage resources,” the DOE grant proposal states.
The team will create a software package that will operate on the IBM Blue Gene, Cray XT, Roadrunner and Linux cluster platforms and function across a variety of file systems. The package will be released as open-source software and made available online.
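The core idea behind an I/O forwarding layer can be illustrated with a conceptual sketch. The following Python example is not the project's actual software or API; the `Forwarder` class and its methods are hypothetical names used only to show the general pattern: many compute processes hand their file operations to a small number of forwarder nodes, which issue the real storage calls on their behalf, so the file system sees a handful of well-behaved clients rather than millions.

```python
class Forwarder:
    """Hypothetical sketch of an I/O forwarding node: it queues write
    requests from many compute processes and applies them to storage
    in one aggregated pass."""

    def __init__(self):
        self.pending = []  # queued (offset, data) write requests

    def enqueue_write(self, offset, data):
        # A compute process forwards its write here instead of
        # touching the file system directly.
        self.pending.append((offset, data))

    def flush(self, storage):
        # The forwarder issues all writes itself, sorted by offset,
        # so storage sees one sequential client.
        for offset, data in sorted(self.pending):
            storage[offset:offset + len(data)] = data
        self.pending.clear()


# Usage: four simulated "compute processes" each forward one write.
storage = bytearray(16)
fwd = Forwarder()
for rank in range(4):
    fwd.enqueue_write(rank * 4, bytes([rank]) * 4)
fwd.flush(storage)
```

In a real leadership-class machine the forwarders run on dedicated I/O nodes and speak to parallel file systems over the network, but the aggregation principle is the same: fewer clients, larger and better-ordered requests.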
Project leads: Rob Ross, Argonne National Laboratory, & James Nunez, Los Alamos National Laboratory
Research title: Common HEC I/O forwarding scalability layer
Funding source: U.S. Department of Energy