Modern high-performance computing systems allow scientists and engineers to tackle grand-challenge problems in numerous fields, such as astrophysics, earthquake analysis, weather prediction, nanoscience modeling and biological computation. To keep pace with these diverse workloads, the fields of computer architecture, interconnection networks and system design are undergoing rapid change. Advances in computing systems come mainly in the form of increased parallelism, through multi-core processors, many-core accelerators and improved communication interfaces.
Dhabaleswar K. (DK) Panda, Ph.D., professor of computer science and engineering at The Ohio State University, leads a research group focused on designing better communication runtimes that take advantage of new network features, so that applications can be developed and implemented using scalable, high-performance MPI (Message Passing Interface) or hybrid MPI programming models.
“Communication runtimes must also evolve to support emerging architecture trends and programming models. For example, it is widely believed that a hybrid programming model is optimal for many scientific computing problems, especially for exascale computing,” said Panda. “Our library, MVAPICH2-X, is the first to provide a unified high-performance runtime that supports both MPI and PGAS programming models. This minimizes the development overheads that have been a substantial deterrent in porting MPI applications to PGAS models. The unified runtime also delivers superior performance compared to using separate MPI and PGAS libraries by optimizing use of both network and memory.”
Panda’s research team uses Ohio Supercomputer Center systems to test new communication algorithms and runtime designs and to carry out application evaluations. For a recent study, a team member reserved about a third of the more than 8,300 cores on OSC’s Oakley cluster for runs of a redesigned version of the HPL (High-Performance Linpack) benchmark, which more accurately measures system performance on mixed CPU/GPU nodes. This work was made possible through close collaboration between Panda’s team and OSC staff members Karen Tomko, Ph.D., and Doug Johnson.
The MVAPICH2 and MVAPICH2-X software packages, developed by his research group, are currently used by more than 2,055 organizations in 70 countries. Panda held the first MVAPICH User Group meeting at OSC in August.
Panda and his students are also designing a high-performance, scalable version of Hadoop for Big Data that exploits remote direct memory access (RDMA) technology, as provided by InfiniBand on modern clusters. The Hadoop-RDMA package is publicly available and includes RDMA-based designs for multiple Hadoop components.
Project Lead: Dhabaleswar Panda, The Ohio State University
Research Title: Research in communication runtimes for emerging HPC systems
Funding Source: National Science Foundation, Department of Energy