Ohio Supercomputer Center Hosts Jack Dongarra, Renowned High Performance Computing Expert

COLUMBUS, Ohio (Jan. 17, 2007) — 


Jack Dongarra, an internationally known expert in high performance computing (HPC), spoke on Jan. 11, 2007, as part of a lecture series sponsored by the Ohio Supercomputer Center (OSC). In his talk, “Supercomputers & Clusters & Grids, Oh My!” Dongarra addressed current trends, rapid changes, and some of the biggest challenges facing the HPC world.

Dongarra is a University Distinguished Professor at the University of Tennessee, a Distinguished Research Staff member at Oak Ridge National Laboratory, Director of the Innovative Computing Laboratory, and Director of the Center for Information Technology Research. The event was hosted by OSC’s Statewide Users Group and the Ralph Regula School of Computational Science.

Taking the audience on a trip down memory lane, Dongarra reflected on the 1950s, when supercomputing meant 1,000 floating-point operations per second (FLOPS), or one kiloflop. By the ’60s, supercomputing had reached one million FLOPS, and by the ’70s, the field was driven by raw performance. Massively parallel systems changed things in the ’90s thanks to their better price/performance ratios, according to Dongarra. The ’90s also brought the teraflop range: trillions of operations per second. Fast forward to the early 2000s, and HPC consists largely of commodity-based clusters approaching petaflop-level performance.

“Within the first half of this decade, clusters have become the prevalent architecture for many HPC application areas on all ranges of performance,” said Dongarra. “Today, supercomputing is approaching a petascale level, and I predict it will reach 10 to 15 petaflops in just one to two years.”

Dongarra used the TOP500 list, an esteemed ranking of the most powerful supercomputers in the world, to illustrate how quickly things change in the HPC world. For instance, the combined power of all the machines on the 1994 list is roughly equivalent to the single machine at the bottom of today’s list. And just over a decade later, 1994’s top machine is nowhere to be found on the November 2006 list.

“We used to believe that the power of computers would double every 18 months,” said Dongarra, “but parallel processing has changed that. The slope is increasing faster today, and now we can expect computing power to double about every 14 months, but even those predictions are difficult.”
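The difference between those doubling periods compounds quickly. A back-of-the-envelope comparison (our arithmetic, not a figure from the talk) makes the point:

    # Annual growth factor implied by a given performance-doubling period.
    def annual_growth(doubling_months: float) -> float:
        return 2 ** (12 / doubling_months)

    print(f"18-month doubling: {annual_growth(18):.2f}x per year")  # ~1.59x
    print(f"14-month doubling: {annual_growth(14):.2f}x per year")  # ~1.81x
    # Compounded over a decade, that is roughly 100x versus roughly 380x.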

“Just look at a laptop,” he added, chuckling. “A few years ago, I would have laughed if someone had told me my laptop would have a one gigaflop capacity. At today’s pace, where is it headed?”

The Department of Energy’s (DOE) Lawrence Livermore National Laboratory hosts the number one machine on the TOP500 list: the BlueGene/L System, a joint development of IBM and DOE’s National Nuclear Security Administration, boasting a Linpack performance of 280.6 teraflops (trillions of floating-point operations per second, or Tflop/s).
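For context, the Linpack benchmark behind the TOP500 rating measures how fast a machine solves a dense system of linear equations. The NumPy sketch below is an illustration of the idea only; official runs use the distributed HPL code at vastly larger problem sizes:

    import time
    import numpy as np

    # Time a dense linear solve and convert it to a flop rate, Linpack-style.
    n = 2000                                  # toy size; real runs use far larger n
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)                 # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard Linpack operation count
    print(f"{flops / elapsed / 1e9:.2f} Gflop/s")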

In the number two spot is Sandia National Laboratories’ Cray Red Storm supercomputer, only the second system ever recorded to exceed the 100 Tflop/s mark, at 101.4 Tflop/s. The IBM eServer Blue Gene Solution system, installed at IBM’s Thomas J. Watson Research Center with a Linpack performance of 91.20 Tflop/s, is ranked third. The machines in the list’s top 10 average more than 10,000 processors each.

Dongarra said that vendors such as Cray, IBM, and Sun are working to build a petaflop system by the end of the decade, with a handful of Japanese, Chinese, and French vendors hot on their heels. The Defense Advanced Research Projects Agency’s (DARPA) goal, Dongarra added, is to have a petaflop system by 2009. Oak Ridge National Laboratory hopes to reach petaflop status by the fourth quarter of 2008.

“If every person in the world is doing operations, it would take 42,000 seconds to equal what this machine is doing in one second,” said Dongarra, illustrating the power of the BlueGene/L. “That’s just how much computing power we’re talking about here.”
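The arithmetic behind that comparison checks out if each person performs one operation per second (assuming a world population of roughly 6.7 billion, our estimate for 2007):

    # Back-of-the-envelope check of the 42,000-second figure.
    machine_rate = 280.6e12       # BlueGene/L Linpack performance, flop/s
    world_population = 6.7e9      # rough 2007 estimate (our assumption)
    ops_per_person = 1.0          # one operation per person per second
    seconds = machine_rate / (world_population * ops_per_person)
    print(f"{seconds:,.0f} seconds")  # roughly 42,000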

Other issues Dongarra addressed included computer chip capacity, Grid computing, and even Sony’s PlayStation®3 system, comparing its 32-bit floating point (FP) precision with the 64-bit FP precision used in supercomputing. He predicts that petaflop-class systems will arrive in the next two years, and that they will need heterogeneous hybrid architectures; mixed-precision arithmetic that delivers both speed and full-precision accuracy; new languages; and self-adaptivity in software and algorithms.
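One concrete form the mixed-precision idea can take is iterative refinement: factor the problem in fast 32-bit arithmetic, then recover full 64-bit accuracy with a few inexpensive correction steps. The NumPy sketch below is our illustration of the technique, not code from Dongarra’s group:

    import numpy as np

    def mixed_precision_solve(A, b, iters=5):
        """Solve Ax = b: compute in fast float32, refine to float64 accuracy."""
        A32 = A.astype(np.float32)     # low-precision copy (stands in for a 32-bit LU)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            r = b - A @ x                                   # residual in full precision
            d = np.linalg.solve(A32, r.astype(np.float32))  # cheap low-precision correction
            x += d.astype(np.float64)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 500)) + 500 * np.eye(500)  # well-conditioned test matrix
    b = rng.standard_normal(500)
    x = mixed_precision_solve(A, b)
    print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))     # near float64 machine epsilon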

“Supercomputers play a critical role and bring a sizable economic benefit,” said Dongarra. “In situations like Hurricane Katrina, for instance, modeling simulations done with supercomputers to help map out evacuation plans could have a tremendous economic impact when you consider that it costs $1 million for every mile you have to evacuate in an area.”

Other examples Dongarra gave of areas directly impacted by supercomputing included the oil and automobile industries, defense, airlines, science, medical discovery, and even product design, such as Procter & Gamble’s Pringles potato chips.

But the real challenge facing HPC, he said, is software. Applications and software usually live much longer than the hardware they run on, and improving them is time-consuming and expensive. While most people focus on the expense of hardware, Dongarra said software is a major cost component of modern technologies.

“What we need is a long-term balanced investment in the HPC ecosystem: hardware, software, algorithms, and applications,” added Dongarra.

And as for his laptop, Dongarra speculates it will reach teraflop capacity by 2015.

About OSC
Celebrating 20 years of service, the Ohio Supercomputer Center (OSC) is a catalytic partner of Ohio universities and industries that enables Ohio to compete for international, federal, and state funding, focusing on new research and business opportunities. It provides a reliable high performance computing and high performance networking infrastructure for a diverse statewide/regional community including education, academic research, industry, and state government. OSC promotes and stimulates computational research and education in order to act as a key enabler for the state's aspirations in advanced technology, information systems, and advanced industries. For additional information, visit http://www.osc.edu.
