Benchmark study examines results of leading high-level languages

In the science and engineering community, three high-level programming languages, MATLAB, Mathematica and Python, are among the most popular. These languages let researchers focus on solving problems by abstracting away the low-level, yet necessary, coding that computers require. Each language also has extensions that let users access remote high-performance computing systems without sacrificing their desktop environment and the productivity that comes with it.

But is any one better than the others? Ohio Supercomputer Center researchers are currently evaluating each language against four HPC Challenge benchmarks: STREAM, FFT, HPL (the Linpack benchmark that underpins the Top500 list) and RandomAccess.
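To give a flavor of what these benchmarks exercise, STREAM measures sustained memory bandwidth using simple vector kernels. The sketch below, written in Python with NumPy, is our own illustration of STREAM's "triad" kernel, not OSC's actual test code; the array size and timing approach are assumptions for demonstration.

```python
import time
import numpy as np

def stream_triad(n=10_000_000, scalar=3.0):
    """Time the STREAM 'triad' kernel, a[i] = b[i] + scalar * c[i],
    and return sustained memory bandwidth in GB/s (illustrative only)."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    start = time.perf_counter()
    a = b + scalar * c          # the triad kernel itself
    elapsed = time.perf_counter() - start
    # Triad touches three arrays of 8-byte doubles: read b, read c, write a.
    gbytes_moved = 3 * n * 8 / 1e9
    return gbytes_moved / elapsed

print(f"Triad bandwidth: {stream_triad():.2f} GB/s")
```

A vectorized one-liner like `b + scalar * c` is exactly the kind of concise expression that makes high-level languages attractive, which is why the study weighs code complexity alongside raw performance.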

“By testing the benchmarks on an OSC research cluster, we can control the configuration and system load to ensure as objective a comparison as possible. We also examine code complexity and solution time,” said Alan Chalker, Ph.D., program director of computational science engineering research applications at OSC. “We’ll then conduct a sampling of these test runs on Department of Defense Major Shared Resource Center supercomputers to validate the OSC-based results.”

The computation underlying each benchmark provides a distinct test of whether a particular language offers advantages in performance, memory use or code complexity. These results ultimately benefit the work of researchers in all branches of the military, as many use high-level languages.
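The benchmarks stress very different machine behaviors. Where STREAM rewards contiguous memory traffic, RandomAccess measures the rate of updates to random memory locations (reported as giga-updates per second, GUPS). The following single-node Python/NumPy sketch is our own rough illustration of that access pattern, not the official HPC Challenge kernel; the table size, update count and use of XOR updates are assumptions for demonstration.

```python
import time
import numpy as np

def random_access_gups(table_bits=20, n_updates=1_000_000, seed=0):
    """Approximate the RandomAccess access pattern: XOR-update
    pseudo-random table entries and return giga-updates per second."""
    size = 1 << table_bits
    table = np.arange(size, dtype=np.uint64)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, size, n_updates)                      # random locations
    vals = rng.integers(0, 2**63, n_updates, dtype=np.uint64)   # random update values
    start = time.perf_counter()
    np.bitwise_xor.at(table, idx, vals)  # scattered, unbuffered read-modify-write
    elapsed = time.perf_counter() - start
    return n_updates / elapsed / 1e9

print(f"RandomAccess rate: {random_access_gups():.4f} GUPS")
```

Because the updates defeat caching and prefetching, this kernel typically runs orders of magnitude slower per element than STREAM, which is precisely the contrast the benchmark suite is designed to expose.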
Project lead: Alan Chalker, Ph.D., OSC

Research title: Benchmarking of parallel high-level languages

Funding source: Department of Defense High Performance Computing Modernization Program