Processing the sounds of a battlefield to evaluate targets

The sounds of war, when accurately captured and processed, shed their cacophonous echoes and leave a trail of unique acoustic fingerprints. Much like sonar detects and classifies sounds underwater, acoustic signal-processing sensors capture sounds in the air.

By using an array of sensors, military personnel can collect the distinct auditory signatures of combat vehicles and use that information to identify and track specific targets. Such a network, however, first requires establishing each sensor's own position, a process called self-localization, which is typically done by triangulating on known reference sounds. Environmental factors such as wind, hills or air temperature can degrade the accuracy of this process.
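The geometry behind triangulation can be illustrated with a minimal sketch. The function below is a hypothetical, simplified 2-D example (not the laboratory's actual algorithm): given two known positions and the bearing angle each one measures toward a sound source, it intersects the two bearing lines to estimate the source's location.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Estimate a source position from two bearings (radians from the +x axis).

    p1, p2 are (x, y) positions of the two reference points. Each bearing
    defines a ray p_i + t_i * (cos(theta_i), sin(theta_i)); we solve the
    2x2 linear system where the two rays intersect.
    """
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve t1*d1 - t2*d2 = p2 - p1 via Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Example: listeners at (0, 0) and (10, 0), source actually at (5, 5).
x, y = triangulate((0.0, 0.0), math.pi / 4, (10.0, 0.0), 3 * math.pi / 4)
```

In the field the measured bearings are noisy, which is why real systems fuse many sensors and model wind and temperature rather than intersecting just two ideal lines.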

Establishing the location requires applying a number of different algorithms to the data, which, as the number of environmental factors increases, takes correspondingly more time to process. Researchers at the Ohio Supercomputer Center, in partnership with the Army Research Laboratory, sped up the analysis phase by refining several parallel processing algorithms. Parallel processing distributes calculations across multiple compute nodes.

“We tested various parallel processing technologies on a sample data set of 63 audio files and found ways to tweak the programs’ codes for better, quicker results,” said Ashok Krishnamurthy, Ph.D., senior director of research at OSC. “The faster researchers can process the sounds in any given area, the faster military leaders can make critical decisions about their course of action.”


Project lead: Ashok Krishnamurthy, Ph.D., OSC

Research title: Object detection, localization & tracking using multiple sensors

Funding source: Army Research Laboratory