Advancing network features of petascale computers
The Message Passing Interface (MPI) is the dominant parallel computing model on supercomputers today, including petascale systems capable of executing one quadrillion operations per second. MPI allows the thousands of nodes in these large clusters to "talk" with one another over high-speed internal networks.
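To give a flavor of how that node-to-node communication looks in practice, the short C program below passes a single integer from one MPI process (rank 0) to another (rank 1). It is a generic illustration of the standard MPI send/receive pattern, not code from any particular OSC project.

    /* Minimal MPI point-to-point example: rank 0 sends an integer to rank 1.
       Build with an MPI compiler wrapper and launch with at least two
       processes, e.g.:  mpicc hello.c && mpiexec -n 2 ./a.out            */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;

        MPI_Init(&argc, &argv);               /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which process am I?       */

        if (rank == 0) {
            value = 42;
            /* send one int to rank 1, message tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive one int from rank 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }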
Humanic leverages OSC cycles, storage to study supercollider data
In a huge tunnel beneath the Franco-Swiss border, the European Organization for Nuclear Research's Large Hadron Collider (LHC) is operating at half of its peak energy goal of 14 TeV (teraelectronvolts). With the assistance of detectors built into the collider, physicists are searching for answers to questions about the birth of the universe, the existence of alternate dimensions and other fundamental questions.
Bonakdarian shows evolutionary computation provides flexibility
A recently developed evolutionary computation approach offers researchers an alternative way to search for models that best explain experimental data from fields such as economics. Esmail Bonakdarian, Ph.D., an assistant professor of computing sciences and mathematics at Franklin University, leveraged Ohio Supercomputer Center resources to test the underlying algorithm.
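As a rough illustration of the idea behind evolutionary search (not Bonakdarian's actual algorithm), the C sketch below evolves a population of bit strings toward a fixed target pattern by copying and mutating the fittest individual each generation; the target, population size and mutation rate are invented for the example.

    /* Illustrative evolutionary-search sketch: evolve 8-bit strings toward
       a target pattern. Real model-selection work would instead score
       candidate models against experimental data.                        */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define POP   20      /* population size   */
    #define GENES 8       /* bits per genome   */
    #define GENS  200     /* generation limit  */

    static const int target[GENES] = {1,0,1,1,0,0,1,0}; /* stand-in "best model" */

    /* fitness = number of genes matching the target */
    static int fitness(const int *g)
    {
        int score = 0;
        for (int i = 0; i < GENES; i++)
            score += (g[i] == target[i]);
        return score;
    }

    int main(void)
    {
        int pop[POP][GENES];
        srand((unsigned)time(NULL));

        /* random initial population */
        for (int p = 0; p < POP; p++)
            for (int i = 0; i < GENES; i++)
                pop[p][i] = rand() % 2;

        for (int gen = 0; gen < GENS; gen++) {
            /* find the fittest individual */
            int best = 0;
            for (int p = 1; p < POP; p++)
                if (fitness(pop[p]) > fitness(pop[best]))
                    best = p;

            if (fitness(pop[best]) == GENES) {
                printf("target matched at generation %d\n", gen);
                break;
            }

            /* next generation: mutated copies of the best individual */
            for (int p = 0; p < POP; p++) {
                if (p == best)
                    continue;
                for (int i = 0; i < GENES; i++) {
                    pop[p][i] = pop[best][i];
                    if (rand() % 10 == 0)       /* ~10% per-gene mutation */
                        pop[p][i] ^= 1;
                }
            }
        }
        return 0;
    }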
Panda team designing library to raise HPC efficiency, performance
Scientific computing is credited with many of the technological breakthroughs of our generation and is used in many fields, ranging from drug discovery and aerospace to weather prediction and seismic analysis. Scientific computation often involves very large amounts of data, and its algorithms must compute results from mathematical models.