Software

Performance issues with MVAPICH2 on Cardinal

We have observed that several applications built with MVAPICH2, including Quantum ESPRESSO 7.4.1, HDF5, and OpenFOAM, can perform poorly on Cardinal. We suspect the issue is related to Cardinal's newer network devices or drivers. Because MVAPICH2 is no longer supported, we recommend switching to MVAPICH 3.0 or another MPI implementation to ensure continued performance and stability.
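
If you want to confirm whether the MPI stack itself is the source of a slowdown, a small point-to-point ping-pong test is a quick check. The sketch below is a generic MPI C program, not an OSC-provided tool; build it once against your MVAPICH2 installation and once against MVAPICH 3.0 (or another implementation), run it with two ranks placed on different nodes, and compare the reported latency.

```c
/* pingpong.c - minimal MPI ping-pong latency check (illustrative sketch).
 * Build with the MPI stack under test, e.g. "mpicc pingpong.c -o pingpong",
 * and run with two ranks on different nodes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 1000;
    char byte = 0;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        /* Rank 0 and rank 1 exchange a single byte each iteration. */
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round-trip latency: %.3f us\n", (t1 - t0) / iters * 1e6);

    MPI_Finalize();
    return 0;
}
```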

MPI fails with UCX 1.18

After the downtime on August 19, 2025, users may encounter UCX errors such as:

UCX ERROR no active messages transport to <no debug data>: self/memory - Destination is unreachable

when running a multi-node job with intel-oneapi-mpi/2021.10.0, mvapich/3.0, or openmpi/5.0.2.
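
This failure usually appears as soon as ranks on different nodes try to communicate. The sketch below is a generic MPI C reproducer, not a site-specific tool: it forces inter-node traffic with a barrier and an allreduce, so if the UCX transport problem is present the job aborts on the first collective rather than deep inside application code.

```c
/* ucx_check.c - minimal multi-node MPI smoke test (illustrative sketch).
 * Build with the MPI module in use, e.g. "mpicc ucx_check.c -o ucx_check",
 * and launch with at least one rank per node. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size, namelen;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &namelen);

    /* Force inter-node communication: a broken UCX transport will fail
     * here instead of inside the application. */
    MPI_Barrier(MPI_COMM_WORLD);
    int sum = 0;
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d of %d on %s: allreduce sum = %d\n", rank, size, host, sum);

    MPI_Finalize();
    return 0;
}
```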