Software

Performance issues with MVAPICH2 on Cardinal

We have observed that several applications built with MVAPICH2, including Quantum ESPRESSO 7.4.1, HDF5, and OpenFOAM, may experience poor performance on Cardinal. We suspect the issue is related to the cluster's newer network devices or drivers. Since MVAPICH2 is no longer supported, we recommend switching to MVAPICH 3.0 or another MPI implementation to maintain performance and stability in your work.
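
If you decide to switch, the change usually amounts to loading a different MPI module and rebuilding your application against it. The sketch below is only an illustration; the module names other than mvapich/3.0, the compiler wrapper, and the build step are assumptions that may differ from your actual setup.

# Swap the retired MVAPICH2 module for MVAPICH 3.0 (module names are assumptions)
module unload mvapich2
module load mvapich/3.0

# Rebuild the application against the new MPI library, then run as usual
mpicc -O2 -o my_app my_app.c    # placeholder build step; use your normal build system
srun ./my_app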

MPI fails with UCX 1.18

After the downtime on August 19, 2025, users may encounter UCX errors such as:

UCX ERROR no active messages transport to <no debug data>: self/memory - Destination is unreachable

when running a multi-node job with intel-oneapi-mpi/2021.10.0, mvapich/3.0, or openmpi/5.0.2.
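
For reference, the error typically appears at MPI startup in a job of roughly this shape. The script below is a generic multi-node sketch, not a reproduction recipe; the node and task counts and the application name are placeholders.

#!/bin/bash
#SBATCH --nodes=2               # multi-node job, as described above
#SBATCH --ntasks-per-node=4     # placeholder task count

# One of the affected MPI modules listed above
module load intel-oneapi-mpi/2021.10.0

srun ./my_mpi_app               # my_mpi_app is a placeholder application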

Poor performance with hybrid MPI+OpenMP jobs and more than 4 MPI tasks on multiple nodes

RELION versions prior to 5 may exhibit suboptimal performance in hybrid MPI+OpenMP jobs when the number of MPI tasks exceeds four across multiple nodes.

Workaround

If possible, limit the number of MPI tasks to four or fewer for best performance. Alternatively, consider upgrading to RELION 5 or later, as newer releases may include optimizations that resolve this performance issue.
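
As a rough sketch under these constraints, a hybrid RELION job could request four MPI tasks and put the remaining cores into OpenMP threads. The node and core counts below are assumptions, and the relion_refine_mpi invocation is abbreviated for illustration.

#!/bin/bash
#SBATCH --nodes=2                 # multiple nodes, as in the scenario above
#SBATCH --ntasks=4                # keep the MPI task count at four or fewer
#SBATCH --cpus-per-task=12        # placeholder: remaining parallelism goes to OpenMP threads

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# --j sets the number of threads per MPI process in RELION; other options are omitted
srun relion_refine_mpi --j $OMP_NUM_THREADS ...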

Docker container runtime error on desktop due to DBUS session

When running a container using the podman or docker command on a desktop system, you may encounter an error like the following:

Error: OCI runtime error: crun: sd-bus call: Process org.freedesktop.systemd1 exited with status 1: Input/output error

A similar issue is discussed in Podman GitHub Issue #13429, where it was concluded that this is not a Podman bug.
