cp2k/2023.2 can produce huge output containing MKL messages
On all clusters, the cp2k executables from the cp2k/2023.2 module can produce huge output files due to many repeated error messages from MKL, e.g.:
MKL module files define some helper environment variables with incorrect paths, which can cause link-time errors. All three clusters are affected, and we are working to correct the module files. For example, on Cardinal the module intel-oneapi-mkl/2023.2.0 defined the environment variable MKL_LIBS_INT64 with an incorrect path. As a workaround, users can redefine the affected variable with the correct path; this requires some familiarity with the build process, so we recommend contacting oschelp@osc.edu for assistance.
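A minimal workaround sketch follows; the corrected value shown is an assumption based on the standard MKL link line, not OSC's official fix, so verify the actual library path (typically under $MKLROOT) for your installation before using it:

module load intel-oneapi-mkl/2023.2.0
# Hypothetical corrected definition; adjust the directory and link flags to match your installation.
export MKL_LIBS_INT64="-L${MKLROOT}/lib/intel64 -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl"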
You may encounter errors similar to the following when running STAR 2.7.10b:
STAR: bgzf.c:158: bgzf_open: Assertion `compressBound(0xff00) < 0x10000' failed.
It seems to be related to this issue: https://github.com/alexdobin/STAR/issues/2063
STAR bundles an older version of HTSlib that is incompatible with zlib-ng, a library we build STAR against.
Use star/2.7.11b instead.
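For example, load the newer module in your job script or interactive session:

module load star/2.7.11b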
OSC will remove the default MATLAB Jupyter kernel on Tuesday, May 20th, 2025. To create your own MATLAB kernel for Jupyter, please follow the documentation on the MATLAB page.
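As a rough sketch only (this assumes MathWorks' jupyter-matlab-proxy package, which is not necessarily the method OSC documents; the steps on the MATLAB page take precedence):

# Assumption: the MathWorks MATLAB Integration for Jupyter provides the MATLAB kernel.
pip install --user jupyter-matlab-proxy
# After installation, a MATLAB kernel option should appear when creating a new notebook.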
A pure MPI application using mpirun or mpiexec with more ranks than the number of NUMA nodes may encounter an error similar to the following:
Cardinal hosted a version of bwa, 0.7.17, that had an unpatched vulnerability. This version has been removed from Cardinal in favor of 0.7.18.
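If your job scripts request bwa by version, update them to the patched release (the exact module name below is an assumption; confirm what is available first):

module avail bwa
module load bwa/0.7.18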
You may encounter the following error while running mpp-dyna jobs with multiple nodes:
You may encounter the following error while running Ansys on Cardinal:
OMP: Error #100: Fatal system error detected.
OMP: System error #22: Invalid argument
forrtl: error (76): Abort trap signal
Set the environment variable KMP_AFFINITY=disabled before running Ansys.
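For example, add the following to your job script before launching Ansys:

export KMP_AFFINITY=disabled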
You may encounter the following error while running an Abaqus parallel job with PMPI:
When running a full-node MPI job with MVAPICH 3.0, you may encounter the following warning message: