Rolling reboot of Owens and Pitzer starting from July 11, 2022
We will have rolling reboots of the Owens and Pitzer clusters, including login and compute nodes, starting at 9 a.m. on Monday, July 11, 2022.
Updated on Feb 25, 2022:
This issue is fixed.
Original Post:
Users may see a missing shared library error with some mvapich2 modules on Pitzer and Owens. The error looks like:
<path_to_executable>: error while loading shared libraries: libim_client.so.0: cannot open shared object file: No such file or directory
We are in the process of rebuilding the affected mvapich2 versions.
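Until the rebuilds are finished, you can check whether the mvapich2 build you have loaded is affected by inspecting the shared-library dependencies of your executable. The following is only an illustrative sketch; the module version shown is an example, not a confirmed affected build.

module load mvapich2/2.3.3                      # example version; see "module avail mvapich2" for installed versions
ldd <path_to_executable> | grep "not found"     # an affected build reports libim_client.so.0 as missing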
OSC will shut down significant portions of the Owens and Pitzer clusters for several hours this afternoon (Thursday, Feb. 10).
You might encounter an error while pulling a large Docker image:
ERROR: toomanyrequests: Too Many Requests.
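This error comes from Docker Hub's pull rate limit. A minimal workaround sketch, assuming you are pulling directly with Docker and have a Docker Hub account (the image name is only an example): authenticate first so the higher authenticated rate limit applies, then retry the pull.

docker login                  # enter your Docker Hub credentials
docker pull ubuntu:20.04      # retry; authenticated pulls have a higher rate limit

If you pull Docker images through another container runtime on the clusters, the same rate limit applies, but the authentication step differs.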
We found that mpiexec/mpirun from OpenMPI cannot be used in an interactive session (launched by sinteractive) after upgrading Pitzer and Owens to Slurm 20.11.4. Please use only srun when running OpenMPI in an interactive session.
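For example, an OpenMPI program can be launched inside an interactive session with srun; the session options, module version, task count, and program name below are placeholders rather than specific recommendations.

sinteractive -n 4 -t 00:30:00     # request an interactive session (options shown are illustrative)
module load openmpi               # load an OpenMPI module
srun -n 4 ./my_mpi_program        # launch with srun; mpiexec/mpirun will not start ranks here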
Updated on Feb 25:
The StarCCM license has been restored.
Original post:
OSC's starccm software license will expire at 12 a.m., Sunday, Feb 21, 2021, making the software unavailable until the license is renewed.
Updated on March 2:
This is completed.
Original Post:
We will have rolling reboots of the Owens cluster, including login and compute nodes, starting at 9 a.m. on Feb 18, 2021. The rolling reboot is to apply urgent BIOS security updates. The rolling reboots won't affect any running jobs, but users may experience longer queue wait times than usual on the cluster. Users should also expect a ~10-minute outage of the login nodes while they are rebooted.
A partial-node MPI job may fail to start when using mpiexec from intelmpi/2019.3 and intelmpi/2019.7, with error messages like
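Until this is resolved, one possible workaround, assuming your application can be started by Slurm's native launcher, is to launch the partial-node job with srun instead of mpiexec. The batch script below is only a sketch; the task count, walltime, module version, and program name are illustrative, and this is not presented as an official fix.

#!/bin/bash
#SBATCH --ntasks=14               # fewer tasks than a full node, i.e. a partial-node job
#SBATCH --time=00:30:00
module load intelmpi/2019.7
srun ./my_mpi_program             # launch with srun rather than mpiexec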
OSC is currently experiencing problems with its internal network. Interactive sessions may be slow or unresponsive, but running jobs should not be affected.
Users may encounter MPI job failures with openmpi/3.1.0-hpcx on Owens and Pitzer. The job stops with an error like "There are not enough slots available in the system to satisfy the slots". Please switch to openmpi/3.1.4-hpcx. The buggy version openmpi/3.1.0-hpcx will be removed on August 18, 2020.
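Switching is a one-line module change. A minimal sketch, assuming both versions are still installed and the module system accepts a swap:

module swap openmpi/3.1.0-hpcx openmpi/3.1.4-hpcx   # replace the buggy build with the fixed one
module list                                         # confirm openmpi/3.1.4-hpcx is loaded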
==========
Resolved: We removed openmpi/3.1.0-hpcx on August 18, 2020.