Known issues

Unresolved known issues

A known issue with an Unresolved Resolution state is an active problem under investigation; a temporary workaround may be available.

Resolved known issues

A known issue with a Resolved (workaround) Resolution state is an ongoing problem for which a permanent workaround is available; the workaround may involve using different software or hardware.

A known issue with a Resolved Resolution state has been corrected.

Known Issues

Title | Category | Resolution | Description | Posted | Updated
Rolling reboots on owens and pitzer starting 18 Aug 2021
Category: Batch, Connectivity, Maintenance | Resolution: Resolved

We will have rolling reboots of the Owens and Pitzer clusters, including login and compute nodes, starting at 9 a.m. on August 18, 2021. The rolling reboot is for urgent security updates.

The...

Posted: 2 years 1 month ago | Updated: 2 years 2 weeks ago
warning: libhwloc.so.1 may conflict with libhwloc.so.5
Resolution: Resolved

Sometimes when building MPI programs, the following warning appears. It is harmless and can be safely ignored.

ld: warning: libhwloc.so.1, needed by /usr/local/mvapich2/1.7-intel/lib/...

Posted: 8 years 4 months ago | Updated: 7 years 11 months ago
A bug in the trigger that sends automated emails from client portal
Category: client portal | Resolution: Resolved

We deployed a new version of the OSC Client Portal (my.osc.edu) at 3 p.m. Tuesday, July 9th, which introduced a bug in the trigger that sends automated emails to some OSC clients with the subject 'Your...

Posted: 4 years 2 months ago | Updated: 4 years 2 months ago
Possible job failures due to MPI library change on Pitzer after May 20
Category: Software | Resolution: Resolved

There are changes to the MPI libraries on Pitzer after May 20. We will upgrade MOFED from 4.9 to 5.6 and recompile all OpenMPI and MVAPICH2 installations against the newer MOFED version. Users with their own MPI...

Posted: 4 months 3 weeks ago | Updated: 4 months 3 weeks ago
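For users with their own MPI-linked codes, the usual response to a change like the one above is to rebuild against the refreshed modules. The following is only a sketch under assumptions: the module and file names are hypothetical, and the exact modules available on Pitzer should be checked with `module avail` first.

```shell
#!/bin/bash
# Hypothetical rebuild sketch after an MPI/MOFED upgrade.
# Module name and source file are placeholders -- adapt to your own code.
module load mvapich2        # or openmpi, matching your original build

# Recompile so the binary links against the recompiled MPI libraries
mpicc -O2 -o my_app my_app.c

# Confirm which MPI libraries the binary now resolves to
ldd ./my_app | grep -i mpi
```

The `ldd` check at the end is a quick way to confirm the executable picked up the rebuilt MPI libraries rather than a stale copy.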
Rolling reboot of compute and login nodes of all clusters, starting from Wednesday morning, March 22, 2017
Category: login, Owens, Ruby | Resolution: Resolved

4:56 p.m. 3/28/2017 update: The rolling reboots of all systems are completed.

All compute nodes and login nodes of the Owens, Oakley, and Ruby clusters will need to be rebooted...

Posted: 6 years 6 months ago | Updated: 6 years 6 months ago
Problems with the home directories
Category: filesystem | Resolution: Resolved

We are currently seeing problems with the home directories at OSC's HPC facility....

Posted: 3 years 4 months ago | Updated: 3 years 4 months ago
Account changes temporarily suspended
Category: Account Management | Resolution: Resolved

We are still experiencing some account problems related to Thursday's issue. As a result, we have taken my.osc.edu offline and cannot process email changes or password resets, either via self-...

Posted: 9 years 3 months ago | Updated: 9 years 3 months ago
Stale File Handles on GPFS clients
Category: filesystem | Resolution: Resolved

OSC is experiencing some problems with the Project and Scratch filesystems that are resulting in some jobs seeing "stale file handles". We are investigating the problem and will provide updates as...

Posted: 4 years 9 months ago | Updated: 4 years 9 months ago
starccm/15.02.007 with intelmpi after Mar 22, 2022
Resolution: Resolved (workaround)

STAR-CCM+ 15.02.007 and 15.02.007-mixed with intelMPI would fail on multiple-node jobs after the downtime on Mar 22, 2022. Please use openmpi instead. You can find more...

Posted: 1 year 6 months ago | Updated: 1 year 6 months ago
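As a sketch of the workaround above, a batch script can select openmpi instead of intelmpi for STAR-CCM+. The module names, node counts, and the exact `-mpi` flag spelling are assumptions here and vary by site and version; verify them with `module spider starccm` and the STAR-CCM+ documentation before relying on this.

```shell
#!/bin/bash
# Hypothetical batch sketch: run STAR-CCM+ 15.02.007 over OpenMPI rather
# than Intel MPI, per the workaround above. Module names are assumptions.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40

module load starccm/15.02.007   # assumed module name

# Select the OpenMPI fabric instead of Intel MPI; the -mpi option value
# may differ by STAR-CCM+ version -- check your site's documentation.
starccm+ -batch run.sim -np "$SLURM_NTASKS" -mpi openmpi
```

The key change is the `-mpi openmpi` selection (rather than an Intel MPI run), which is what the workaround above calls for on multiple-node jobs.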
Problems with MVAPICH2 Owens, Ruby, Software Resolved

Some MVAPICH2 MPI installations on Oakley, Ruby, and Owens, such as the default module mvapich2/2.2 as well as mvapich2/2.1, appear to have a bug that is triggered by certain programs.  The... Read more

7 years 7 months ago 1 year 4 months ago