Known issues

Unresolved known issues

A known issue with an Unresolved Resolution state is an active problem under investigation; a temporary workaround may be available.

Resolved known issues

A known issue with a Resolved (workaround) Resolution state is an ongoing problem for which a permanent workaround is available; the workaround may include using different software or hardware.

A known issue with a Resolved Resolution state has been corrected.

Known Issues

HCOLL-related failures in OpenMPI applications
Category: Cardinal, Software
Resolution: Resolved (workaround)

Several applications using OpenMPI, including HDF5, Boost, Rmpi, ORCA, and CP2K, may fail with errors such as

mca_coll_hcoll_module_enable() coll_hcol: mca_coll_hcoll_save_coll_handlers...

Posted: 5 months 6 days ago; Updated: 5 months 5 days ago
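
The full workaround text is truncated above; as a general sketch only (not necessarily the workaround documented for this issue), the hcoll collective component can be disabled for an Open MPI run so that Open MPI falls back to its built-in collectives:

  export OMPI_MCA_coll_hcoll_enable=0   # turn off the hcoll collective component via its MCA parameter
  mpirun ./my_app                       # my_app is a placeholder for the affected application

  # equivalently, exclude hcoll from the coll framework on the mpirun command line:
  mpirun --mca coll ^hcoll ./my_app

Disabling hcoll can reduce collective performance, so it is best treated as a diagnostic or temporary measure.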
Handling full-node MPI warnings with MVAPICH 3.0
Category: Ascend, Cardinal
Resolution: Resolved (workaround)

When running a full-node MPI job with MVAPICH 3.0, you may encounter the following warning message:

[][mvp_generate_implicit_cpu_mapping] WARNING: You appear to be running at full...

Posted: 4 months 3 weeks ago; Updated: 4 months 3 weeks ago
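
The full workaround text is truncated above; purely as an illustrative sketch, one generic way to sidestep the implicit CPU mapping is to disable MVAPICH's internal affinity handling and let the scheduler bind ranks instead. The MVP_ENABLE_AFFINITY name below is an assumption based on MVAPICH 3.0's MVP_-prefixed runtime parameters (MVAPICH2 calls it MV2_ENABLE_AFFINITY); check the MVAPICH user guide before relying on it:

  export MVP_ENABLE_AFFINITY=0     # assumed MVAPICH 3.0 spelling of the affinity toggle
  srun --cpu-bind=cores ./my_app   # my_app is a placeholder; Slurm binds each rank to a core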
Group membership discrepancies
Category: Account Management, client portal
Resolution: Resolved

Group changes may not always propagate to our HPC systems, even though they appear in the Client Portal (my.osc.edu).

Issue: if you are added to a project that is still in a REQUESTED...

Posted: 5 years 8 months ago; Updated: 5 years 6 months ago
Grafana is not available since Dec 14 Downtime
Resolution: Resolved

Grafana (grafana.osc.edu) has not been available since the Dec 14 downtime. We will fix it soon.

If you have any questions, please contact OSC Help.

Posted: 3 years 3 months ago; Updated: 3 years 3 months ago
GPFS problems with /fs/project and possibly /fs/scratch
Category: filesystem
Resolution: Resolved

There was an issue with GPFS clients that affected /fs/project and possibly /fs/scratch between around 3:30 AM and 8:30 AM on Sunday, September 4th. Some jobs on those clients were also impacted.

...

Posted: 2 years 6 months ago; Updated: 2 years 6 months ago
GPFS problems on Owens
Category: filesystem
Resolution: Resolved

Owens is experiencing a disruption of GPFS availability. At about 4:17 PM today (January 6th), OSC monitoring noticed a problem with mounts of Project on the Owens supercomputer. Jobs may have been...

Posted: 5 years 2 months ago; Updated: 5 years 2 months ago
GPFS hang Issue on 09/08/2016
Category: filesystem
Resolution: Resolved

On Thursday, Sept 8, starting at 19:37, we had a bad interaction, apparently between the backup client and the GPFS servers. This resulted in a GPFS hang that propagated I/O...

Posted: 8 years 6 months ago; Updated: 8 years 6 months ago
GPFS filesystem Problem on Oct 24 2019
Category: filesystem
Resolution: Resolved

Updated at 4:45 PM, Oct 24, 2019

The issue is fixed. GPFS filesystems and OnDemand are back.

Original Post

We are having issues with the GPFS filesystem...

Posted: 5 years 5 months ago; Updated: 5 years 5 months ago
GPFS filesystem Errors on June 4 2019
Category: filesystem
Resolution: Resolved

Update posted on 04 June 2019, 12:27 PM

We fixed the problem with both the project and scratch filesystems, and service has been restored. Please contact ...

Posted: 5 years 9 months ago; Updated: 5 years 9 months ago
GPFS errors on compute nodes
Category: filesystem
Resolution: Resolved

We've seen an increase in transient problems that result in compute nodes losing access to the GPFS file systems for ~5 minutes.

Any jobs running on these nodes accessing files on GPFS may...

Posted: 4 years 3 months ago; Updated: 3 years 3 months ago
