Known issues

Unresolved known issues

A known issue with an Unresolved Resolution state is an active problem under investigation; a temporary workaround may be available.

Resolved known issues

A known issue with a Resolved (workaround) Resolution state is an ongoing problem for which a permanent workaround is available; the workaround may involve using different software or hardware.

A known issue with a Resolved Resolution state has been corrected.

Known Issues

Title | Category | Resolution | Description | Posted | Updated
Security Vulnerability for GPFS | filesystem | Resolved

Update: The fix was deployed during the May 19 downtime.

Clients are not able to use mm* commands to manipulate GPFS ACLs on most OSC systems due to a security vulnerability… (A brief illustration of this kind of mm* ACL usage appears after this table.)

Posted 5 years 6 months ago; updated 5 years 5 months ago

Handling full-node MPI warnings with MVAPICH 3.0 | Ascend, Cardinal | Resolved (workaround)

When running a full-node MPI job with MVAPICH 3.0, you may encounter the following warning message:

[][mvp_generate_implicit_cpu_mapping] WARNING: You appear to be running at full…

Posted 1 year 1 week ago; updated 6 months 1 week ago

MVAPICH broken on Ruby | Ruby | Resolved

Update, Monday, February 16: the Ruby MVAPICH2 build is fixed.

Ruby's MVAPICH2 build has been fixed. Please email oschelp@osc.edu with any issues.

…

Posted 10 years 9 months ago; updated 10 years 8 months ago

Application Errors | client portal | Resolved

When beginning a major or discovery-level application for resources at OSC, you are asked for a required justification on the Additional Documents page. However, there is no mechanism for you to…

Posted 6 years 6 months ago; updated 6 years 3 months ago

/fs/ess and OnDemand not accessible | filesystem, OnDemand | Resolved

/fs/ess and OnDemand are currently not accessible. We are working on this.

Sorry for the inconvenience. Please contact OSC Help if you have any questions.

Posted 4 years 3 months ago; updated 4 years 3 months ago

Segmentation fault from openmpi/1.10-hpcx and 2.0-hpcx on Owens | Owens, Software | Resolved

We have found that recent MPI jobs using openmpi/1.10-hpcx and openmpi/2.0-hpcx on Owens may receive a segmentation fault, whether they complete or hang until the job is killed. Some applications might be…

Posted 6 years 3 months ago; updated 6 years 2 months ago

MyOSC budget balance may not be correct | client portal | Resolved

Resolved: Version 3.0.1, which patches this issue, has been deployed. View the changelog for details.

Original post: …

Posted 3 years 8 months ago; updated 3 years 1 month ago

Rolling reboot of all clusters, starting from Wednesday morning, April 19, 2017 | Batch, Maintenance, Owens, Ruby | Resolved

Update, 1:40 PM 4/27/2017: Rolling reboots are completed.

Update, 3:10 PM 4/18/2017: Rolling reboots on Owens have started to address GPFS errors that occurred…

Posted 8 years 6 months ago; updated 8 years 6 months ago

Email Issues | client portal | Resolved

OSU is having ongoing periodic problems with Microsoft (its mail hosting provider) severely delaying outbound email. No solution has been offered and there is no timeline for getting it resolved…

Posted 6 years 3 weeks ago; updated 5 years 9 months ago

Slurm database repair on 01/25/2024 | Outage | Resolved

We have scheduled a Slurm database repair, planned to start at 8:30 am US/Eastern on Thursday, January 25, 2024. During the repair, the Slurm database will be offline; running jobs and…

Posted 1 year 9 months ago; updated 1 year 9 months ago
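
As a side note on the GPFS security-vulnerability entry above, the sketch below is a minimal, hypothetical illustration of the kind of mm* ACL manipulation that was affected. mmgetacl and mmputacl are standard GPFS (IBM Spectrum Scale) ACL utilities, but the target path, the ACL file name, and the Python wrapper are assumptions for illustration only, not taken from OSC documentation.

    # Illustrative sketch: reading and re-applying a GPFS ACL via mm* commands.
    # mmgetacl/mmputacl are standard GPFS utilities; the path and file names
    # below are hypothetical.
    import subprocess

    target = "/fs/ess/PAS1234/shared_dir"  # hypothetical project directory

    # Write the directory's current ACL to a local file.
    subprocess.run(["mmgetacl", "-o", "acl.txt", target], check=True)

    # ...edit acl.txt as needed, then re-apply it...
    subprocess.run(["mmputacl", "-i", "acl.txt", target], check=True)

The entry above indicates that, while the issue was open, clients could not run such mm* commands on most OSC systems.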
