Jobs on Ruby have failed to be scheduled since 22:40 on March 1, 2020. We are investigating the problem. Please contact email@example.com if you have any questions.
OSC OnDemand is currently unresponsive. We are investigating the problem. For more information and updates, see: https://www.osc.edu/resources/technical_support/known_issues/osc_ondemand_is_not_responsive
Maintenance work on the GPFS servers is scheduled for today, 28 Feb 2020, at 2:00 p.m. Although no direct impact to OSC services is expected, there may be brief interruptions to storage services. Please contact OSC Help at firstname.lastname@example.org if you have any questions.
The login GUI presented when using CILogon has been updated with a new look. See https://groups.google.com/a/cilogon.org/forum/#!topic/outages/aTHT9_DGYqk for details.
In March 2020, OSC expanded the existing project and scratch storage filesystems by 8.6 petabytes. Added to OSC's existing storage capacity, this brings the center's total storage capacity to ~14 petabytes.
A downtime for all HPC systems is scheduled from 7 a.m. to 5 p.m. on Tuesday, January 7, 2020. The downtime will affect the Pitzer, Ruby, and Owens clusters, web portals, and HPC file servers. Access to storage and license servers hosted by OSC will not be available during this time, nor will login services, with the exception of my.osc.edu.
OSU's security framework requires that OSC conduct monthly security scans of compute nodes. We began these scans in September 2019 and have noticed some job failures correlated with them. User jobs running on December 10, 11, and 12 may have been interrupted by scheduled scans.
On Wednesday, December 11, from 9:00 to 9:15 a.m., there will be a short interruption of service for OnDemand while we deploy a new version of the portal, including updates to the web apps and infrastructure. **During this interruption, any active shell connections, noVNC connections, and file transfers will be terminated. Afterward, users should be able to reconnect to running VDI and iHPC sessions.** Contact email@example.com if there are any questions.
We are experiencing issues with the GPFS filesystem (both project and scratch) on all clusters, starting around 3:08 p.m. on October 24, 2019. This is also making OSC OnDemand inaccessible. It is not yet clear whether any jobs have failed due to this problem. We are working to resolve the issue and will keep you posted; we apologize for any inconvenience. Please contact firstname.lastname@example.org if you have any questions.