OSC clients are currently unable to log in to globus.org with OSC credentials. We are working to resolve the issue.
Please contact oschelp@osc.edu if you have further questions.
Ansys 2022R1 is available on Owens
Date:
Thursday, June 2, 2022 - 10:45am
System(s):
Owens
Ansys 2022R1 is available on Owens. To use it, load the module with "module load ansys/2022R1".
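As a minimal sketch of how the module might be used in a batch job (the project code PAS1234, resource requests, journal file run.jou, and the Fluent launch line are placeholders, not prescribed by this announcement; adapt them to your own Ansys product and workflow):

#!/bin/bash
#SBATCH --job-name=ansys_example         # hypothetical job name
#SBATCH --nodes=1 --ntasks-per-node=28   # one Owens node (28 cores)
#SBATCH --time=01:00:00
#SBATCH --account=PAS1234                # placeholder project code

module load ansys/2022R1
# Placeholder batch-mode Fluent run; run.jou is a hypothetical journal file
fluent 3ddp -g -t ${SLURM_NTASKS} -i run.jou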
LAMMPS Stable release 29 September 2021 with Update 3 (24 March 2022) is available
LAMMPS stable version 29Sep2021.3 has been installed on Pitzer (Intel with MVAPICH2 and GNU with OpenMPI) and on Owens (Intel with MVAPICH2). These are GPU-enabled installations. Usage is via the module lammps/29Sep2021.3. For details on available packages, example batch scripts, and help loading an installation, use the command: "module spider lammps/29Sep2021.3".
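As a sketch of a batch job using this module (the resource requests, project code, and input file are placeholders; the executable name, shown here as lmp, may differ by installation, so check "module spider lammps/29Sep2021.3" for the specifics of each build):

#!/bin/bash
#SBATCH --job-name=lammps_example        # hypothetical job name
#SBATCH --nodes=2 --ntasks-per-node=28   # two Owens nodes; adjust cores per node for Pitzer
#SBATCH --time=02:00:00
#SBATCH --account=PAS1234                # placeholder project code

module load lammps/29Sep2021.3
# Launch the MPI build of LAMMPS; in.melt is a placeholder input script
srun lmp -in in.melt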
OnDemand outage
OSC is currently troubleshooting issues with ondemand.osc.edu.
ondemand.osc.edu will be non-functional until we resolve the issues.
Contact oschelp@osc.edu with questions.
StarCCM 18.02.010 is available on Owens
Date:
Monday, April 24, 2023 - 6:00pm
System(s):
Owens
StarCCM 18.02.010 is available on Owens
Abaqus 2022 is available on Owens
Date:
Wednesday, March 16, 2022 - 12:30pm
System(s):
Owens
Abaqus 2022 is installed and now available for use by HPC users. Please visit the software page to learn more.
System Downtime Mar 22, 2022
A downtime for all OSC HPC systems is scheduled from 7 a.m. to 9 p.m., Tuesday, March 22, 2022. The downtime will affect the Pitzer and Owens Clusters, web portals, and HPC file servers. Client Portal (my.osc.edu) and state-wide licenses will be available during the downtime.
In preparation for the downtime, the batch scheduler will begin holding jobs that cannot be completed before 7 a.m., March 22, 2022.
Emergency maintenance in data center Feb 10 2022
OSC will shut down significant portions of the Owens and Pitzer clusters for several hours on Thursday, Feb. 10 to conduct emergency maintenance work on power systems in the data center. Jobs running on impacted nodes will be terminated when this work begins. OSC will issue refunds as needed for any impacted jobs. Additionally, due to reduced capacity on both clusters, queued jobs will experience delays until the maintenance is complete and the scheduler has a chance to clear out any backlog. We apologize for the inconvenience.