Job Viewer

The Job Viewer Tab displays information about individual HPC jobs and includes a search interface that allows jobs to be selected based on a wide range of filters:

1. Click on the Job Viewer tab near the top of the page.

2. Click Search in the top left-hand corner of the page.



Please be aware that the GPU and large memory resources on Owens and Pitzer are very busy, which can lead to long queue wait times. If you do not need these resources for your jobs, using the standard compute nodes will ensure your jobs start sooner and leave the scarcer resources available for jobs that require them. If you are unsure whether you need these resources for your work, please contact us.
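As an illustrative sketch of the advice above (the exact scheduler directives, core counts, and application name here are assumptions, not OSC-confirmed values — check the cluster documentation for the actual limits), a batch script targeting the standard compute nodes differs from a GPU request only in its resource line:

```shell
#!/bin/bash
# Standard compute nodes -- typically the shortest queue waits.
# (ppn value is illustrative; actual core counts vary by cluster.)
#PBS -l nodes=1:ppn=28
#PBS -l walltime=1:00:00

# To request one of the scarcer GPU nodes instead, the resource line
# would add a gpus specifier, for example:
#   #PBS -l nodes=1:ppn=28:gpus=1
# Only use such a request when the job actually needs a GPU.

cd $PBS_O_WORKDIR
./my_simulation   # hypothetical application binary
```

Keeping the GPU specifier out of jobs that do not use a GPU is what shortens your own wait and frees those nodes for others.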

XDMoD Tool

XDMoD Overview

XDMoD, which stands for XD Metrics on Demand, is an NSF-funded open source tool that provides a wide range of metrics pertaining to resource utilization and performance of high-performance computing (HPC) resources, and the impact these resources have in terms of scholarship and research.

How to log in

Visit OSC's XDMoD portal and click 'Sign In' in the upper left corner of the page.


A downtime for all HPC systems is scheduled from 7 a.m. to 5 p.m., Tuesday, Feb. 5, 2019. The downtime will affect the Pitzer, Ruby and Owens clusters, web portals and HPC file servers. Login services and access to storage will not be available during this time. In preparation for the downtime, the batch scheduler will begin holding jobs that cannot be completed before 7 a.m., Feb. 5, 2019.


XFdtd is an electromagnetic simulation solver. It is used to analyze problems in antenna design and placement, biomedical applications and SAR, EMI/EMC, microwave devices, radar and scattering, automotive radar, and more.

Availability and Restrictions


The following versions of XFdtd are available on OSC clusters:
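On OSC clusters, installed software is typically accessed through environment modules. A minimal sketch, assuming the module is named `xfdtd` (the name and availability are assumptions — check the output of `module avail` on the cluster for what is actually installed):

```shell
# List the XFdtd versions installed on the current cluster
# (module name "xfdtd" is an assumption; it may differ)
module avail xfdtd

# Load a version into the environment before running the solver
module load xfdtd
```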

Pitzer Production Deployment December 4

Pitzer, OSC's latest cluster, will be deployed to full production status on Tuesday, December 4. All users will have access to the cluster and will be able to submit jobs. For details on how to modify your jobs to run on Pitzer, and for general information about the new cluster, please see our Cluster Computing pages. If you have any questions, please contact OSC Help.

Services have been restored after switch failure

OSC experienced three separate major switch failures, at about 1:50 am on November 14th, 4:05 am on November 17th, and 5:00 am on November 18th. We restored all services after each outage, and we have completed the update to the NetApp appliance that provides the home directory service to address a separate bug triggered by the outages. We are still working with the network switch vendor on a permanent resolution to the bug that caused these interruptions. We will continue to keep you informed.

Switch failure on Nov 17 2018

At about 4:05 am on November 17th, OSC experienced a major switch failure that disrupted the home directory service and GPFS file systems. Most services were back up around 10 am, but some users may still see stale file handles on GPFS. We are still working on recovering GPFS clients.

Reboot of NetApp as part of an upgrade on November 19

We will reboot the NetApp appliance as part of an upgrade, starting at 9:30 AM on Monday, November 19, 2018, to address a bug triggered by the network switch outage on Nov 14, 2018. Cluster nodes, the OnDemand service, and all file systems will not be impacted by the reboot, and we do not expect any disruption to users' jobs.