On July 12th, 2016, OSC migrated its old GPFS and Lustre filesystems to the new Project and Scratch services, respectively. In total, 1.22 PB of data was moved; the new capacities are 3.4 PB for Project and 1.1 PB for Scratch. If you store data on these services, there are a few important details to note.
OSCprojects is a command developed at OSC for use on OSC's systems. It displays project information for the account you are currently logged in with.
OSCgetent is a command developed at OSC for use on OSC's systems and is similar to the standard getent command. It lets you view group information.
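For reference, this is how the standard getent command that OSCgetent resembles is typically used to query the group database (the group name here is just an example):

```shell
# Query the system group database with the standard getent tool.
# Output has the form name:password:GID:member-list, e.g. root:x:0:
getent group root
```

OSCgetent follows the same query-by-name idea, but reports OSC-specific group details.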
OSCusage is a command developed at OSC for use on OSC's systems. It allows a user to see information on their project's current RU balance, including which users and jobs incurred which charges.
OSCfinger is a command developed at OSC for use on OSC's systems and is similar to the standard finger command. It displays various pieces of account information.
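As a rough sketch, the commands above might be invoked as follows on an OSC login node. The argument forms and names here are illustrative assumptions, not confirmed syntax; check each command's own help output on the system before relying on them:

```shell
# Illustrative invocations of the OSC account-information commands.
# All names and arguments below are assumptions; output formats vary.

# Show project information for the account you are logged in with:
OSCprojects

# Look up a group, analogous to `getent group <name>`:
OSCgetent group PZS0001    # PZS0001 is a hypothetical project group name

# Summarize your project's RU balance and per-user/per-job charges:
OSCusage

# Show account details for a user, analogous to `finger`:
OSCfinger someuser         # `someuser` is a hypothetical login name
```

These commands exist only on OSC systems, so they will not run elsewhere.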
This page includes a summary of differences to keep in mind when migrating jobs from other clusters to Pitzer.
The Owens and Pitzer clusters have access to the DDN Infinite Memory Engine (IME), a fast data tier between the compute nodes and the /fs/scratch file system. IME is a solid-state disk (SSD) layer that can act as a cache and burst buffer to improve the performance of the scratch file system.
While some jobs will benefit from using IME, others will not. Because the IME workflow is not always intuitive, it is important to understand it well before using the service: files must be explicitly imported, synchronized, and/or purged.
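A minimal sketch of that import/synchronize/purge workflow inside a batch job is shown below. It assumes the DDN `ime-ctl` client tool and a hypothetical IME-backed path under the user's scratch area; the exact flag names and mount points are assumptions that should be checked against the cluster's IME documentation:

```shell
#!/bin/bash
# Hypothetical sketch of an IME-backed job; the path and flags are assumptions.
IMEFILE=/fs/scratch/ime/$USER/input.dat   # hypothetical IME-backed path

# Import (stage) the input file into the IME cache before the run:
ime-ctl --prestage "$IMEFILE"

# ... run the application against the IME-backed path here ...

# Synchronize results written to IME back to the scratch file system:
ime-ctl --sync /fs/scratch/ime/$USER/output.dat

# Purge the cached copies once they are no longer needed:
ime-ctl --purge "$IMEFILE"
```

The key point is that none of these steps happen automatically: skipping the sync step can leave results only in the cache, and skipping the purge step leaves stale data occupying the fast tier.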
Job preemption is generally available through a new QOS: preemptible.
Jobs that request this QOS are eligible to run on reserved condo nodes. However, if other jobs with the appropriate (higher) QOS are waiting on those same resources, the preemptible jobs are killed so that the higher-QOS jobs can start. Preemptible jobs are guaranteed a minimum runtime of 15 minutes before they can be preempted, and they are charged at the same rate as regular jobs.
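Assuming the scheduler is Slurm, requesting this QOS might look like the following job-script fragment. The account name, resource values, and application are placeholders:

```shell
#!/bin/bash
#SBATCH --qos=preemptible      # opt in to running on idle reserved condo nodes
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --account=PZS0001      # hypothetical project account

# After 15 minutes of runtime the job may be killed at any time if a
# condo job requests these nodes, so the application should checkpoint
# its state periodically and be restartable.
srun ./my_app                  # hypothetical application
```

Since preemptible jobs are charged at the normal rate, the trade-off is purely between shorter queue waits (more eligible nodes) and the risk of being killed mid-run.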
Here are the queues available on Pitzer. Please note that your job will be routed to the appropriate queue based on its requested walltime and size.