Overview of File Systems

This page provides an overview of file systems at OSC. Each file system is configured differently to serve a different purpose:

|                      | Home Directory                        | Project                                         | Local Disk                      | Scratch (global)                | Tape     |
|----------------------|---------------------------------------|-------------------------------------------------|---------------------------------|---------------------------------|----------|
| Path                 | /users/project/userID                 | /fs/project                                     | /tmp                            | /fs/scratch                     | N/A      |
| Environment Variable | $HOME or ~                            | N/A                                             | $TMPDIR                         | $PFSDIR                         | N/A      |
| Space Purpose        | Permanent storage                     | Long-term storage                               | Temporary                       | Temporary                       | Archive  |
| Backed Up?           | Daily                                 | Daily                                           | No                              | No                              | Yes      |
| Flushed              | No                                    | No                                              | End of job when $TMPDIR is used | End of job when $PFSDIR is used | No       |
| Visibility           | Login and compute nodes               | Login and compute nodes                         | Compute node                    | Login and compute nodes         | N/A      |
| Quota/Allocation     | 500 GB of storage and 1,000,000 files | Typically 1-5 TB of storage and 1,000,000 files | Varies by node                  | No quota                        | N/A      |
| Total Size           | 900 TB                                | 3,400 TB                                        | Varies by system                | 1,000 TB                        | 5,500 TB |
| Bandwidth            | 10 GB/s                               | 40-50 GB/s                                      | Varies by system                | 100 GB/s                        | 3.5 GB/s |
| Type                 | NetApp WAFL service                   | GPFS                                            | Varies by system                | GPFS                            | LTO tape |
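The environment variables in the table are typically used from inside a batch job. As a rough sketch (the file names here are illustrative, not part of OSC's documentation), a job might write intermediate output to node-local $TMPDIR, which is fast but flushed at the end of the job, and then copy anything worth keeping back to the permanent, backed-up home directory before the job finishes:

```python
import os
import shutil

# Node-local scratch space; flushed when the job ends.
tmpdir = os.environ.get("TMPDIR", "/tmp")
# Permanent, backed-up home directory ($HOME).
home = os.path.expanduser("~")

# Illustrative file name: write intermediate results to fast local disk.
work_file = os.path.join(tmpdir, "intermediate.dat")
with open(work_file, "w") as f:
    f.write("temporary results produced during the job\n")

# Copy results back to permanent storage before the job completes,
# since $TMPDIR contents are not retained after the job.
shutil.copy(work_file, os.path.join(home, "intermediate.dat"))
```

The same pattern applies to the global scratch file system with $PFSDIR, which is also flushed at the end of the job but is visible from both login and compute nodes.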
Service: