Storage Documentation

Home directory

Policy

Please review the OSC Home storage policy on our Policy page.

Usage

Each user ID has a home directory on the NetApp WAFL service. You have the same home directory regardless of what system you’re on, including all login nodes and all compute nodes, so your files are accessible everywhere. Most of your work in the login environment will be done in your home directory.

A user's home directory is located at /users/<primary-project-code>/<username>. The primary project code is determined by the first project a user account is added to; however, this is only a naming convention and does not imply that the project has any rights over the user's home directory.

By default, the permissions on a user's home directory allow only that user to read their files and directories, but this can be changed if needed. The first project also determines a user's primary Linux group, so files and directories created by the user will, by default, have group ownership of that project.
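For example, if you want members of your primary group to be able to browse your home directory, you can adjust the permissions yourself with standard Linux commands. This is a minimal sketch, not an OSC-specific procedure; contact OSC Help if you are unsure what to change.

  # Check the current permissions on your home directory
  ls -ld $HOME
  # Allow members of your primary group to read and traverse your home directory
  chmod g+rx $HOME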

The environment variable $HOME is the absolute path to your home directory. You should use $HOME or ~/ instead of absolute paths to refer to your home directory wherever possible.
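For instance, the following equivalent commands avoid hard-coding the absolute path (the my_data directory is a hypothetical example):

  # Both forms refer to the same location without hard-coding /users/<primary-project-code>/<username>
  cd $HOME/my_data
  cd ~/my_data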

Each user has a quota of 500 GB (gigabytes) of storage and 1,000,000 files. This quota cannot be increased. If you have many small files, you may reach the file limit before you reach the storage limit. In this case we encourage you to tar or zip your files or directories, creating an archive. If you approach your storage limit, you should delete any unneeded files and consider compressing your files using bzip2 or gzip. You can archive/unarchive and compress/uncompress your files inside a batch script, using scratch storage (see the scratch storage quota limits below), so that your files remain conveniently usable. As always, contact OSC Help if you need assistance.
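As a minimal sketch, the following commands bundle a directory of many small files into a single compressed archive and extract it again later; the results_2023 directory name is a hypothetical example.

  # Create a compressed archive of a directory containing many small files
  tar -czf $HOME/results_2023.tar.gz -C $HOME results_2023
  # Verify the archive can be listed before removing the original directory
  tar -tzf $HOME/results_2023.tar.gz > /dev/null && rm -rf $HOME/results_2023
  # Extract the archive later when the files are needed again
  tar -xzf $HOME/results_2023.tar.gz -C $HOME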

Home directories are considered permanent storage. Accounts that have been inactive for 18 months may be archived, but otherwise there is no automatic deletion of files.

All files in the home directories are backed up daily. Two copies of files in the home directories are written to tape in the tape library.

Note: OSC does not back up core dump files. These files are identified by the core.* pattern. Any data stored in files beginning with core. will be mistaken for core dump files and not backed up.

Access to home directories is relatively slow compared to local or parallel file systems. Batch jobs should not perform heavy I/O in the home directory tree because it will slow down your job. Instead, copy your files to fast local storage and run your program there.

Project storage

Policy

Please review the OSC Project storage policy on our Policy page.

How to get project space

For groups that require more than the 500 GB of storage and/or 1,000,000 files available in individual home directories, or that need a durable location for multiple group members to store data, additional 'project' storage space is available. Principal Investigators can log into MyOSC or contact OSC Help to request additional storage on this service, outside the home directory.

Please see the storage request section under creating projects and budgets for details on how to request project storage.

Location

Project directories are created on the Project filesystem. The absolute path to the project directory for project PRJ0123 will be /fs/ess/PRJ0123.

Usage

The quota on the project space is shared by all members of the project. 
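Because the quota is shared, it can be useful to see how much space your own files occupy. A simple, hedged way to check is with the standard du command; the per-user subdirectory layout shown below is an assumption about how your group organizes files, not an OSC requirement.

  # Total usage for the whole project directory (may take a while on large trees)
  du -sh /fs/ess/PRJ0123
  # Usage of a per-user subdirectory, if your group uses that layout
  du -sh /fs/ess/PRJ0123/$USER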

By default, the permissions on a project directory allow read and write access for all members of the group, with editing and deletion restricted to the file owner and the project directory owner (usually the PI, but this can be a person designated by the PI). All files and directories created in the project directory will, by default, have group ownership of the project and can be read by all members of the group.

See managing POSIX ACLs for a guide on setting up permissions for project space.
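As an illustration of what such a setup can look like, the sketch below uses the standard getfacl and setfacl commands to grant an additional user read access to a project subdirectory. The collaborator username and the shared_data path are hypothetical; refer to the managing POSIX ACLs guide for the recommended workflow.

  # Inspect the current ACL on a project subdirectory
  getfacl /fs/ess/PRJ0123/shared_data
  # Grant an additional user read and traverse access to existing files
  setfacl -R -m u:collaborator:rX /fs/ess/PRJ0123/shared_data
  # Add a default ACL so files created in this directory inherit the same access
  setfacl -m d:u:collaborator:rX /fs/ess/PRJ0123/shared_data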

All files in the project directories are backed up daily. Two copies of files in the project directories are written to tape in the tape library.

Note: OSC does not back up core dump files. These files are identified by the core.* pattern. Any data stored in files beginning with core. will be mistaken for core dump files and not backed up.

The recommendations for archiving and compressing files are the same for project directories as for home directories.

Filesystem performance is better than that of home directories, but for certain workloads, scratch space local to the compute nodes will be a better choice.

Billing

As of July 1, 2020, the OSC academic fee structure has been updated to begin billing project storage quotas at OSC. See the academic fee structure FAQ for details.

Local node storage

Each compute node has a local disk used for scratch storage. This space is not shared with any other system or node.

The batch system creates a temporary directory for each job on each node assigned to the job. The absolute path to this directory is in the environment variable $TMPDIR. The directory exists only for the duration of the job; it is automatically deleted by the batch system when the job ends. Temporary directories are not backed up.

$TMPDIR is a large area where users may execute codes that produce large intermediate files. Local storage has the highest performance of any of the file systems because data does not have to be sent across the network and handled by a file server. Typical usage is to copy input files, and possibly executable files, to $TMPDIR at the beginning of the job and copy output files to permanent storage at the end of the job. See the batch processing documentation for more information. This area is also used as spool space for stdout and stderr from batch jobs. If your job requests less than an entire node, you will share this space with other jobs, although each job has its own unique $TMPDIR directory.
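A minimal sketch of this copy-in/copy-out pattern in a Slurm batch script is shown below. The project account, input file, and my_program executable are hypothetical placeholders; adapt them to your own job.

  #!/bin/bash
  #SBATCH --job-name=tmpdir_example
  #SBATCH --account=PAS1234        # hypothetical project account
  #SBATCH --ntasks=1
  #SBATCH --time=01:00:00

  # Stage the input and executable from permanent storage to fast node-local storage
  cp $HOME/my_program $HOME/input.dat $TMPDIR/
  cd $TMPDIR

  # Run against the local copies; heavy I/O stays on the node-local disk
  ./my_program input.dat > output.dat

  # Copy results back to permanent storage before the job ends and $TMPDIR is deleted
  cp output.dat $HOME/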

Please use $TMPDIR and not /tmp on the compute nodes to ensure proper cleanup.

The login nodes have local scratch space in /tmp. This area is not backed up, and the system removes files last accessed more than 24 hours previously.

Scratch storage

Policy

Please review the OSC Scratch storage policy on our Policy page.

Location

OSC provides a parallel file system for use as high-performance, high-capacity, shared temporary space. The scratch service is visible from all OSC HPC systems and all compute nodes at /fs/scratch. It can be used either as batch-managed scratch space or as user-managed temporary space.

Quota

Each user has a quota of 100 TB (terabytes) of storage and 25,000,000 files. 

To store data in excess of the quota on scratch, users may request a temporary quota increase for up to 30 days. Please contact OSC Help in a timely manner and include the following information:

  1. Your OSC HPC username
  2. Additional space needed
  3. Additional number of files needed
  4. Duration: up to 30 days
  5. Detailed justification

Any quota increase request needs approval by OSC managers. We will discuss alternatives if your request can't be fulfilled. 

Creating directories on scratch storage

Users may also create their own directories. This is a good place to store large amounts of temporary data that you need to keep for a modest amount of time. Files that have not been accessed for some period of time may be deleted. This service should be used only for data that you can regenerate or that you have another copy of. It is not backed up.

Users do not have the ability to directly create directories under /fs/scratch. Please create your own directories under /fs/scratch/<project-code>, where <project-code> is the project account (for example, PAS1234). The directory /fs/scratch/<project-code> is owned by root and group <project-code>, with permissions drwxrwx--T.
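For example, a member of project PAS1234 could create a personal working directory under the project's scratch directory as sketched below; the per-user directory name is a convention, not a requirement.

  # Create a personal working directory under your project's scratch space
  mkdir -p /fs/scratch/PAS1234/$USER
  # Confirm the directory exists and check its ownership and permissions
  ls -ld /fs/scratch/PAS1234/$USER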

$PFSDIR and general scratch usage

The scratch service is a high-performance file system that can handle high loads. It should be used by parallel jobs that perform heavy I/O and require a directory that is shared across all nodes. It is also suitable for jobs that require more scratch space than is available locally. Note, however, that local disk access is faster than any shared file system, so local storage should be used whenever possible.

In a batch job, users add the node attribute pfsdir to the request (--gres=pfsdir), which automatically creates a temporary scratch directory for the job. This directory is accessed via the environment variable $PFSDIR and is shared across nodes. It exists only for the duration of the job and is automatically deleted by the batch system when the job ends.
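The sketch below shows a Slurm batch script that requests the pfsdir attribute and uses $PFSDIR as a working directory shared across nodes. The project account, input file, and my_mpi_program executable are hypothetical placeholders; note that the executable stays in the home directory, in line with the guidance that follows.

  #!/bin/bash
  #SBATCH --job-name=pfsdir_example
  #SBATCH --account=PAS1234        # hypothetical project account
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=4
  #SBATCH --time=02:00:00
  #SBATCH --gres=pfsdir            # request a job-specific scratch directory

  # $PFSDIR is visible from every node in the job, unlike the node-local $TMPDIR
  cp $HOME/big_input.dat $PFSDIR/
  cd $PFSDIR

  # Keep the executable in home or project space; use $PFSDIR only for data
  srun $HOME/my_mpi_program big_input.dat > results.dat

  # Copy results back to permanent storage before the job ends and $PFSDIR is deleted
  cp results.dat $HOME/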

You should not store executables on the parallel file system. Keep program executables in your home or project directory or in $TMPDIR.

File Deletion Policy

The scratch service is temporary storage, and it is not backed up. Data stored on this service is not recoverable if it is lost for any reason, including user error or hardware failure. Data that has not been accessed for 90 days or more will be removed from the system every Wednesday. It is a policy violation to use scripts (such as the touch command) to change file access times in order to avoid deletion. Any user found to be violating this policy will be contacted; further violations may result in the HPC account being locked.

If you need an exemption to the deletion policy, please contact OSC Help in a timely manner and include the following information:

  1. Your OSC HPC username
  2. Path of directories/files that need an exemption to file deletion
  3. Duration: from the requested date until MM/DD/YY (the maximum exemption duration is 90 days)
  4. Detailed justification

Any exemption request needs approval by OSC managers. We will discuss alternatives if your request can't be fulfilled. 
