Storage Environment at OSC

OSC has over five petabytes (PB) of disk storage capacity distributed over several file systems, plus more than 5.5 PB of backup tape storage. (A petabyte is 10^15, or a quadrillion, bytes.) This guide describes the various storage environments, their characteristics, and their uses.


Storage Hardware

The storage at OSC consists of servers, data storage subsystems, and networks providing a number of storage services to OSC HPC systems. The current configuration consists of:

  • A NetApp WAFL service providing 900 TB of storage and 10 GB/s bandwidth for home directories
  • An IBM Elastic Storage service providing 3,400 TB of storage and 40 to 50 GB/s bandwidth for project storage
  • An IBM Elastic Storage service providing 1,000 TB of storage and 100 GB/s bandwidth for global scratch
  • Local disk storage on each compute node
  • One IBM 3584 tape robot:
    • 16 LTO tape drives
    • 5,500 TB (raw capacity) of LTO tapes

 


File System Usage

OSC has several different file systems where you can create files and directories. The characteristics of those systems and the policies associated with them determine their suitability for any particular purpose. This section describes the characteristics and policies that you should take into consideration in selecting a file system to use.

The various file systems are described in subsequent sections.

Visibility

Most of our file systems are shared. Directories and files on the shared file systems are accessible from all OSC HPC systems. By contrast, local storage is visible only on the node it is located on. Each compute node has a local disk with scratch file space.

Permanence

Some of our storage environments are intended for long-term storage; files are never deleted by the system or OSC staff. Some are intended as scratch space, with files deleted as soon as the associated job exits. Others fall somewhere in between, with expected data lifetimes of a few months to a couple of years.

Backup policies

Some of the file systems are backed up to tape; some are considered temporary storage and are not backed up. Backup schedules differ for different systems.

In no case do we make an absolute guarantee about our ability to recover data. Please read the official OSC data management policies for details. That said, we have never lost backed-up data and have rarely had an accidental loss of non-backed-up data.

Size/Quota

The permanent (backed-up) file systems all have quotas limiting the amount of file space and the number of files that each user or group can use. Your usage and quota information are displayed every time you log in to one of our HPC systems. You can also check them using the quota command. We encourage you to pay attention to these numbers because your file operations, and probably your compute jobs, will fail if you exceed them.
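
For example, you can run the quota command from any login node; the -s option, where supported, reports sizes in human-readable units:

    # Show current disk usage, file counts, and limits for your account
    quota -s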

Scratch space on local disks doesn’t have a quota, but it is limited in size. If you have extremely large files, you will have to pay attention to the amount of local file space available on different compute nodes.

Performance

File systems have different performance characteristics including read/write speeds and behavior under heavy load. Performance matters a lot if you have I/O-intensive jobs. Choosing the right file system can have a significant impact on the speed and efficiency of your computations. You should never do heavy I/O in your home or project directories, for example.


Available File Systems

Home Directory Service

Each user ID has a home directory on the NetApp WAFL service. You have the same home directory regardless of what system you’re on, including all login nodes and all compute nodes, so your files are accessible everywhere. Most of your work in the login environment will be done in your home directory.

OSC currently has a high-performance NetApp appliance to provide home directories. The absolute path to the home directory for user ID usr1234 will have the form /users/project/usr1234, where project is the default project for the account (for example, PAS1234). The environment variable $HOME is the absolute path to your home directory. You should use $HOME or ~/ instead of absolute paths to refer to your home directory wherever possible.
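
For example (the data subdirectory is just a placeholder):

    # Both forms refer to your home directory without hard-coding the absolute path
    cd $HOME/data
    cd ~/data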

The default permissions on home directories for academic projects allow anyone with an OSC HPC account to read your files, although only you have write permission. You can change the permissions if you want to restrict access. Home directories for accounts on commercial projects are slightly more restrictive, and only allow the owning account and the project group to see the files by default.
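
For example, to restrict access so that only your own account can read, write, and enter your home directory (adjust the mode if you want to keep group access):

    # Remove group and world access from your home directory
    chmod 700 $HOME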

Each user has a quota of 500 gigabytes (GB) of storage and 1,000,000 files. This quota cannot be increased. If you have many small files, you may reach the file limit before you reach the storage limit. In this case we encourage you to tar or zip your files or directories, creating an archive. If you approach your storage limit, you should delete any unneeded files and consider compressing your files using bzip2 or gzip. You can archive/unarchive/compress/uncompress your files inside a batch script, using scratch storage that is not subject to quotas, so your files are still conveniently usable. As always, contact OSC Help if you need assistance.
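
As an illustration, the commands below pack a directory of many small files into a single compressed archive and later extract it; the directory and archive names are placeholders:

    # Combine a directory of many small files into one compressed archive
    tar -czf old_results.tar.gz old_results/

    # After verifying the archive, the original directory can be removed to free quota
    rm -r old_results/

    # Extract the archive later, when the files are needed again
    tar -xzf old_results.tar.gz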

Home directories are considered permanent storage. Accounts that have been inactive for 18 months may be archived, but otherwise there is no automatic deletion of files.

All files in the home directories are backed up daily. Two copies of files in the home directories are written to tape in the tape library.

Access to home directories is relatively slow compared to local or parallel file systems. Batch jobs should not perform heavy I/O in the home directory tree because it will slow down your job. Instead you should copy your files to fast local storage and run your program there.

Project Storage Service

For projects that require more than 500GB storage and/or more than 1,000,000 files, additional storage space is available. Principal Investigators should contact OSC Help to request additional storage on this service, outside the home directory. Allocations of one to five terabytes are typical. Small allocations can be granted by OSC staff; for large allocations you will have to submit a proposal to the Statewide Users’ Group (SUG).

Project directories are created on the Project filesystem. The absolute path to the project directory for project PRJ0123 will have the following form: /fs/project/PRJ0123.

Default permissions on a project directory allow read and write access by all members of the group, with deletion restricted to the file owner. (OSC projects correspond to Linux groups.)

The quota on the project space is shared by all members of the project and corresponds to the allocation that was granted.  It is typically 1-5TB with a limit of 1,000,000 files.

Project space is allocated for a specific period of time, usually one to three years. At the end of that time you may apply for an extension.

All files in the project directories are backed up daily, with a single copy written to tape.

The recommendations for archiving and compressing files are the same for project directories as for home directories.

 

Performance of the project filesystem is better than that of the home directories, but for certain workloads, scratch space local to the compute nodes will be a better choice.

Local Disk

Each compute node has a local disk used for scratch storage. This space is not shared with any other system or node.

The batch system creates a temporary directory for each job on each node assigned to the job. The absolute path to this directory is in the environment variable $TMPDIR. The directory exists only for the duration of the job; it is automatically deleted by the batch system when the job ends. Temporary directories are not backed up.

$TMPDIR is a large area where users may execute codes that produce large intermediate files. Local storage has the highest performance of any of the file systems because data does not have to be sent across the network and handled by a file server. Typical usage is to copy input files, and possibly executable files, to $TMPDIR at the beginning of the job and copy output files to permanent storage at the end of the job. See the batch processing documentation for more information.
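
A minimal sketch of that pattern inside a batch script might look like the following; the program and file names are placeholders, and the copy commands should be adapted to your own job:

    # Copy the executable and input files to fast node-local storage
    cp $HOME/myjob/my_program $HOME/myjob/input.dat $TMPDIR/
    cd $TMPDIR

    # Run against the local copies
    ./my_program input.dat > output.dat

    # Copy results back to permanent storage before the job ends and $TMPDIR is deleted
    cp output.dat $HOME/myjob/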

The size of the temporary file space on each Owens node is 1500GB; on Ruby it is 857GB; on Oakley it is 812GB. There are 8 data analytics nodes on Owens with 24x 2TB scratch drives. This area is used for spool space for stdout and stderr from batch jobs as well as for $TMPDIR. If your job requests less than the entire node, you will be sharing this space with other jobs, although each job has a unique directory in $TMPDIR.

Please use $TMPDIR and not /tmp on the compute nodes to ensure proper cleanup.

The login nodes have local scratch space in /tmp. This area is not backed up, and the system removes files that were last accessed more than 24 hours ago.

Scratch Service

OSC provides a parallel file system for use as high-performance, high-capacity, shared temporary space. The current capacity of the parallel file system is about 1,200TB.

The scratch service is visible from all OSC HPC systems and all compute nodes at /fs/scratch. It can be used as either batch-managed scratch space or as user-managed temporary space. There is no quota on this system.

The batch system creates a scratch directory for each job on the scratch service. The absolute path to this directory is in the environment variable $PFSDIR. This directory is shared across nodes. It exists only for the duration of the job and is automatically deleted by the batch system when the job ends.
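
For example, a parallel job that needs a working directory visible to every node might use $PFSDIR as sketched below; the mpiexec launcher invocation, program name, and file names are placeholders:

    # $PFSDIR is the same directory on every node assigned to the job
    cd $PFSDIR
    cp $HOME/myjob/input.dat .

    # All ranks can read and write files in this shared directory
    mpiexec $HOME/myjob/my_parallel_program input.dat

    # Copy results to permanent storage before the job ends and $PFSDIR is deleted
    cp results.dat $HOME/myjob/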

Users may also create their own directories under /fs/scratch. Please name the directory with either your user name or your project ID, for example, /fs/scratch/usr1234 or /fs/scratch/PRJ0123. This is a good place to store large amounts of temporary data that you need to keep for a modest amount of time. Files that have not been accessed for some period of time, currently six months, may be deleted. Check OSC’s data management policy for the official deletion schedule. This service should be used only for data that you can regenerate or that you have another copy of. It is not backed up.
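
For instance, to set up a user-managed area named after the example user ID above (the data set name is a placeholder):

    # Create a personal directory on the shared scratch file system
    mkdir -p /fs/scratch/usr1234

    # Stage a large temporary data set there
    cp $HOME/large_dataset.tar.gz /fs/scratch/usr1234/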

The scratch service is a high performance file system that can handle high loads. It should be used by parallel jobs that perform heavy I/O and require a directory that is shared across all nodes. It is also suitable for jobs that require more scratch space than what is available locally. Note, however, that local disk access is faster than any shared file system, so local storage should be used whenever it is sufficient.

You should not store executables on the parallel file system. Keep program executables in your home or project directory or in $TMPDIR.

File Deletion Policy

The scratch service is temporary storage, and it is not backed up. Data stored on this service is not recoverable if it is lost for any reason, including user error or hardware failure. Data that have not been accessed for more than 180 days will be removed from the system every Wednesday.  

If you need an exemption to the deletion policy, please contact OSC Help in a timely manner and include the following information:

  1. Your OSC HPC username
  2. Path of directories/files that need exemption to file deletion
  3. Duration: from MM/DD/YY to MM/DD/YY (The max exemption duration is 180 days)
  4. Detailed justification

2016 Storage Service Upgrades

On July 12, 2016, OSC migrated its old GPFS and Lustre file systems to the new Project and Scratch services, respectively. We moved 1.22 PB of data, and the new capacities are 3.4 PB for Project and 1.1 PB for Scratch. If you store data on these services, there are a few important details to note.

Paths have changed

The Project service is now available at /fs/project, and the Scratch service is available at /fs/scratch. We have created symlinks on the Oakley and Ruby clusters to ensure that existing job scripts continue to function; however, the symlinks will not be available on future systems, such as Owens. No action is required on your part to continue using your existing job scripts on current clusters.

However, you may wish to start updating your paths accordingly, in preparation for Owens being available later this year.
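
For example, if a job script still refers to the old Lustre scratch path (/fs/lustre, listed in the table below), a one-line substitution can update it; the script name is a placeholder, and you should review the change before reusing the script:

    # Replace the old Lustre scratch path with the new Scratch path, keeping a .bak copy
    sed -i.bak 's|/fs/lustre|/fs/scratch|g' myjob.sh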

Data migration details

Project space allocations and Scratch space data were migrated automatically to the new services. For data on the Project service, ACLs, Xattrs, and Atimes were all preserved. However, Xattrs were not preserved for data on the Scratch service.

Additionally, Biomedical Informatics at The Ohio State University had some data moved from a temporary location to its permanent location on the Project service. We had prepared for this and had already provided symlinks so that the data appeared to be in its final location prior to the July 12 downtime; the move should therefore be mostly transparent to users. However, ACLs, Xattrs, and Atimes were not preserved for this data.

File system     Transfer method     ACLs preserved     Xattrs preserved     Atime preserved
/fs/project     AFM                 Yes                Yes                  Yes
/fs/lustre      rsync               Yes                No                   Yes
/users/bmi      rsync               No                 No                   No

Full documentation

Full details and documentation of the new service capacities and capabilities are available at https://www.osc.edu/supercomputing/storage-environment-at-osc/
