Each user ID has a home directory on one of the NFS shared file systems. You have the same home directory regardless of what system you’re on, including all login nodes and all compute nodes, so your files are accessible everywhere. Most of your work in the login environment will be done in your home directory.
OSC currently has 18 home directory file servers. The absolute path to the home directory for user ID usr1234 will have the form /nfs/nn/usr1234, where nn is a 2-digit number. The environment variable $HOME is the absolute path to your home directory.
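For example, you can confirm where your home directory lives from any login or compute node by printing $HOME (the 2-digit server number nn shown in the comment is only a placeholder):

    # Print the absolute path of your home directory
    echo "$HOME"        # e.g. /nfs/nn/usr1234
    # Change to your home directory from anywhere
    cd "$HOME"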
The default permissions on home directories for academic projects allow anyone with an OSC HPC account to read your files, although only you have write permission. You can change the permissions if you want to restrict access. Home directories for accounts on commercial projects are slightly more restrictive, and only allow the owning account and the project group to see the files by default.
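As a sketch, the standard Linux chmod command can be used to tighten these permissions; the exact mode you choose depends on how much access you want to allow:

    # Remove read and execute permission for users outside your group
    chmod o-rx "$HOME"
    # Or restrict access to yourself only
    chmod 700 "$HOME"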
Each user has a quota of 500 gigabytes (GB) of storage and 1,000,000 files. This quota cannot be increased. If you have many small files, you may reach the file limit before you reach the storage limit. In this case we encourage you to “tar” or “zip” your files or directories, creating an archive. If you approach your storage limit, you should delete any unneeded files and consider compressing your files using gzip. You can archive/unarchive/compress/uncompress your files inside a batch script, using scratch storage that is not subject to quotas, so your files are still conveniently usable. As always, contact OSC Help if you need assistance.
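The sketch below shows one way to do this inside a batch script. The directory name many_small_files is a placeholder, and $TMPDIR (the per-job scratch directory described later in this document) is used as the work area so the archiving step itself does not count against your home directory quota:

    # Replace a directory of many small files in your home directory
    # with a single compressed archive ("many_small_files" is a placeholder)
    cd "$HOME"
    tar -czf "$TMPDIR/many_small_files.tar.gz" many_small_files
    # Copy the archive back to permanent storage, then remove the originals
    cp "$TMPDIR/many_small_files.tar.gz" "$HOME/"
    rm -rf "$HOME/many_small_files"
    # Later, to unpack the archive again (for example inside another job):
    tar -xzf "$HOME/many_small_files.tar.gz" -C "$TMPDIR"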
Home directories are considered permanent storage. Accounts that have been inactive for 18 months may be archived, but otherwise there is no automatic deletion of files.
All files in the home directories are backed up daily. Two copies of files in the home directories are written to tape in the tape library.
Access to home directories is relatively slow compared to local or parallel file systems. Batch jobs should not perform heavy I/O in the home directory tree because 1) it will slow down your job and 2) the home directory file servers don’t handle heavy loads gracefully. Instead you should copy your files to fast local storage and run your program there.
For projects that require more than 500GB storage and/or more than 1,000,000 files, additional storage space is available. Principal Investigators should contact OSC Help to request additional storage in the "project" space outside the home directory. Allocations of one to five terabytes are typical. Small allocations can be granted by OSC staff; for large allocations you will have to submit a proposal to the Statewide Users’ Group (SUG).
Project directories are created on the GPFS filesystem. The absolute path to the project directory for project PRJ0123 will have the following form:
Default permissions on a project directory allow read and write access by all members of the group, with deletion restricted to the file owner. (OSC projects correspond to Linux groups.)
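One common way to get this behavior on Linux is the directory “sticky bit”; the sketch below illustrates the mechanism rather than OSC’s exact configuration, and the path shown is a placeholder for your project directory:

    # Inspect the permissions on your project directory (use the path
    # assigned to your project). A sticky bit ("t" or "T" in the last
    # position) restricts deletion of files to each file's owner.
    ls -ld /path/to/project/PRJ0123
    # Illustrative output:
    # drwxrwx--T  5 usr1234 PRJ0123 4096 Jan  1 12:00 PRJ0123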
The quota on the project space is shared by all members of the project and corresponds to the allocation that was granted. It is typically 1-5TB with a limit of 1,000,000 files.
Project space is allocated for a specific period of time, usually one to three years. At the end of that time you may apply for an extension.
All files in the project directories are backed up daily, with a single copy written to tape.
The recommendations for archiving and compressing files are the same for project directories as for home directories.
Comments about access speed and file server load for home directories apply also to project directories. Batch jobs should not perform heavy I/O in a project directory.
Each compute node has a local disk used for scratch storage. This space is not shared with any other system or node.
The batch system creates a temporary directory for each job on each node assigned to the job. The absolute path to this directory is in the environment variable $TMPDIR. The directory exists only for the duration of the job; it is automatically deleted by the batch system when the job ends. Temporary directories are not backed up.
$TMPDIR is a large area where users may execute codes that produce large intermediate files. Local storage has the highest performance of any of the file systems because data does not have to be sent across the network and handled by a file server. Typical usage is to copy input files, and possibly executable files, to $TMPDIR at the beginning of the job and copy output files to permanent storage at the end of the job. See the batch processing documentation for more information.
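A minimal sketch of this pattern follows; the input file, program name, and directory layout are placeholders, and the batch directives that request resources are omitted:

    # Stage data into fast local storage, run there, then copy results back
    cp "$HOME/myjob/input.dat" "$TMPDIR/"     # input.dat is a placeholder
    cp "$HOME/myjob/my_program" "$TMPDIR/"    # optional: copy the executable too
    cd "$TMPDIR"
    ./my_program input.dat > output.dat
    # Copy results to permanent storage before the job ends, because
    # $TMPDIR is deleted automatically when the job finishes
    cp output.dat "$HOME/myjob/"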
The size of the temporary file space on each Oakley node is 812GB; on Glenn it is 392GB. This area is used for spool space for stdout and stderr from batch jobs as well as for $TMPDIR. If your job requests less than the entire node, you will be sharing this space with other jobs, although each job has its own unique directory. Use $TMPDIR, not /tmp, on the compute nodes to ensure proper cleanup.
The login nodes have local scratch space in /tmp. This area is not backed up, and the system removes files last accessed more than 24 hours previously.
OSC provides a Lustre parallel file system for use as high-performance, high-capacity, shared temporary space. The current capacity of the parallel file system is about 600TB.
The parallel file system is visible from all OSC HPC systems and all compute nodes at /fs/lustre. It can be used as either batch-managed scratch space or as user-managed temporary space. There is no quota on this system.
The Lustre system replaces the PVFS2 system that was previously available at OSC. There is no need for a special flag such as the :pvfs feature that was used in the past.
The batch system creates a scratch directory for each job on the parallel file system. The absolute path to this directory is in the environment variable $PFSDIR. This directory is shared across nodes. It exists only for the duration of the job and is automatically deleted by the batch system when the job ends.
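A brief sketch of using $PFSDIR is shown below; as with the $TMPDIR example above, the file names are placeholders. The key difference is that every node assigned to the job sees the same directory:

    # $PFSDIR is visible from all nodes in the job, so it suits files
    # that several nodes must read or write
    cp "$HOME/myjob/shared_input.dat" "$PFSDIR/"   # placeholder file name
    cd "$PFSDIR"
    # ... run a parallel program that reads and writes in this directory ...
    # Copy anything you need to keep back to permanent storage before the job ends
    cp results.dat "$HOME/myjob/"                  # results.dat is a placeholder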
Users may also create their own directories under /fs/lustre. Please name the directory with either your user name or your project ID, for example, /fs/lustre/PRJ0123. This is a good place to store large amounts of temporary data that you need to keep for up to a few months. Files that have not been accessed for some period of time, currently six months, may be deleted. Check OSC’s data management policy for the official deletion schedule. While this system has been extremely reliable, it should be used only for data that you can regenerate or that you have another copy of. It is not backed up.
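For example, using the placeholder IDs that appear elsewhere in this document:

    # Create a user-managed scratch directory named after your project or user ID
    mkdir /fs/lustre/PRJ0123
    # or
    mkdir /fs/lustre/usr1234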
The parallel file system is a high-performance file system that can handle high loads. It should be used by parallel jobs that perform heavy I/O and require a directory that is shared across all nodes. It is also suitable for jobs that require more scratch space than is available locally. Note, however, that local disk access is faster than any shared file system, so local storage should be used whenever possible.
The Lustre file system is optimized for reads and writes done in large blocks, preferably at least 4MB. If you perform many small operations, or work with many very small files, performance will be poor.
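As a rough illustration of the difference, the standard dd utility can write the same amount of data with different block sizes; the file names below are placeholders under $PFSDIR, and the commands are only a sketch of large-block versus small-block I/O:

    # Write 1 GB in 4 MB blocks (large blocks suit Lustre well)
    dd if=/dev/zero of="$PFSDIR/big_blocks.dat" bs=4M count=256
    # Writing the same amount in 4 KB blocks issues roughly 262,000 small
    # operations and will typically be much slower on the parallel file system
    dd if=/dev/zero of="$PFSDIR/small_blocks.dat" bs=4K count=262144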
You should not store executables on the parallel file system. Keep program executables in your home or project directory or in $TMPDIR.
Those interested in striping should consult our Parallel Scratch Space Striping Guide.
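If you do experiment with striping, a minimal sketch using the standard Lustre lfs utility is shown below; the directory name and the stripe count of 4 are illustrative values only, not recommendations (see the striping guide for appropriate settings):

    # Create a directory whose new files will be striped across 4 OSTs
    mkdir "$PFSDIR/striped_output"               # placeholder directory name
    lfs setstripe -c 4 "$PFSDIR/striped_output"
    # Verify the striping settings
    lfs getstripe "$PFSDIR/striped_output"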
The parallel file system is temporary storage, and it is not backed up. Data stored on this system is not recoverable if it is lost for any reason, including user error or hardware failure. Data that have not been accessed for more than 180 days will be removed from the system every Wednesday.
If you need an exemption to the deletion policy, please contact OSC Help in a timely manner and include the following information: