Ohio researchers have access to many file storage options at OSC. OSC's research data storage includes high-performance, large-capacity file systems along with others suited to a wide variety of research data. OSC has over 14 petabytes (PB) of disk storage capacity distributed over several file systems, plus more than 5.5 PB of backup tape storage.
Home directories (900 TB of available capacity) are provided to all OSC users. This is permanent storage that is backed up daily.
Project space (with a maximum possible capacity of 12.0 PB) is available as supplemental storage for groups that require more than the home directories provide. This storage is also backed up daily. Project PIs can request it from OSC at any time.
OSC's scratch service is a high-performance parallel file system that can handle high loads and is optimized for a wide variety of job types. It offers work-area storage designed to perform well for data-intensive computations while scaling to large numbers of simultaneous connections, and it can be used either as batch-managed scratch space or as user-managed temporary space. There is no quota on this system, which serves as high-performance, high-capacity, shared temporary space; its current capacity is about 1.1 PB.
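To make the scratch-versus-permanent distinction concrete, here is a minimal Python sketch of a job writing its intermediate data to scratch rather than to home or project space. The PFSDIR environment variable name is an assumption based on common batch-job conventions; check OSC's batch documentation for the variable your jobs actually receive.

```python
import os
import tempfile

# Resolve the job's scratch directory. The PFSDIR variable name is an
# assumption for illustration; fall back to the system temp directory
# when running outside a batch job.
scratch_dir = os.environ.get("PFSDIR", tempfile.gettempdir())

# Write a large intermediate file to scratch rather than to home or
# project space, which are not designed for heavy job I/O.
out_path = os.path.join(scratch_dir, "intermediate_results.dat")
with open(out_path, "wb") as f:
    f.write(b"\x00" * 1024 * 1024)  # placeholder for real computation output

print(f"Wrote intermediate data to {out_path}")
```

Because scratch is temporary space, a job following this pattern would copy any results it needs to keep back to home or project space before it exits.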
Using our web platform, OnDemand, users can transfer smaller files (<10 GB) with a simple drag and drop. Other file transfer options include using sftp from a command line or a third-party interface (like FileZilla).
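For scripted transfers, a minimal sketch using the paramiko Python library might look like the following. The hostname and username here are placeholders for illustration; use the host and login name given in OSC's client documentation.

```python
import paramiko

HOST = "sftp.osc.edu"            # assumed hostname for illustration
USERNAME = "your_osc_username"   # placeholder; use your own login

# SSHClient manages the underlying SSH session that SFTP runs over.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USERNAME)  # authenticates via your SSH keys/agent

# Upload a local file into the remote home directory, then clean up.
sftp = client.open_sftp()
sftp.put("results.csv", "results.csv")
sftp.close()
client.close()
```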
Globus is a simple but powerful file transfer service that allows our users to share data with collaborators anywhere. Any remote research site that runs Globus can seamlessly connect to OSC's many research storage systems, and it also connects research systems to personal systems.
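Globus transfers can also be driven from a script. Below is a hedged sketch using the globus-sdk Python package with a native-app login flow; the client ID and endpoint UUIDs are placeholders you would replace with values from the Globus web app and your own registered application, and the paths are examples only. This illustrates the SDK's general transfer workflow, not OSC-specific code.

```python
import globus_sdk

# Placeholders; look up real values in the Globus web app.
CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"
SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"       # e.g., a lab server running Globus
DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"  # e.g., an OSC storage collection

# Authenticate with a native-app OAuth2 flow (prints a login URL).
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print(f"Log in at: {auth_client.oauth2_get_authorize_url()}")
auth_code = input("Paste the authorization code here: ").strip()
tokens = auth_client.oauth2_exchange_code_for_tokens(auth_code)
transfer_tokens = tokens.by_resource_server["transfer.api.globus.org"]

# Build a transfer client and submit a one-file transfer task.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_tokens["access_token"])
)
task = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="example")
task.add_item("/data/results.csv", "/home/username/results.csv")
result = tc.submit_transfer(task)
print(f"Submitted transfer task: {result['task_id']}")
```

Once submitted, the transfer runs server-side between the two endpoints, so the machine that ran the script can disconnect while Globus completes and verifies the task.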
Please send an inquiry to email@example.com with a brief description of the services you are interested in and your contact information.
For technical details and documentation on our storage services, please see this page.