Batch Limit Rules

Memory Limit

When requesting OSC resources for your jobs, we strongly suggest weighing your memory use against the available per-core memory. On Owens, this equates to 4GB/core, or 124GB/node.

If your job requests less than a full node (ppn < 28), it may be scheduled on a node with other running jobs. In this case, your job is entitled to a memory allocation proportional to the number of cores requested (4GB/core). For example, without any memory request (mem=XX), a job that requests nodes=1:ppn=1 will be assigned one core and should use no more than 4GB of RAM, a job that requests nodes=1:ppn=3 will be assigned 3 cores and should use no more than 12GB of RAM, and a job that requests nodes=1:ppn=28 will be assigned the whole node (28 cores) with 124GB of RAM.
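
For instance, a minimal Torque/PBS job script for the 3-core case above might look like the following sketch (the job name and executable are placeholders):

    #PBS -N example_job          # hypothetical job name
    #PBS -l nodes=1:ppn=3        # 3 cores, entitling the job to roughly 12GB of RAM (4GB/core)
    #PBS -l walltime=1:00:00

    cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
    ./my_program                 # hypothetical executable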

Please be careful if you include a memory request (mem=XX) in your job. A job that requests nodes=1:ppn=1,mem=12GB will be assigned one core and have access to 12GB of RAM, but will be charged for 3 cores worth of Resource Units (RU). However, a job that requests nodes=1:ppn=5,mem=12GB will be assigned 5 cores but have access to only 12GB of RAM, and will be charged for 5 cores worth of RU. See Charging for memory use for more details.
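
As a sketch, each line below shows one of the two requests above as it would appear in a job script (use one or the other, not both). Consistent with these examples, the charge tracks the larger of the core count and the memory request divided by 4GB/core:

    #PBS -l nodes=1:ppn=1,mem=12GB   # 1 core but 12GB of RAM: charged for 3 cores worth of RU
    #PBS -l nodes=1:ppn=5,mem=12GB   # 5 cores, only 12GB of RAM: charged for 5 cores worth of RU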

A multi-node job (nodes > 1) will be assigned entire nodes (124GB/node) and charged for those entire nodes regardless of the ppn request. For example, a job that requests nodes=10:ppn=1 will be charged for 10 whole nodes (28 cores/node × 10 nodes, or 280 cores worth of RU).
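
For example, the following request occupies, and is charged for, ten whole nodes even though it names only one core per node:

    #PBS -l nodes=10:ppn=1   # multi-node job: charged for 10 whole nodes, i.e. 280 cores worth of RU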

A job that requests a huge-memory node (nodes=1:ppn=48) will be allocated the entire huge-memory node, with 1.5TB of RAM, and charged for the whole node (48 cores worth of RU).
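
As a sketch, a huge-memory request looks like:

    #PBS -l nodes=1:ppn=48   # whole huge-memory node: 1.5TB of RAM, charged for 48 cores worth of RU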

To manage and monitor your memory usage, please refer to Out-of-Memory (OOM) or Excessive Memory Usage.

Walltime Limit

Here are the queues available on Owens:

NAME               MAX WALLTIME   MAX JOB SIZE   NOTES
Serial             168 hours      1 node
longserial         336 hours      1 node         Restricted access (contact OSC Help if you need access)
Parallel           96 hours       8 nodes        Jobs are scheduled to run within a single IB leaf switch
Largeparallel      96 hours       81 nodes       Jobs are scheduled across multiple switches
Hugemem            168 hours      1 node         16 nodes in this class
Parallel hugemem   96 hours       16 nodes       Restricted access (contact OSC Help if you need access); use "-q parhugemem" to access it
Debug              1 hour         2 nodes        6 nodes in this class; use "-q debug" to request it
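
For example, to target the Debug queue from the table above, a job script might include directives like these (the walltime shown is just a placeholder under the 1-hour limit):

    #PBS -q debug               # request the debug queue
    #PBS -l nodes=2:ppn=28      # up to 2 nodes are allowed in this queue
    #PBS -l walltime=0:30:00    # must stay within the 1-hour walltime limit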


Job/Core Limits

                  Max Running Job Limit   Soft Max Core/Processor Limit   Hard Max Core/Processor Limit
Individual User   256                     3080                            3080
Project/Group     384                     3080                            4620

Which of the soft and hard max limits above applies depends on system resource availability: if resources are scarce, the soft max limit is used to increase the fairness of allocating resources; if there are idle resources, the hard max limit is used to increase system utilization.

An individual user can have up to the maximum number of concurrently running jobs and/or up to the maximum number of processors/cores in use shown in the Individual User row above.

Likewise, all the users in a particular group/project combined can have up to the maximum number of concurrently running jobs and/or up to the maximum number of processors/cores in use shown in the Project/Group row.

A user may have no more than 1000 jobs submitted to each of the parallel and serial job queues.
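
To check your standing against these limits, you can list your jobs with qstat; the running-job count below is a rough sketch, since it depends on the exact output format:

    qstat -u $USER                    # list all of your queued and running jobs
    qstat -u $USER | grep -c " R "    # approximate count of your currently running jobs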