Batch Limit Rules

Memory Limit

We strongly recommend that users consider the available per-core memory when requesting OSC resources for their jobs.

Regular Dense Compute Node

On Owens, a regular dense compute node provides 4,315 MB of memory per core, or 120,820 MB per node (117.98 GB/node).

If your job requests less than a full node (--ntasks < 28), it may be scheduled on a node with other running jobs. In this case, your job is entitled to a memory allocation proportional to the number of cores requested (4,315 MB/core). For example, without an explicit memory request (--mem=XXMB), a job that requests --nodes=1 --ntasks=1 will be assigned one core and should use no more than 4,315 MB of RAM; a job that requests --nodes=1 --ntasks=3 will be assigned 3 cores and should use no more than 3*4,315 MB of RAM; and a job that requests --nodes=1 --ntasks=28 will be assigned the whole node (28 cores) with 118 GB of RAM.
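As a minimal sketch of a partial-node request (the walltime and program name are placeholder assumptions, not taken from this page), a batch script asking for 3 cores on one node might look like:

  #!/bin/bash
  # Requests 3 cores on one node; entitled to about 3*4315 MB of RAM
  #SBATCH --nodes=1
  #SBATCH --ntasks=3
  #SBATCH --time=1:00:00

  srun ./my_program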

Here is how an explicit memory request (--mem=XX) interacts with core assignment. A job that requests --nodes=1 --ntasks=1 --mem=12GB will be assigned three cores (12 GB divided by 4,315 MB/core, rounded up) and have access to 12 GB of RAM, and will be charged for 3 cores' worth of usage (in other words, the --ntasks request is ignored). A job that requests --nodes=1 --ntasks=5 --mem=12GB will be assigned 5 cores but have access to only 12 GB of RAM, and will be charged for 5 cores' worth of usage.
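A minimal sketch of the first case (walltime and program name are placeholders):

  #!/bin/bash
  # 12 GB / 4315 MB per core rounds up to 3 cores; charged for 3 cores' usage
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --mem=12GB
  #SBATCH --time=1:00:00

  srun ./my_program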

A multi-node job (--nodes > 1) will be assigned entire nodes (118 GB of memory per node) and charged for entire nodes regardless of the --ntasks-per-node request. For example, a job that requests --nodes=10 --ntasks-per-node=1 will be charged for 10 whole nodes (28 cores/node * 10 nodes, i.e., 280 cores' worth of usage).
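Expressed as a batch script (walltime and program name are placeholder assumptions), the example above would be:

  #!/bin/bash
  # One task per node on 10 nodes; still charged for 10 whole nodes (280 cores)
  #SBATCH --nodes=10
  #SBATCH --ntasks-per-node=1
  #SBATCH --time=1:00:00

  srun ./my_parallel_program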

Huge Memory Node

Beginning on Tuesday, March 10th, 2020, users are able to run jobs using less than a full huge memory node. Please read the following instructions carefully before requesting a huge memory node on Owens. 

On Owens, a huge memory node provides 31,850 MB of memory per core, or 1,528,800 MB per node (1,492.96 GB/node).

To request up to a full huge memory node, you have two options (a sketch of each follows this list):

  • The first is to specify a memory request between 120,820 MB (117.98 GB) and 1,528,800 MB (1,492.96 GB), i.e., 120820MB < mem <= 1528800MB (118 GB < mem <= 1493 GB). Note: only integer values are accepted.
  • The other option is to use the combination of --ntasks-per-node and --partition, e.g., --ntasks-per-node=4 --partition=hugemem. When no memory is specified for the huge memory node, your job is entitled to a memory allocation proportional to the number of cores requested (31,850 MB/core). Note: --ntasks-per-node should be no less than 4 and no more than 48.
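A minimal sketch of the two options (the specific --mem and --ntasks-per-node values below are illustrative assumptions):

  # Option 1: request by total memory (integer values only);
  # a request between 118 GB and 1493 GB lands on a huge memory node
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --mem=1200GB

  # Option 2: request by core count on the hugemem partition;
  # entitled to 31850 MB per core requested (4 to 48 cores)
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=24
  #SBATCH --partition=hugemem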

To manage and monitor your memory usage, please refer to Out-of-Memory (OOM) or Excessive Memory Usage.

GPU Jobs

There is only one GPU per GPU node on Owens.

For serial jobs, we allow node sharing on GPU nodes, so a job may request any number of cores (up to 28):

(--nodes=1 --ntasks=XX --gpus-per-node=1)

For parallel jobs (--nodes > 1), we do not allow node sharing.
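As a sketch of a serial GPU job that shares a node (the core count, walltime, and program name are placeholder assumptions):

  #!/bin/bash
  # Serial GPU job on a shared node: any core count up to 28, one GPU
  #SBATCH --nodes=1
  #SBATCH --ntasks=4
  #SBATCH --gpus-per-node=1
  #SBATCH --time=1:00:00

  srun ./my_gpu_program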

See the GPU computing page for more information.

Walltime Limit

Here are the partitions available on Owens:

Name          Max walltime   Min job size   Max job size   Notes
Serial        168 hours      1 core         1 node
Longserial    336 hours      1 core         1 node         Restricted access (contact OSC Help if you need access); request with --partition=longserial
Parallel      96 hours       2 nodes        81 nodes
GPU Serial    168 hours      1 core         1 node
GPU Parallel  96 hours       2 nodes        8 nodes
Hugemem       96 hours       1 core         1 node
Parhugemem    96 hours       2 nodes        16 nodes       Restricted access (contact OSC Help if you need access); request with --partition=hugemem-parallel
Debug         1 hour         1 core         2 nodes        For small interactive and test jobs; request with --partition=debug
GPU Debug     1 hour         1 core         2 nodes        For small interactive and test GPU jobs; request with --partition=gpudebug
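For example, a minimal sketch of a batch job submitted to the debug partition (the program name is a placeholder assumption):

  #!/bin/bash
  # Small test job on the debug partition (1 hour maximum walltime)
  #SBATCH --partition=debug
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --time=1:00:00

  srun ./my_test_program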

Job/Core Limits

                  Max Running Job Limit                                          Max Core/Processor Limit
                  For all types   GPU jobs   Regular debug jobs   GPU debug jobs   For all types
Individual User   384             132        4                    4                3080
Project/Group     576             132        n/a                  n/a              3080

An individual user can have up to the listed maximum number of concurrently running jobs and/or up to the listed maximum number of processors/cores in use.

The same limits apply collectively to all users in a particular group/project.

A user may have no more than 1,000 jobs submitted to each of the parallel and serial job queues.