Batch Limit Rules

Full Node Charging Policy

On Ruby, we always allocate whole nodes to jobs and charge for the whole node. If a job requests less than a full node (nodes=1:ppn<20), the job execution environment is restricted to what is requested (the job only has access to the number of cores specified by the ppn request) with 64GB of RAM; however, the job will still be allocated a whole node and charged for the whole node. A job that requests nodes>1 will be assigned entire nodes with 64GB of RAM per node and charged for those entire nodes regardless of the ppn request. A job that requests the huge-memory node (nodes=1:ppn=32) will be allocated the entire huge-memory node with 1TB of RAM and charged for the whole node (32 cores' worth of RU).
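As an illustration, a request like the following (a hypothetical sketch; the job name and walltime are placeholders) asks for only 4 cores, but under this policy the job is still charged for the entire 20-core node:

  #PBS -N partial_node_example    # hypothetical job name
  #PBS -l nodes=1:ppn=4           # job sees 4 cores and 64GB of RAM...
  #PBS -l walltime=1:00:00        # placeholder walltime
  # ...but the whole 20-core node is allocated and charged.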

To manage and monitor your memory usage, please refer to Out-of-Memory (OOM) or Excessive Memory Usage.

Queue Default

Please keep in mind that if you submit a job with no node specification, the default is nodes=1:ppn=20, while if you submit a job with no ppn specified, the default is nodes=N:ppn=1.
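For instance, assuming the defaults above, these two hypothetical submissions (job.sh is a placeholder script) are treated as their fully specified forms:

  qsub job.sh                 # no node specification: treated as nodes=1:ppn=20
  qsub -l nodes=2 job.sh      # ppn omitted: treated as nodes=2:ppn=1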

Debug Node

Ruby has 4 debug nodes, which are specifically configured for short (< 1 hour) debugging work; these nodes have a walltime limit of 1 hour. The debug nodes, consisting of 2 non-GPU nodes and 2 GPU nodes (with 2 GPUs per node), are equipped with E5-2670 V1 CPUs with 16 cores per node. Users are allowed to request a partial node on the debug nodes.

  • To schedule a single core on a non-GPU debug node: nodes=1:ppn=1 -q debug
  • To schedule a non-GPU debug node: nodes=1:ppn=16 -q debug
  • To schedule two non-GPU debug nodes: nodes=2:ppn=16 -q debug
  • To schedule a GPU debug node: nodes=1:ppn=16:gpus=2 -q debug
  • To schedule two GPU debug nodes: nodes=2:ppn=16:gpus=2 -q debug
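Putting one of the requests above into a complete batch script might look like the following sketch (the job name, walltime, and executable are placeholders):

  #PBS -N debug_example           # hypothetical job name
  #PBS -l nodes=1:ppn=16          # one full non-GPU debug node
  #PBS -l walltime=0:30:00        # placeholder; must stay under the 1-hour debug limit
  #PBS -q debug

  cd $PBS_O_WORKDIR
  ./my_program                    # placeholder executable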

GPU Node

On Ruby, 20 nodes are equipped with NVIDIA Tesla K40 GPUs (one GPU per node). These nodes can be requested by adding gpus=1 to your node request (nodes=1:ppn=20:gpus=1).
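A hypothetical request for one GPU node (the walltime is a placeholder) might look like:

  #PBS -l nodes=1:ppn=20:gpus=1   # one full node with its K40 GPU
  #PBS -l walltime=12:00:00       # placeholder walltime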

Walltime Limit

Here are the queues available on Ruby:

  Walltime Limit   Node Limit                  Notes
  168 hours        1 node
  96 hours         40 nodes
  48 hours         1 node                      32 cores with 1TB RAM (huge-memory node)
  1 hour           2 nodes (GPU or non-GPU)    16 cores with 128GB RAM (debug nodes)

Job/Core Limits

                    Soft Max Running Job Limit   Hard Max Running Job Limit   Soft Max Core Limit   Hard Max Core Limit
  Individual User   40                           40                           800                   800
  Project/Group     80                           160                          1600                  3200

The soft and hard max limits above apply depending on different system resource availability. If resources are scarce, then the soft max limit is used to increase the fairness of allocating resources. Otherwise, if there are idle resources, then the hard max limit is used to increase system utilization.

An individual user can have up to the individual max concurrently running jobs and/or up to the individual max processors/cores in use.

Likewise, all the users in a particular group/project combined can have up to the group max concurrently running jobs and/or up to the group max processors/cores in use.

The debug queue allows one running job at a time per user. Condo users, please contact OSC Help for more instructions.
A user may have no more than 1000 jobs submitted to the parallel job queue and no more than 1000 jobs submitted to the serial job queue.