Technical Support


OSC Help is the technical support and consulting service for OSC's high performance computing resources, staffed by members of OSC's HPC Client Services group.

Before contacting OSC Help, please check to see if your question is answered in either the FAQ or the Knowledge Base. Many of the questions asked by both new and experienced OSC users are answered in these web pages.

If you still cannot solve your problem, please do not hesitate to contact OSC Help:

Phone: (614) 292-1800
Email: oschelp@osc.edu
Submit your issue online

All calls are transferred to voicemail, and an OSC staff member will contact you as soon as possible.

OSC Help hours of operation:

Basic and advanced support are available Monday through Friday, 9 a.m.–5 p.m. (Eastern time zone), except OSU holidays

OSC users can also directly influence OSC operational decisions by participating in the Statewide Users Group, whose activities include managing the allocation process and advising on software licensing and hardware acquisitions.

We recommend following HPCNotices on X to get up-to-the-minute information on system outages and important operations-related updates.

HPC Changelog

Changes to HPC systems are listed below, optionally filtered by system.

MVAPICH2 version 2.3 modules modified on Owens

Replace MV2_ENABLE_AFFINITY=0 with MV2_CPU_BINDING_POLICY=hybrid.
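
For example, in an existing job script the change amounts to swapping one environment variable for the other (the program name below is illustrative):

# Before: export MV2_ENABLE_AFFINITY=0
# After:
export MV2_CPU_BINDING_POLICY=hybrid
srun ./mpi_prog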

Known issues

Unresolved known issues

A known issue with an Unresolved resolution state is an active problem under investigation; a temporary workaround may be available.

Resolved known issues

A known issue with a Resolved (workaround) resolution state is an ongoing problem for which a permanent workaround is available; the workaround may include using different software or hardware.

A known issue with a Resolved resolution state has been corrected.

Search Documentation

Search our client documentation below, optionally filtered by one or more systems.


Supercomputers

We currently operate three major systems:

  • Owens Cluster, a 23,000+ core Dell Intel Xeon machine
  • Ruby Cluster, a 4,800-core HP Intel Xeon machine
    • 20 nodes have Nvidia Tesla K40 GPUs
    • One node has 1 TB of RAM and 32 cores, for large SMP-style jobs.
  • Pitzer Cluster, a 10,500+ core Dell Intel Xeon machine

Our clusters share a common environment, and we have several guides available.

OSC also provides more than 5 PB of storage, and another 5.5 PB of tape backup.

  • Learn how that space is made available to users, and how to best utilize the resources, in our storage environment guide.

Finally, you can keep up to date with any known issues on our systems (and the available workarounds). An archive of resolved issues can be found here.


Ascend

Access to the Ascend cluster is granted upon request after an evaluation. Please check the request access page if you are interested.
TIP: Remember to check the menu to the right of the page for related pages with more information about Ascend's specifics.

OSC's Ascend cluster was installed in fall 2022. It is a Dell-built cluster with AMD EPYC™ CPUs and NVIDIA A100 80 GB GPUs, devoted entirely to intensive GPU processing.


Hardware


Detailed system specifications:

  • 24 Dell PowerEdge XE8545 nodes, each with:
    • 2 AMD EPYC 7643 (Milan) processors (2.3 GHz, each with 44 usable cores) 
    • 4 NVIDIA A100 GPUs with 80GB memory each, supercharged by NVIDIA NVLink
    • 921GB usable RAM
    • 12.8TB NVMe internal storage​
  • 2,112 total usable cores
    • 88 cores/node & 921GB of memory/node
  • Mellanox/NVIDIA 200 Gbps HDR InfiniBand
  • Theoretical system peak performance
    • 1.95 petaflops
  • 2 login nodes
    •  IP address: 192.148.247.[180-181]

How to Connect

  • SSH Method

To login to Ascend at OSC, ssh to the following hostname:

ascend.osc.edu 

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@ascend.osc.edu

You may see a warning message including an SSH key fingerprint. Verify that the fingerprint in the message matches one of the SSH key fingerprints listed here, then type yes.

From there, you are connected to the Ascend login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests. We use control groups on login nodes to keep the login nodes stable. Please use batch jobs for any compute-intensive or memory-intensive work. See the following sections for details.

  • OnDemand Method

You can also login to Ascend at OSC with our OnDemand tool. The first step is to log into OnDemand. Then once logged in you can access Ascend by clicking on "Clusters", and then selecting ">_Ascend Shell Access".

Instructions on how to connect to OnDemand can be found at the OnDemand documentation page.

File Systems

Ascend accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the old clusters. Full details of the storage environment are available in our storage environment guide.

Software Environment

The module system on Ascend is the same as on the Owens and Pitzer systems. Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.
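
For example, a typical sequence might look like the following (the package names are only examples; check the Software by System page for what is actually installed):

module avail                 # list modules available to load
module spider openmpi        # search all versions of a package, including hidden ones
module load intel            # add a package to your environment
module list                  # show currently loaded modules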

You can keep up to date on the software packages that have been made available on Ascend by viewing the Software by System page and selecting the Ascend system.

Batch Specifics

Refer to this Slurm migration page to understand how to use Slurm on the Ascend cluster.  
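
As a minimal sketch, an Ascend batch job might look like the following; the account, module name, and program are placeholders to be replaced with your own:

#!/bin/bash
#SBATCH --account=PAS1234        # hypothetical project code
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20
#SBATCH --gpus-per-task=1

module load cuda                 # example module; load whatever your program needs
srun ./gpu_prog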

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.

Ascend Programming Environment

Compilers

C, C++ and Fortran are supported on the Ascend cluster. Intel, oneAPI, GNU, nvhpc, and aocc compiler suites are available. The Intel development tool chain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below. See also our compilation guide.

The Milan processors from AMD that make up Ascend support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use. However, bear in mind that clock speeds decrease as the level of the instruction set increases, so if your code does not benefit from vectorization it may be beneficial to use a lower instruction set.

In our experience, the Intel compiler usually does the best job of optimizing numerical codes and we recommend that you give it a try if you’ve been using another compiler.

With the Intel compilers, use -xHost and -O2 or higher. With the GNU compilers, use -march=native and -O3.

This advice assumes that you are building and running your code on Ascend. The executables will not be portable.  Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

LANGUAGE INTEL GNU
C icc -O2 -xHost hello.c gcc -O3 -march=native hello.c
Fortran 77/90 ifort -O2 -xHost hello.F gfortran -O3 -march=native hello.F
C++ icpc -O2 -xHost hello.cpp g++ -O3 -march=native hello.cpp

Parallel Programming

MPI

OSC systems use the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed Infiniband interconnect. MPI is a standard library for performing parallel processing using a distributed-memory model. For more information on building your MPI codes, please visit the MPI Library documentation.

MPI programs are started with the srun command. For example,

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=84

srun [ options ] mpi_prog
Note: the program to be run must either be in your path or have its path specified.

The srun command will normally spawn one MPI process per task requested in a Slurm batch job. Use the -n ntasks and/or --ntasks-per-node=n option to change that behavior. For example,

#!/bin/bash
#SBATCH --nodes=2

# Use the maximum number of CPUs of two nodes
srun ./mpi_prog

# Run 8 processes per node
srun -n 16 --ntasks-per-node=8  ./mpi_prog

The table below shows some commonly used options. Use srun -help for more information.

OPTION COMMENT
-n, --ntasks=ntasks total number of tasks to run
--ntasks-per-node=n number of tasks to invoke on each node
-help Get a list of available options
Note: The information above applies to the MVAPICH2, Intel MPI and OpenMPI installations at OSC. 
Caution: mpiexec or mpirun is still supported with Intel MPI and OpenMPI, but it is not fully compatible with our Slurm environment. We recommend using srun in all circumstances.

OpenMP

The Intel and GNU compilers understand the OpenMP set of directives, which support multithreaded programming. For more information on building OpenMP codes on OSC systems, please visit the OpenMP documentation.

An OpenMP program by default will use a number of threads equal to the number of CPUs requested in a Slurm batch job. To use a different number of threads, set the environment variable OMP_NUM_THREADS. For example,

#!/bin/bash
#SBATCH --ntasks=8

# Run 8 threads
./omp_prog

# Run 4 threads
export OMP_NUM_THREADS=4
./omp_prog

Interactive job only

Please use -c, --cpus-per-task=X instead of -n, --ntasks=X to request an interactive job. Both result in an interactive job with X CPUs available, but only the former automatically assigns the correct number of threads to the OpenMP program. If only the --ntasks option is used, the OpenMP program will either use one thread or have all threads bound to a single CPU core.
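
For illustration, a generic Slurm request for such an interactive session might look like the following (the CPU count and walltime are placeholders; see OSC's interactive batch documentation for the exact supported workflow):

salloc --cpus-per-task=4 --time=1:00:00
# once the allocation starts, the OpenMP program picks up the correct thread count
./omp_prog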

Hybrid (MPI + OpenMP)

An example of running a job for hybrid code:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=80

# Run 4 MPI processes on each node and 40 OpenMP threads spawned from a MPI process
export OMP_NUM_THREADS=40
srun -n 8 -c 40 --ntasks-per-node=4 ./hybrid_prog

Tuning Parallel Program Performance: Process/Thread Placement

To get the maximum performance, it is important to make sure that processes/threads are located as close as possible to their data, and as close as possible to each other if they need to work on the same piece of data, given the arrangement of nodes, sockets, and cores, each with different access to RAM and caches.

When cache and memory contention between threads/processes is an issue, it is usually best to use a scatter distribution for the code.

Processes and threads are placed differently depending on the computing resources you request and the compiler and MPI implementation used to compile your code. For the former, see the above examples to learn how to run a job on exclusive nodes. For the latter, this section summarizes the default behavior and how to modify placement.

OpenMP only

For both compilers (Intel and GNU), purely threaded codes do not bind to particular CPU cores by default. In other words, it is possible that multiple threads are bound to the same CPU core.

The following describes how to modify the default placements for purely threaded code:

Compact distribution (place threads as close to each other as possible, in successive order):
  • Intel: KMP_AFFINITY=compact
  • GNU: OMP_PROC_BIND=true and OMP_PLACES=cores

Scatter/cyclic distribution (distribute threads as evenly as possible across sockets):
  • Intel: KMP_AFFINITY=scatter
  • GNU: OMP_PROC_BIND=true and OMP_PLACES="{0},{48},{1},{49},..."[1]

  1. The core IDs on the first and second sockets start with 0 and 48, respectively.
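
For instance, to request a scatter placement for a 4-thread run inside a batch job, the settings above could be exported before launching the program (the thread count and program name are placeholders):

# Intel-compiled code: spread threads across sockets
export OMP_NUM_THREADS=4
export KMP_AFFINITY=scatter
./omp_prog

# GNU-compiled code: equivalent scatter placement, one place per thread
export OMP_NUM_THREADS=4
export OMP_PROC_BIND=true
export OMP_PLACES="{0},{48},{1},{49}"
./omp_prog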

MPI Only

For MPI-only codes, MVAPICH2 first binds as many processes as possible on one socket, then allocates the remaining processes on the second socket so that consecutive tasks are near each other. Intel MPI and OpenMPI bind processes to socket 1, socket 2, socket 1, socket 2, and so on, i.e., a cyclic distribution.

For process distribution across nodes, all MPI implementations first bind as many processes as possible on one node, then allocate the remaining processes on the second node.

The following describes how to modify the default placements on a single node for MPI-only code launched with srun:

Compact distribution (place processes as close to each other as possible, in successive order):
  • MVAPICH2[1]: default
  • Intel MPI: srun --ntasks=84 --cpu-bind="map_cpu:$(seq -s, 0 43),$(seq -s, 48 95)"
  • OpenMPI: srun --ntasks=84 --cpu-bind="map_cpu:$(seq -s, 0 43),$(seq -s, 48 95)"

Scatter/cyclic distribution (distribute processes as evenly as possible across sockets):
  • MVAPICH2[1]: MV2_CPU_BINDING_POLICY=scatter
  • Intel MPI: default
  • OpenMPI: default

  1. MV2_CPU_BINDING_POLICY will not work if MV2_ENABLE_AFFINITY=0 is set.

To distribute processes evenly across nodes, please set SLURM_DISTRIBUTION=cyclic.
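
For example (a minimal sketch; the task count and program name are arbitrary), the following distributes tasks round-robin across the two allocated nodes:

#!/bin/bash
#SBATCH --nodes=2

export SLURM_DISTRIBUTION=cyclic
srun -n 8 ./mpi_prog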

Hybrid (MPI + OpenMP)

For hybrid codes, each MPI process is allocated OMP_NUM_THREADS cores, and the threads of each process are bound to those cores. All MPI processes (as well as the threads bound to each process) behave as described in the previous sections, which means the threads spawned from an MPI process might be bound to the same core. To change the default process/thread placements, please refer to the settings above.

Summary

The above tables list the most commonly used settings for process/thread placement. Some compilers and Intel libraries may have additional options for process and thread placement beyond those mentioned on this page. For more information on a specific compiler/library, check the more detailed documentation for that library.

GPU Programming

96 NVIDIA A100 GPUs are available on Ascend.  Please visit our GPU documentation.


Batch Limit Rules

We use Slurm syntax for all the discussions on this page. Please check how to prepare a Slurm job script if your script is written in PBS syntax.

Memory limit

It is strongly suggested that users consider their memory use relative to the available per-core memory when requesting OSC resources for their jobs.

Summary

Node type                 Default memory per core   Max usable memory per node
gpu (4 GPUs) - 88 cores   10.4726 GB                921.5937 GB

It is recommended to let the default memory apply unless more control over memory is needed.
Note that if an entire node is requested, then the job is automatically granted the entire node's main memory. On the other hand, if a partial node is requested, then memory is granted based on the default memory per core.

See a more detailed explanation below.

Default memory limits

A job can request resources and allow the default memory to apply. If a job requires 300 GB for example:

#SBATCH --ntasks=1
#SBATCH --cpus-per-task=30

This requests 30 cores, and each core will automatically be allocated about 10.4 GB of memory (30 cores * 10.4 GB/core ≈ 314 GB, which covers the 300 GB requirement).

Explicit memory requests

If needed, an explicit memory request can be added:

#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=300G
Job charging is determined either by number of cores or amount of memory.
See Job and storage charging for details.

CPU only jobs

Dense GPU nodes on Ascend have 88 cores each. However, jobs in the cpuonly partition may only request 84 cores per node.

An example request would look like:

#!/bin/bash

#SBATCH --partition=cpuonly
#SBATCH --time=5:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=42

# requests 2 tasks * 42 cores each = 84 cores
<snip>

GPU Jobs

Jobs may request only part of a GPU node. These jobs may request up to the total number of cores on the node (88 cores).

Requests two gpus for one task:

#SBATCH --time=5:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20
#SBATCH --gpus-per-task=2

Requests two gpus, one for each task:

#SBATCH --time=5:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=10
#SBATCH --gpus-per-task=1

Jobs can also request all the GPUs of a dense GPU node; such jobs have access to all of the node's cores as well.

Request an entire dense gpu node:

#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=88
#SBATCH --gpus-per-node=4

Partition time and job size limits

Here are the walltime and node limits per job for the different queues/partitions available on Ascend:

NAME      MAX TIME LIMIT (dd-hh:mm:ss)   MIN JOB SIZE    MAX JOB SIZE   NOTES
cpuonly   4-00:00:00                     1 core          4 nodes        This partition may not request gpus (as the name implies); 84 cores per node only
gpu       7-00:00:00                     1 core, 1 gpu   4 nodes
debug     1:00:00                        1 core          2 nodes

Usually, you do not need to specify the partition for a job and the scheduler will assign the right partition based on the requested resources. To specify a partition for a job, either add the flag --partition=<partition-name> to the sbatch command at submission time or add this line to the job script:
#SBATCH --partition=<partition-name>

    Job/Core Limits

    Max running job limit:
    • Individual user: 256 (all job types); at most 4 GPU debug jobs
    • Project/group: 512 (all job types)

    Max core/processor limit (all job types):
    • Individual user: 704
    • Project/group: 704

    Max GPU limit:
    • Individual user: 32
    • Project/group: 32

    An individual user can have up to the maximum number of concurrently running jobs and/or up to the maximum number of processors/cores in use. Likewise, all the users in a particular group/project combined can have up to the group/project maximums.

    A user may have no more than 1000 jobs submitted to the parallel queue.

    Migrating jobs from other clusters

    This page includes a summary of differences to keep in mind when migrating jobs from other clusters to Ascend. 

    Guidance for Pitzer Users

    Hardware Specifications

      Ascend (per node) vs. Pitzer (per node)

    Regular compute node
    • Ascend: n/a
    • Pitzer: 40 cores and 192 GB of RAM, or 48 cores and 192 GB of RAM

    Huge memory node
    • Ascend: n/a
    • Pitzer: 48 cores and 768 GB of RAM (12 nodes in this class), or 80 cores and 3.0 TB of RAM (4 nodes in this class)

    GPU node
    • Ascend: 88 cores and 921 GB of RAM, 4 GPUs per node (24 nodes in this class)
    • Pitzer: 40 cores and 192 GB of RAM, 2 GPUs per node (32 nodes in this class), or 48 cores and 192 GB of RAM, 2 GPUs per node (42 nodes in this class)

    Guidance for Owens Users

    Hardware Specifications

      Ascend (per node) vs. Owens (per node)

    Regular compute node
    • Ascend: n/a
    • Owens: 28 cores and 125 GB of RAM

    Huge memory node
    • Ascend: n/a
    • Owens: 48 cores and 1.5 TB of RAM (16 nodes in this class)

    GPU node
    • Ascend: 88 cores and 921 GB of RAM, 4 GPUs per node (24 nodes in this class)
    • Owens: 28 cores and 125 GB of RAM, 1 GPU per node (160 nodes in this class)

    File Systems

    Ascend accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory, project space, and scratch space as on the other clusters.

    Software Environment

    Ascend uses the same module system as other OSC Clusters.

    Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.

    You can keep up to date on the software packages that have been made available on Ascend by viewing the Software by System page and selecting the Ascend system.

    Programming Environment

    C, C++ and Fortran are supported on the Ascend cluster. Intel, oneAPI, GNU, nvhpc, and aocc compiler suites are available. The Intel development tool chain is loaded by default. To switch to a different compiler, use module swap. Ascend also uses the MVAPICH2 implementation of the Message Passing Interface (MPI).
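
    For example, switching from the default Intel tool chain to the GNU suite might look like the following (the module names are illustrative; use module avail to see the exact names on Ascend):

    module swap intel gnu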

    See the Ascend Programming Environment page for details. 


    Ascend SSH key fingerprints

    These are the public key fingerprints for Ascend:
    ascend: ssh_host_rsa_key.pub = 2f:ad:ee:99:5a:f4:7f:0d:58:8f:d1:70:9d:e4:f4:16
    ascend: ssh_host_ed25519_key.pub = 6b:0e:f1:fb:10:da:8c:0b:36:12:04:57:2b:2c:2b:4d
    ascend: ssh_host_ecdsa_key.pub = f4:6f:b5:d2:fa:96:02:73:9a:40:5e:cf:ad:6d:19:e5

    These are the SHA256 hashes:​
    ascend: ssh_host_rsa_key.pub = SHA256:4l25PJOI9sDUaz9NjUJ9z/GIiw0QV/h86DOoudzk4oQ
    ascend: ssh_host_ed25519_key.pub = SHA256:pvz/XrtS+PPv4nsn6G10Nfc7yM7CtWoTnkgQwz+WmNY
    ascend: ssh_host_ecdsa_key.pub = SHA256:giMUelxDSD8BTWwyECO10SCohi3ahLPBtkL2qJ3l080


    Citation

    For more information about citations of OSC, visit https://www.osc.edu/citation.

    To cite Ascend, please use the following Archival Resource Key:

    ark:/19495/hpc3ww9d

    Please adjust this citation to fit the citation style guidelines required.

    Ohio Supercomputer Center. 2022. Ascend Supercomputer. Columbus, OH: Ohio Supercomputer Center. http://osc.edu/ark:/19495/hpc3ww9d

    Here is the citation in BibTeX format:

    @misc{Ascend2022,
    ark = {ark:/19495/hpc3ww9d},
    url = {http://osc.edu/ark:/19495/hpc3ww9d},
    year  = {2022},
    author = {Ohio Supercomputer Center},
    title = {Ascend Supercomputer}
    }
    

    And in EndNote format:

    %0 Generic
    %T Ascend Supercomputer
    %A Ohio Supercomputer Center
    %R ark:/19495/hpc3ww9d
    %U http://osc.edu/ark:/19495/hpc3ww9d
    %D 2022
    

    Request access

    Users who would like to use the Ascend cluster will need to request access.  This is because of the particulars of the Ascend environment, which includes its size, GPUs, and scheduling policies.

    Motivation

    Access to Ascend is granted on a case-by-case basis because:

    • All nodes on Ascend have 4 GPUs, so the cluster favors GPU work over CPU-only work
    • It is a smaller machine than Pitzer and Owens, and thus has limited space for users

    Good Ascend Workload Characteristics

    Those interested in using Ascend should check that their work is well suited for it by using the following list.  Ideal workloads will exhibit one or more of the following characteristics:

    • Needs access to Ascend specific hardware (GPUs, or AMD)
    • Software:
      • Supports GPUs
      • Takes advantage of:
        • Long vector length
        • Higher core count
        • Improved memory bandwidth

    Applying for Access

    PIs of groups that would like to be considered for Ascend access should send the following in an email to OSC Help:

    • Name
    • Username
    • Project code (group)
    • Software/packages used on Ascend
    • Evidence of workload being well suited for Ascend

    Technical Specifications

    The following are technical specifications for Ascend.  

    Number of Nodes

    24 nodes

    Number of CPU Sockets

    48 (2 sockets/node)

    Number of CPU Cores

    2,304 (96 cores/node)

    Cores Per Node

    96 cores/node (88 usable cores/node)

    Internal Storage

    12.8 TB NVMe internal storage

    Compute CPU Specifications
    AMD EPYC 7643 (Milan) processors for compute
    • 2.3 GHz
    • 48 cores per processor
    Computer Server Specifications

    24 Dell XE8545 servers

    Accelerator Specifications

    4 NVIDIA A100 GPUs with 80GB memory each, supercharged by NVIDIA NVLink

    Number of Accelerator Nodes

    24 total

    Total Memory
    ~ 24 TB
    Physical Memory Per Node

    1 TB

    Physical Memory Per Core

    10.6 GB

    Interconnect

    Mellanox/NVIDIA 200 Gbps HDR InfiniBand


    Cardinal

    OSC's Cardinal cluster is slated to launch in 2024. 

     

    Detailed system specifications:

    • 378 Dell Nodes, 39,312 total cores, 128 GPUs 

    • Dense Compute: 326 Dell PowerEdge C6620 two-socket servers, each with: 

      • 2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors 

      • 128 GB HBM2e and 512 GB DDR5 memory 

      • 1.6 TB NVMe local storage 

      • NDR200 Infiniband 

    • GPU Compute: 32 Dell PowerEdge XE9640 two-socket servers, each with: 

      • 2 Intel Xeon Platinum 8470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors 

      • 1 TB DDR5 memory 

      • 4 NVIDIA H100 (Hopper) GPUs each with 94 GB HBM2e memory and NVIDIA NVLink 

      • 12.8 TB NVMe local storage 

      • Four NDR400 Infiniband HCAs supporting GPUDirect 

    • Analytics: 16 Dell PowerEdge R660 two-socket servers, each with: 

      • 2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors 

      • 128 GB HBM2e and 2 TB DDR5 memory 

      • 12.8 TB NVMe local storage 

      • NDR200 Infiniband 

    • Login nodes: 4 Dell PowerEdge R660 two-socket servers, each with: 

      • 2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors 

      • 128 GB HBM and 1 TB DDR5 memory 

      • 3.2 TB NVMe local storage 

      • NDR200 Infiniband  

      • IP address: TBD 

    • ~10.5 PF Theoretical system peak performance  

      • ~8 PetaFLOPs (GPU) 

      • ~2.5 PetaFLOPS (CPU) 

    • 9 physical racks, plus two Coolant Distribution Units (CDUs) providing direct-to-the-chip liquid cooling for all nodes


    Technical Specifications

    The following are technical specifications for Cardinal.  

    Number of Nodes

    378 nodes

    Number of CPU Sockets

    756 (2 sockets/node for all nodes)

    Number of CPU Cores

    39,312

    Cores Per Node

    104 cores/node for all nodes (96 usable)

    Local Disk Space Per Node
    • 1.6 TB for compute nodes
    • 12.8 TB for GPU and Large mem nodes
    • 3.2 TB for login nodes
    Compute, Large Mem & Login Node CPU Specifications
    Intel Xeon CPU Max 9470 HBM2e (Sapphire Rapids)
    • 2.0 GHz
    • 52 cores per processor (48 usable)
    GPU Node CPU Specifications
    Intel Xeon Platinum 8470 (Sapphire Rapids)
    • 2.0 GHz
    • 52 cores per processor
    Server Specifications
    • 326 Dell PowerEdge C6620
    • 32 Dell PowerEdge XE9640 (GPU nodes)
    • 20 Dell PowerEdge R660 (largemem & login nodes)
    Accelerator Specifications

    NVIDIA H100 (Hopper) GPUs each with 96 GB HBM2e memory and NVIDIA NVLINK

    Number of Accelerator Nodes

    32 quad GPU nodes (4 GPUs per node)

    Total Memory

    ~281 TB (44 TB HBM, 237 TB DDR5)

    Memory Per Node
    • 128 GB HBM / 512 GB DDR5 (compute nodes)
    • 1 TB (GPU nodes)
    • 128 GB HBM / 2 TB DDR5 (large mem nodes)
    • 128 GB HBM / 1 TB DDR5 (login nodes)
    Memory Per Core
    • 1.2 GB HBM / 4.9 GB DDR5 (compute nodes)
    • 9.8 GB (GPU nodes)
    • 1.2 GB HBM / 19.7 GB DDR5 (large mem nodes)
    • 1.2 GB HBM / 9.8 GB DDR5 (login nodes)
    Interconnect
    • NDR200 Infiniband (200 Gbps) (compute, large mem, login nodes)
    • 4x NDR400 Infiniband (400 Gbps x 4) with GPUDirect, allowing non-blocking communication between up to 10 nodes (GPU nodes)

    Owens

    TIP: Remember to check the menu to the right of the page for related pages with more information about Owens' specifics.

    OSC's Owens cluster, installed in 2016, is a Dell-built, Intel® Xeon® processor-based supercomputer.


    Hardware


    Detailed system specifications:

    The memory information below is the total memory (RAM) on the node; however, some of that is reserved for system use and not available to user processes.
    Please see the Owens batch limits page for details on the amount of memory available to users.
    • 824 Dell Nodes
    • Dense Compute
      • 648 compute nodes (Dell PowerEdge C6320 two-socket servers with Intel Xeon E5-2680 v4 (Broadwell, 14 cores, 2.40 GHz) processors, 128 GB memory)

    • GPU Compute

      • 160 ‘GPU ready’ compute nodes -- Dell PowerEdge R730 two-socket servers with Intel Xeon E5-2680 v4 (Broadwell, 14 cores, 2.40 GHz) processors, 128 GB memory

      • NVIDIA Tesla P100 (Pascal) GPUs -- 5.3 TF peak (double precision), 16 GB memory

    • Analytics

      • 16 huge memory nodes (Dell PowerEdge R930 four-socket server with Intel Xeon E5-4830 v3 (Haswell 12 core, 2.10 GHz) processors, 1,536 GB memory, 12 x 2 TB drives)

    • 23,392 total cores
      • 28 cores/node  & 128 GB of memory/node
    • Mellanox EDR (100 Gbps) Infiniband networking
    • Theoretical system peak performance
      • ~750 teraflops (CPU only)
    • 4 login nodes:
      • Intel Xeon E5-2680 (Broadwell) CPUs
      • 28 cores/node and 256 GB of memory/node
      • IP address: 192.148.247.[141-144]

    How to Connect

    • SSH Method

    To login to Owens at OSC, ssh to the following hostname:

    owens.osc.edu 
    

    You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

    ssh <username>@owens.osc.edu
    

    You may see a warning message including an SSH key fingerprint. Verify that the fingerprint in the message matches one of the SSH key fingerprints listed here, then type yes.

    From there, you are connected to an Owens login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests. We use control groups on login nodes to keep the login nodes stable. Please use batch jobs for any compute-intensive or memory-intensive work. See the following sections for details.

    • OnDemand Method

    You can also log in to Owens at OSC with our OnDemand tool. The first step is to log in to OnDemand. Then, once logged in, you can access Owens by clicking on "Clusters" and then selecting ">_Owens Shell Access".

    Instructions on how to connect to OnDemand can be found at the OnDemand documentation page.

    File Systems

    Owens accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the old clusters. Full details of the storage environment are available in our storage environment guide.

    Software Environment

    The module system is used to manage the software environment on Owens. Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider. By default, you will have the batch scheduling software modules, the Intel compiler and an appropriate version of mvapich2 loaded.

    You can keep up to date on the software packages that have been made available on Owens by viewing the Software by System page and selecting the Owens system.

    Compiling Code to Use Advanced Vector Extensions (AVX2)

    The Haswell and Broadwell processors that make up Owens support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use.

    In our experience, the Intel and PGI compilers do a much better job than the GNU compilers at optimizing HPC code.

    With the Intel compilers, use -xHost and -O2 or higher. With the GNU compilers, use -march=native and -O3. The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.

    This advice assumes that you are building and running your code on Owens. The executables will not be portable.  Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.
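
    As a quick illustration (hello.c is a placeholder source file), the serial compile lines corresponding to this advice are:

    icc -O2 -xHost hello.c                  # Intel
    gcc -O3 -march=native hello.c           # GNU
    pgcc -fast hello.c                      # PGI (highest instruction set by default)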

    See the Owens Programming Environment page for details.

    Batch Specifics

    Refer to the documentation for our batch environment to understand how to use the batch system on OSC hardware. Some specifics you will need to know to create well-formed batch scripts:

    • Most compute nodes on Owens have 28 cores/processors per node.  Huge-memory (analytics) nodes have 48 cores/processors per node.
    • Jobs on Owens may request partial nodes.

    Using OSC Resources

    For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.


    Technical Specifications

    The following are technical specifications for Owens.  

    Number of Nodes

    824 nodes

    Number of CPU Sockets

    1,648 (2 sockets/node)

    Number of CPU Cores

    23,392 (28 cores/node)

    Cores Per Node

    28 cores/node (48 cores/node for Huge Mem Nodes)

    Local Disk Space Per Node

    ~1,500GB in /tmp

    Compute CPU Specifications
    Intel Xeon E5-2680 v4 (Broadwell) for compute
    • 2.4 GHz 
    • 14 cores per processor
    Computer Server Specifications
    • 648 Dell PowerEdge C6320
    • 160 Dell PowerEdge R730 (for accelerator nodes)
    Accelerator Specifications

    NVIDIA P100 "Pascal" GPUs 16GB memory

    Number of Accelerator Nodes

    160 total

    Total Memory
    ~ 127 TB
    Memory Per Node

    128 GB (1.5 TB for Huge Mem Nodes)

    Memory Per Core

    4.5 GB (31 GB for Huge Mem)

    Interconnect

    Mellanox EDR Infiniband Networking (100Gbps)

    Login Specifications
    4 Intel Xeon E5-2680 (Broadwell) CPUs
    • 28 cores/node and 256GB of memory/node
    Special Nodes
    16 Huge Memory Nodes
    • Dell PowerEdge R930 
    • 4 Intel Xeon E5-4830 v3 (Haswell)
      • 12 Cores
      • 2.1 GHz
    • 48 cores (12 cores/CPU)
    • 1.5 TB Memory
    • 12 x 2 TB Drive (20TB usable)

    Environment changes in Slurm migration

    As we migrate to Slurm from Torque/Moab, there will be necessary software environment changes.

    Decommissioning old MVAPICH2 versions

    Old MVAPICH2 versions, including mvapich2/2.1, mvapich2/2.2, and their variants, do not support Slurm well due to their age, so we will remove the following versions:

    • mvapich2/2.1
    • mvapich2/2.2, 2.2rc1, 2.2ddn1.3, 2.2ddn1.4, 2.2-debug, 2.2-gpu

    As a result, the following dependent software will not be available anymore.

    Unavailable Software Possible replacement
    amber/16 amber/18
    darshan/3.1.4 darshan/3.1.6
    darshan/3.1.5-pre1 darshan/3.1.6
    expresso/5.2.1 expresso/6.3
    expresso/6.1 expresso/6.3
    expresso/6.1.2 expresso/6.3
    fftw3/3.3.4 fftw3/3.3.5
    gamess/18Aug2016R1 gamess/30Sep2019R2
    gromacs/2016.4 gromacs/2018.2
    gromacs/5.1.2 gromacs/2018.2
    lammps/14May16 lammps/16Mar18
    lammps/31Mar17 lammps/16Mar18
    mumps/5.0.2 N/A (no current users)
    namd/2.11 namd/2.13
    nwchem/6.6 nwchem/6.8
    pnetcdf/1.7.0 pnetcdf/1.10.0
    siesta-par/4.0 siesta-par/4.0.2

    If you used one of the software packages listed above, we strongly recommend testing during the early user period. We have listed a possible replacement version that is close to the unavailable version; however, if possible, we recommend using the most recent versions available. You can find the available versions by running module spider {software name}. If you have any questions, please contact OSC Help.

    Miscellaneous cleanup on MPIs

    We are cleaning up miscellaneous MPI versions for which a better, compatible version is available. Since a compatible version exists, you should be able to use your applications without issues.

    Removed MPI versions                                   Compatible MPI versions
    mvapich2/2.3b, 2.3rc1, 2.3rc2, 2.3                     mvapich2/2.3.3
    mvapich2/2.3b-gpu, 2.3rc1-gpu, 2.3rc2-gpu, 2.3-gpu,    mvapich2-gdr/2.3.4
      2.3.1-gpu, mvapich2-gdr/2.3.1, 2.3.2, 2.3.3
    openmpi/1.10.5, openmpi/1.10                           openmpi/1.10.7, openmpi/1.10.7-hpcx
    openmpi/2.0, openmpi/2.0.3, openmpi/2.1.2              openmpi/2.1.6, openmpi/2.1.6-hpcx
    openmpi/4.0.2, openmpi/4.0.2-hpcx                      openmpi/4.0.3, openmpi/4.0.3-hpcx

    Software flag usage update for Licensed Software

    We have software flags that are required in job scripts for licensed software, such as ansys, abaqus, or schrodinger. With the Slurm migration, we updated the syntax and added extra software flags. It is very important that everyone follows the procedure below; if you don't use the software flags properly, jobs submitted by others can be affected.

    We require software flags only for in-demand software and software features, in order to prevent job failures due to insufficient licenses. When you use the software flags, Slurm records the usage in its license pool, so that other jobs launch only when enough licenses are available. This works correctly only when everyone uses the software flags.

    During the early user period, until Dec 15, 2020, the software flag system may not work correctly because licenses will be used from two separate Owens systems during the test period. However, we recommend that you test your job scripts with the new software flags so that you can use them without any issues after the Slurm migration.

    The new syntax for software flags is

    #SBATCH -L {software flag}@osc:N

    where N is the number of licenses requested. If you need more than one software flag, you can use

    #SBATCH -L {software flag1}@osc:N,{software flag2}@osc:M

    For example, if you need 2 abaqus and 2 abaqusextended license features, then you can use

    #SBATCH -L abaqus@osc:2,abaqusextended@osc:2

    We have the full list of software associated with software flags in the table below.

    Software             Software flag(s)
    abaqus               abaqus, abaquscae
    ansys                ansys, ansyspar
    comsol               comsolscript
    schrodinger          epik, glide, ligprep, macromodel, qikprop
    starccm              starccm, starccmpar
    stata                stata
    usearch              usearch
    ls-dyna, mpp-dyna    lsdyna

    Owens Programming Environment (PBS)

    This document is obsolete and is kept as a reference to the previous Owens programming environment. Please refer to the Owens Programming Environment page for the latest version.

    Compilers

    C, C++ and Fortran are supported on the Owens cluster. Intel, PGI and GNU compiler suites are available. The Intel development tool chain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below. See also our compilation guide.

    The Haswell and Broadwell processors that make up Owens support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use.

    In our experience, the Intel and PGI compilers do a much better job than the GNU compilers at optimizing HPC code.

    With the Intel compilers, use -xHost and -O2 or higher. With the GNU compilers, use -march=native and -O3. The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.

    This advice assumes that you are building and running your code on Owens. The executables will not be portable.  Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

    LANGUAGE INTEL EXAMPLE PGI EXAMPLE GNU EXAMPLE
    C icc -O2 -xHost hello.c pgcc -fast hello.c gcc -O3 -march=native hello.c
    Fortran 90 ifort -O2 -xHost hello.f90 pgf90 -fast hello.f90 gfortran -O3 -march=native hello.f90
    C++ icpc -O2 -xHost hello.cpp pgc++ -fast hello.cpp g++ -O3 -march=native hello.cpp

    Parallel Programming

    MPI

    OSC systems use the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed Infiniband interconnect. MPI is a standard library for performing parallel processing using a distributed-memory model. For more information on building your MPI codes, please visit the MPI Library documentation.

    Parallel programs are started with the mpiexec command. For example,

    mpiexec ./myprog

    The program to be run must either be in your path or have its path specified.

    The mpiexec command will normally spawn one MPI process per CPU core requested in a batch job. Use the -n and/or -ppn option to change that behavior.

    The table below shows some commonly used options. Use mpiexec -help for more information.

    MPIEXEC Option COMMENT
    -ppn 1 One process per node
    -ppn procs procs processes per node
    -n totalprocs
    -np totalprocs
    At most totalprocs processes per node
    -prepend-rank Prepend rank to output
    -help Get a list of available options

     

    Caution: There are many variations on mpiexec and mpiexec.hydra. Information found on non-OSC websites may not be applicable to our installation.
    The information above applies to the MVAPICH2 and IntelMPI installations at OSC. See the OpenMPI software page for mpiexec usage with OpenMPI.

    OpenMP

    The Intel, PGI and GNU compilers understand the OpenMP set of directives, which support multithreaded programming. For more information on building OpenMP codes on OSC systems, please visit the OpenMP documentation.

    GPU Programming

    160 Nvidia P100 GPUs are available on Owens.  Please visit our GPU documentation.


    Owens Programming Environment

    Compilers

    C, C++ and Fortran are supported on the Owens cluster. Intel, PGI and GNU compiler suites are available. The Intel development tool chain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below. See also our compilation guide.

    The Haswell and Broadwell processors that make up Owens support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use.

    In our experience, the Intel and PGI compilers do a much better job than the GNU compilers at optimizing HPC code.

    With the Intel compilers, use -xHost and -O2 or higher. With the GNU compilers, use -march=native and -O3. The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.

    This advice assumes that you are building and running your code on Owens. The executables will not be portable.  Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

    LANGUAGE INTEL GNU PGI
    C icc -O2 -xHost hello.c gcc -O3 -march=native hello.c pgcc -fast hello.c
    Fortran 77/90 ifort -O2 -xHost hello.F gfortran -O3 -march=native hello.F pgfortran -fast hello.F
    C++ icpc -O2 -xHost hello.cpp g++ -O3 -march=native hello.cpp pgc++ -fast hello.cpp

    Parallel Programming

    MPI

    OSC systems use the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed Infiniband interconnect. MPI is a standard library for performing parallel processing using a distributed-memory model. For more information on building your MPI codes, please visit the MPI Library documentation.

    MPI programs are started with the srun command. For example,

    #!/bin/bash
    #SBATCH --nodes=2
    
    srun [ options ] mpi_prog
    Note: the program to be run must either be in your path or have its path specified.

    The srun command will normally spawn one MPI process per task requested in a Slurm batch job. Use the -n ntasks and/or --ntasks-per-node=n option to change that behavior. For example,

    #!/bin/bash
    #SBATCH --nodes=2
    
    # Use the maximum number of CPUs of two nodes
    srun ./mpi_prog
    
    # Run 8 processes per node
    srun -n 16 --ntasks-per-node=8  ./mpi_prog
    

    The table below shows some commonly used options. Use srun -help for more information.

    OPTION COMMENT
    -n, --ntasks=ntasks total number of tasks to run
    --ntasks-per-node=n number of tasks to invoke on each node
    -help Get a list of available options
    Note: The information above applies to the MVAPICH2, Intel MPI and OpenMPI installations at OSC. 
    Caution: mpiexec or mpiexec.hydra is still supported with Intel MPI and OpenMPI, but it is not fully compatible with our Slurm environment. We recommend using srun in all circumstances.

    OpenMP

    The Intel, GNU and PGI compilers understand the OpenMP set of directives, which support multithreaded programming. For more information on building OpenMP codes on OSC systems, please visit the OpenMP documentation.

    An OpenMP program by default will use a number of threads equal to the number of CPUs requested in a Slurm batch job. To use a different number of threads, set the environment variable OMP_NUM_THREADS. For example,

    #!/bin/bash
    #SBATCH --ntasks=8
    
    # Run 8 threads
    ./omp_prog
    
    # Run 4 threads
    export OMP_NUM_THREADS=4
    ./omp_prog
    

    To run a OpenMP job on an exclusive node:

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --exclusive
    
    export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE
    ./omp_prog
    

    Interactive job only

    See the section on interactive batch in batch job submission for details on submitting an interactive job to the cluster.

    Hybrid (MPI + OpenMP)

    An example of running a job for hybrid code:

    #!/bin/bash
    #SBATCH --nodes=2
    
    # Run 4 MPI processes on each node and 7 OpenMP threads spawned from a MPI process
    export OMP_NUM_THREADS=7
    srun -n 8 -c 7 --ntasks-per-node=4 ./hybrid_prog
    

     

    Tuning Parallel Program Performance: Process/Thread Placement

    To get the maximum performance, it is important to make sure that processes/threads are located as close as possible to their data, and as close as possible to each other if they need to work on the same piece of data, given the arrangement of nodes, sockets, and cores, each with different access to RAM and caches.

    When cache and memory contention between threads/processes is an issue, it is usually best to use a scatter distribution for the code.

    Processes and threads are placed differently depending on the computing resources you request and the compiler and MPI implementation used to compile your code. For the former, see the above examples to learn how to run a job on exclusive nodes. For the latter, this section summarizes the default behavior and how to modify placement.

    OpenMP only

    For all three compilers (Intel, GNU, PGI), purely threaded codes do not bind to particular CPU cores by default. In other words, it is possible that multiple threads are bound to the same CPU core.

    The following describes how to modify the default placements for purely threaded code:

    Compact distribution (place threads as closely as possible on sockets):
    • Intel: KMP_AFFINITY=compact
    • GNU: OMP_PLACES=sockets[1]
    • PGI[2]: MP_BIND=yes and MP_BLIST="$(seq -s, 0 2 27),$(seq -s, 1 2 27)"

    Scatter/cyclic distribution (distribute threads as evenly as possible across sockets):
    • Intel: KMP_AFFINITY=scatter
    • GNU: OMP_PROC_BIND=spread/close
    • PGI[2]: MP_BIND=yes

    1. Threads in the same socket might be bound to the same CPU core.
    2. The PGI LLVM backend (version 19.1 and later) does not support thread/processor affinity on NUMA architectures. To enable this feature, compile threaded code with -Mnollvm to use the proprietary backend.

    MPI Only

    For MPI-only codes, MVAPICH2 first binds as many processes as possible on one socket, then allocates the remaining processes on the second socket so that consecutive tasks are near each other. Intel MPI and OpenMPI bind processes to socket 1, socket 2, socket 1, socket 2, and so on, i.e., a cyclic distribution.

    For process distribution across nodes, all MPI implementations first bind as many processes as possible on one node, then allocate the remaining processes on the second node.

    The following describes how to modify the default placements on a single node for MPI-only code launched with srun:

    Compact distribution (place processes as closely as possible on sockets):
    • MVAPICH2[1]: default
    • Intel MPI: srun --cpu-bind="map_cpu:$(seq -s, 0 2 27),$(seq -s, 1 2 27)"
    • OpenMPI: srun --cpu-bind="map_cpu:$(seq -s, 0 2 27),$(seq -s, 1 2 27)"

    Scatter/cyclic distribution (distribute processes as evenly as possible across sockets):
    • MVAPICH2[1]: MV2_CPU_BINDING_POLICY=scatter
    • Intel MPI: default
    • OpenMPI: default

    1. MV2_CPU_BINDING_POLICY will not work if MV2_ENABLE_AFFINITY=0 is set.

    To distribute processes evenly across nodes, please set SLURM_DISTRIBUTION=cyclic.

    Hybrid (MPI + OpenMP)

    For hybrid codes, each MPI process is allocated OMP_NUM_THREADS cores, and the threads of each process are bound to those cores. All MPI processes (as well as the threads bound to each process) behave as described in the previous sections, which means the threads spawned from an MPI process might be bound to the same core. To change the default process/thread placements, please refer to the settings above.

    Summary

    The above tables list the most commonly used settings for process/thread placement. Some compilers and Intel libraries may have additional options for process and thread placement beyond those mentioned on this page. For more information on a specific compiler/library, check the more detailed documentation for that library.

    GPU Programming

    160 Nvidia P100 GPUs are available on Owens.  Please visit our GPU documentation.


    Citation

    For more information about citations of OSC, visit https://www.osc.edu/citation.

    To cite Owens, please use the following Archival Resource Key:

    ark:/19495/hpc6h5b1

    Please adjust this citation to fit the citation style guidelines required.

    Ohio Supercomputer Center. 2016. Owens Supercomputer. Columbus, OH: Ohio Supercomputer Center. http://osc.edu/ark:/19495/hpc6h5b1

    Here is the citation in BibTeX format:

    @misc{Owens2016,
    ark = {ark:/19495/hpc6h5b1},
    url = {http://osc.edu/ark:/19495/hpc6h5b1},
    year  = {2016},
    author = {Ohio Supercomputer Center},
    title = {Owens Supercomputer}
    }
    

    And in EndNote format:

    %0 Generic
    %T Owens Supercomputer
    %A Ohio Supercomputer Center
    %R ark:/19495/hpc6h5b1
    %U http://osc.edu/ark:/19495/hpc6h5b1
    %D 2016
    

    Here is an .ris file to better suit your needs. Please change the import option to .ris.


    Owens SSH key fingerprints

    These are the public key fingerprints for Owens:
    owens: ssh_host_rsa_key.pub = 18:68:d4:b0:44:a8:e2:74:59:cc:c8:e3:3a:fa:a5:3f
    owens: ssh_host_ed25519_key.pub = 1c:3d:f9:99:79:06:ac:6e:3a:4b:26:81:69:1a:ce:83
    owens: ssh_host_ecdsa_key.pub = d6:92:d1:b0:eb:bc:18:86:0c:df:c5:48:29:71:24:af


    These are the SHA256 hashes:​
    owens: ssh_host_rsa_key.pub = SHA256:vYIOstM2e8xp7WDy5Dua1pt/FxmMJEsHtubqEowOaxo
    owens: ssh_host_ed25519_key.pub = SHA256:FSb9ZxUoj5biXhAX85tcJ/+OmTnyFenaSy5ynkRIgV8
    owens: ssh_host_ecdsa_key.pub = SHA256:+fqAIqaMW/DUJDB0v/FTxMT9rkbvi/qVdMKVROHmAP4


    Batch Limit Rules

    Memory Limit:

    A small portion of the total physical memory on each node is reserved for distributed processes.  The actual physical memory available to user jobs is tabulated below.

    Summary

    Node type         Default and max memory per core   Max memory per node
    regular compute   4.214 GB                          117 GB
    huge memory       31.104 GB                         1492 GB
    gpu               4.214 GB                          117 GB
    A job may request more than the max memory per core, but the job will be allocated more cores to satisfy the memory request instead of just more memory.
    e.g. The following slurm directives will actually grant this job 3 cores, with 10 GB of memory
    (since 2 cores * 4.2 GB = 8.4 GB doesn't satisfy the memory request).
    #SBATCH --ntasks=2
    #SBATCH --mem=10g

    It is recommended to let the default memory apply unless more control over memory is needed.
    Note that if an entire node is requested, then the job is automatically granted the entire node's main memory. On the other hand, if a partial node is requested, then memory is granted based on the default memory per core.

    See a more detailed explanation below.

    Regular Dense Compute Node

    On Owens, the default allocation equates to 4,315 MB/core or 120,820 MB/node (117.98 GB/node) for the regular dense compute nodes.

    If your job requests less than a full node ( ntasks< 28 ), it may be scheduled on a node with other running jobs. In this case, your job is entitled to a memory allocation proportional to the number of cores requested (4315 MB/core).  For example, without any memory request ( mem=XXMB ), a job that requests  --nodes=1 --ntasks=1 will be assigned one core and should use no more than 4315 MB of RAM, a job that requests  --nodes=1 --ntasks=3  will be assigned 3 cores and should use no more than 3*4315 MB of RAM, and a job that requests --nodes=1 --ntasks=28 will be assigned the whole node (28 cores) with 118 GB of RAM.  

    Here is some information for when you include a memory request (mem=XX) in your job. A job that requests --nodes=1 --ntasks=1 --mem=12GB will be assigned three cores and have access to 12 GB of RAM, and will be charged for 3 cores worth of usage (in other words, the --ntasks request is ignored). A job that requests --nodes=1 --ntasks=5 --mem=12GB will be assigned 5 cores but have access to only 12 GB of RAM, and will be charged for 5 cores worth of usage.

    A multi-node job (nodes > 1) will be assigned entire nodes with 118 GB/node and charged for the entire nodes regardless of the per-node task request. For example, a job that requests --nodes=10 --ntasks-per-node=1 will be charged for 10 whole nodes (28 cores/node * 10 nodes, which is 280 cores worth of usage).

    Huge Memory Node

    Beginning on Tuesday, March 10th, 2020, users are able to run jobs using less than a full huge memory node. Please read the following instructions carefully before requesting a huge memory node on Owens. 

    On Owens, it equates to 31,850 MB/core or 1,528,800 MB/node (1,492.96 GB/node) for a huge memory node.

    To request no more than a full huge memory node, you have two options:

    • The first option is to specify a memory request between 120,832 MB (118 GB) and 1,528,800 MB (1,492.96 GB), i.e., 120832MB <= mem <= 1528800MB (118 GB <= mem < 1493 GB). Note: you can only use integers for the request.
    • The other option is to use the combination of --ntasks-per-node and --partition, like --ntasks-per-node=4 --partition=hugemem (see the example below). When no memory is specified for the huge memory node, your job is entitled to a memory allocation proportional to the number of cores requested (31,850 MB/core). Note: --ntasks-per-node should be no less than 4 and no more than 48.
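
    As a minimal sketch of the second option (the walltime and program name are placeholders):

    #!/bin/bash
    #SBATCH --time=5:00:00
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --partition=hugemem

    ./my_prog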

    To manage and monitor your memory usage, please refer to Out-of-Memory (OOM) or Excessive Memory Usage.

    GPU Jobs

    There is only one GPU per GPU node on Owens.

    For serial jobs, we allow node sharing on GPU nodes so a job may request any number of cores (up to 28)

    (--nodes=1 --ntasks=XX --gpus-per-node=1)

    For parallel jobs (n>1), we do not allow node sharing. 

    See this GPU computing page for more information. 

    Partition time and job size limits

    Here are the partitions available on Owens:

    Name              Max time limit (dd-hh:mm:ss)   Min job size   Max job size   Notes
    serial            7-00:00:00                     1 core         1 node
    longserial        14-00:00:00                    1 core         1 node         Restricted access (contact OSC Help if you need access)
    parallel          4-00:00:00                     2 nodes        81 nodes
    gpuserial         7-00:00:00                     1 core         1 node
    gpuparallel       4-00:00:00                     2 nodes        8 nodes
    hugemem           7-00:00:00                     1 core         1 node
    hugemem-parallel  4-00:00:00                     2 nodes        16 nodes       Restricted access (contact OSC Help if you need access)
    debug             1:00:00                        1 core         2 nodes        For small interactive and test jobs
    gpudebug          1:00:00                        1 core         2 nodes        For small interactive and test GPU jobs
    To specify a partition for a job, either add the flag --partition=<partition-name> to the sbatch command at submission time or add this line to the job script:
    #SBATCH --partition=<partition-name>

    To access one of the restricted queues, please contact OSC Help. Generally, access will only be granted to these queues if the performance of the job cannot be improved, and job size cannot be reduced by splitting or checkpointing the job.

    Job/Core Limits

• Individual User: up to 384 concurrently running jobs for all job types (132 for GPU jobs, 4 for regular debug jobs, 4 for GPU debug jobs) and up to 3080 cores/processors in use
• Project/Group: up to 576 concurrently running jobs for all job types (132 for GPU jobs) and up to 3080 cores/processors in use

    An individual user can have up to the max concurrently running jobs and/or up to the max processors/cores in use.

    However, among all the users in a particular group/project, they can have up to the max concurrently running jobs and/or up to the max processors/cores in use.

    A user may have no more than 1000 jobs submitted to both the parallel and serial job queue separately.
    Supercomputer: 
    Service: 

    Pitzer

    TIP: Remember to check the menu to the right of the page for related pages with more information about Pitzer's specifics.

    OSC's original Pitzer cluster was installed in late 2018 and is a Dell-built, Intel® Xeon® 'Skylake' processor-based supercomputer with 260 nodes.

In September 2020, OSC installed an additional 398 Intel® Xeon® 'Cascade Lake' processor-based nodes as part of a Pitzer Expansion cluster.

    pitzer-new.jpg

    Hardware

    Photo of Pitzer Cluster

    Detailed system specifications:

      Deployed in 2018 Deployed in 2020 Total
    Total Compute Nodes 260 Dell nodes 398 Dell nodes 658 Dell nodes
    Total CPU Cores 10,560 total cores 19,104 total cores 29,664 total cores
    Standard Dense Compute Nodes

224 nodes

    • Dual Intel Xeon 6148s Skylakes
    • 40 cores per node @ 2.4 GHz
    • 192 GB memory
    • 1 TB disk space
    340 nodes
    • Dual Intel Xeon 8268s Cascade Lakes
    • 48 cores per node @ 2.9 GHz
    • 192 GB memory 
    • 1 TB disk space
    564 nodes
    Dual GPU Compute Nodes 32 nodes
    • Dual Intel Xeon 6148s
    • Dual NVIDIA Volta V100 w/ 16 GB GPU memory
    • 40 cores per node @ 2.4 GHz
    • 384 GB memory
    • 1 TB disk space
    42 nodes
    • Dual Intel Xeon 8268s 
    • Dual NVIDIA Volta V100 w/32 GB GPU memory
    • 48 cores per node @ 2.9 GHz
    • 384 GB memory
    • 1 TB disk space
    74 dual GPU nodes
    Quad GPU Compute Nodes N/A 4 nodes 
    • Dual Intel Xeon 8260s Cascade Lakes
    • Quad NVIDIA Volta V100s w/32 GB GPU memory and NVLink
    • 48 cores per node @ 2.4 GHz
    • 768 GB memory
    • 4 TB disk space
    4 quad GPU nodes
    Large Memory Compute Nodes 4 nodes
    • Quad Processor Intel Xeon 6148 Skylakes
    • 80 cores per node @ 2.4 GHz
    • 3 TB memory
    • 1 TB disk space
    12 nodes
    • Dual Intel Xeon 8268 Cascade Lakes
    • 48 cores per node @ 2.9 GHz
    • 768 GB memory
    • 0.5 TB disk space
    16 nodes
    Interactive Login Nodes

    4 nodes

    • Dual Intel Xeon 6148s
    • 368 GB memory
    • IP address: 192.148.247.[176-179]
    4 nodes
    InfiniBand High-Speed Network Mellanox EDR (100 Gbps) Infiniband networking Mellanox EDR (100 Gbps) Infiniband networking  
Theoretical Peak Performance
• Deployed in 2018: ~850 TFLOPS (CPU only), ~450 TFLOPS (GPU only), ~1300 TFLOPS (total)
• Deployed in 2020: ~1900 TFLOPS (CPU only), ~700 TFLOPS (GPU only), ~2600 TFLOPS (total)
• Total: ~2750 TFLOPS (CPU only), ~1150 TFLOPS (GPU only), ~3900 TFLOPS (total)

    How to Connect

    • SSH Method

    To login to Pitzer at OSC, ssh to the following hostname:

    pitzer.osc.edu 
    

    You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

    ssh <username>@pitzer.osc.edu
    

    You may see a warning message including SSH key fingerprint. Verify that the fingerprint in the message matches one of the SSH key fingerprints listed here, then type yes.

    From there, you are connected to the Pitzer login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests. We use control groups on login nodes to keep the login nodes stable. Please use batch jobs for any compute-intensive or memory-intensive work. See the following sections for details.

    • OnDemand Method

    You can also login to Pitzer at OSC with our OnDemand tool. The first step is to log into OnDemand. Then once logged in you can access Pitzer by clicking on "Clusters", and then selecting ">_Pitzer Shell Access".

    Instructions on how to connect to OnDemand can be found at the OnDemand documentation page.

    File Systems

    Pitzer accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the old clusters. Full details of the storage environment are available in our storage environment guide.

    Software Environment

    The module system on Pitzer is the same as on the Owens and Ruby systems. Use  module load <package>  to add a software package to your environment. Use  module list  to see what modules are currently loaded and  module avail  to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use  module spider . By default, you will have the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 loaded.
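For example, the following commands illustrate typical module usage (cuda is used here only as an example package name):

module list          # show currently loaded modules
module avail         # list modules available to load
module spider cuda   # search all versions of a package
module load cuda     # load the default version of that package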

You can keep up to date on the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

    Compiling Code to Use Advanced Vector Extensions (AVX2)

The Skylake processors that make up Pitzer support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use.

    In our experience, the Intel and PGI compilers do a much better job than the gnu compilers at optimizing HPC code.

    With the Intel compilers, use -xHost and -O2 or higher. With the gnu compilers, use -march=native and -O3 . The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.
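For instance, a serial C program could be compiled as follows (hello.c is a placeholder source file, matching the examples on the programming environment page):

icc -O2 -xHost hello.c        # Intel
gcc -O3 -march=native hello.c # GNU
pgcc -fast hello.c            # PGI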

    This advice assumes that you are building and running your code on Pitzer. The executables will not be portable.  Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

    See the Pitzer Programming Environment page for details.

    Batch Specifics

On September 22, 2020, OSC switched to Slurm for job scheduling and resource management on the Pitzer Cluster.

    Refer to this Slurm migration page to understand how to use Slurm on the Pitzer cluster. Some specifics you will need to know to create well-formed batch scripts:

• OSC enables the PBS compatibility layer provided by Slurm, so PBS batch scripts that worked in the previous Torque/Moab environment mostly still work in Slurm.
• Pitzer is a heterogeneous system with mixed types of CPUs after the expansion, as shown in the table above. Please be cautious when requesting resources on Pitzer and check this page for more detailed discussion.
    • Jobs on Pitzer may request partial nodes.  

    Using OSC Resources

    For more information about how to use OSC resources, please see our guide on batch processing at OSC and Slurm migration. For specific information about modules and file storage, please see the Batch Execution Environment page.

    Technical Specifications

    Login Specifications
    4 Intel Xeon Gold 6148 (Skylake) CPUs
    • 40 cores/node and 384 GB of memory/node

    Technical specifications for 2018 Pitzer:  

    Number of Nodes

    260 nodes

    Number of CPU Sockets

    528 (2 sockets/node for standard node)

    Number of CPU Cores

    10,560 (40 cores/node for standard node)

    Cores Per Node

    40 cores/node (80 cores/node for Huge Mem Nodes)

    Local Disk Space Per Node

    850 GB in /tmp

    Compute CPU Specifications
    Intel Xeon Gold 6148 (Skylake) for compute
    • 2.4 GHz 
    • 20 cores per processor
    Computer Server Specifications
    • 224 Dell PowerEdge C6420
    • 32 Dell PowerEdge R740 (for accelerator nodes)
    • 4 Dell PowerEdge R940
    Accelerator Specifications

    NVIDIA V100 "Volta" GPUs 16GB memory

    Number of Accelerator Nodes

    32 total (2 GPUs per node)

    Total Memory

    ~67 TB

    Memory Per Node
    • 192 GB for standard nodes
    • 384 GB for accelerator nodes
    • 3 TB for Huge Mem Nodes
    Memory Per Core
    • 4.8 GB for standard nodes
    • 9.6 GB for accelerator nodes
    • 76.8 GB for Huge Mem
    Interconnect

    Mellanox EDR Infiniband Networking (100Gbps)

      Special Nodes
      4 Huge Memory Nodes
      • Dell PowerEdge R940 
      • 4 Intel Xeon Gold 6148 (Skylake)
        • 20 Cores
        • 2.4 GHz
      • 80 cores (20 cores/CPU)
      • 3 TB Memory
      • 2x Mirror 1 TB Drive (1 TB usable)

       

      Technical specifications for 2020 Pitzer:

      Number of Nodes

      398 nodes

      Number of CPU Sockets

      796 (2 sockets/node for all nodes)

      Number of CPU Cores

      19,104 (48 cores/node for all nodes)

      Cores Per Node

      48 cores/node for all nodes

      Local Disk Space Per Node
      • 1 TB for most nodes
      • 4 TB for quad GPU
      • 0.5 TB for large mem
      Compute CPU Specifications
      Intel Xeon 8268s Cascade Lakes for most compute
      • 2.9 GHz 
      • 24 cores per processor
      Computer Server Specifications
      • 352 Dell PowerEdge C6420
      • 42 Dell PowerEdge R740 (for dual GPU nodes)
      • 4 Dell Poweredge c4140 (for quad GPU nodes)
      Accelerator Specifications
      • NVIDIA V100 "Volta" GPUs 32GB memory for dual GPU
      • NVIDIA V100 "Volta" GPUs 32GB memory and NVLink for quad GPU
      Number of Accelerator Nodes
      • 42 dual GPU nodes (2 GPUs per node)
      • 4 quad GPU nodes (4 GPUs per node)
      Total Memory

      ~95 TB

      Memory Per Node
      • 192 GB for standard nodes
      • 384 GB for dual GPU nodes
      • 768 GB for quad and Large Mem Nodes
      Memory Per Core
      • 4.0 GB for standard nodes
      • 8.0 GB for dual GPU nodes
      • 16.0 GB for quad and Large Mem Nodes
      Interconnect

      Mellanox EDR Infiniband Networking (100Gbps)

      Special Nodes
      4 quad GPU Nodes
      • Dual Intel Xeon 8260s Cascade Lakes
      • Quad NVIDIA Volta V100s w/32GB GPU memory and NVLink
      • 48 cores per node @ 2.4GHz
      • 768GB memory
      • 4 TB disk space
      12 Large Memory Nodes
      • Dual Intel Xeon 8268 Cascade Lakes
      • 48 cores per node @ 2.9GHz
      • 768GB memory
      • 0.5 TB disk space
      Supercomputer: 

      Pitzer Programming Environment (PBS)

This document is obsolete and is kept as a reference to the previous Pitzer programming environment. Please refer here for the latest version.

      Compilers

      C, C++ and Fortran are supported on the Pitzer cluster. Intel, PGI and GNU compiler suites are available. The Intel development tool chain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below. See also our compilation guide.

The Skylake processors that make up Pitzer support the Advanced Vector Extensions (AVX512) instruction set, but you must set the correct compiler flags to take advantage of it. AVX512 has the potential to speed up your code by a factor of 8 or more, depending on the compiler and options you would otherwise use. However, bear in mind that clock speeds decrease as the level of the instruction set increases. So, if your code does not benefit from vectorization it may be beneficial to use a lower instruction set.

      In our experience, the Intel and PGI compilers do a much better job than the GNU compilers at optimizing HPC code.

      With the Intel compilers, use -xHost and -O2 or higher. With the GNU compilers, use -march=native and -O3. The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.

This advice assumes that you are building and running your code on Pitzer. The executables will not be portable. Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

      LANGUAGE INTEL EXAMPLE PGI EXAMPLE GNU EXAMPLE
      C icc -O2 -xHost hello.c pgcc -fast hello.c gcc -O3 -march=native hello.c
      Fortran 90 ifort -O2 -xHost hello.f90 pgf90 -fast hello.f90 gfortran -O3 -march=native hello.f90
      C++ icpc -O2 -xHost hello.cpp pgc++ -fast hello.cpp g++ -O3 -march=native hello.cpp

      Parallel Programming

      MPI

      OSC systems use the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed Infiniband interconnect. MPI is a standard library for performing parallel processing using a distributed-memory model. For more information on building your MPI codes, please visit the MPI Library documentation.

      Parallel programs are started with the mpiexec command. For example,

      mpiexec ./myprog

      The program to be run must either be in your path or have its path specified.

      The mpiexec command will normally spawn one MPI process per CPU core requested in a batch job. Use the -n and/or -ppn option to change that behavior.

      The table below shows some commonly used options. Use mpiexec -help for more information.

      MPIEXEC OPTION COMMENT
      -ppn 1 One process per node
      -ppn procs procs processes per node
-n totalprocs or -np totalprocs  At most totalprocs processes per node
      -prepend-rank Prepend rank to output
      -help Get a list of available options
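For example, combining these options with the earlier myprog example:

mpiexec -ppn 1 ./myprog            # one MPI process per node
mpiexec -prepend-rank ./myprog     # default process count, rank prepended to output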

       

      Caution: There are many variations on mpiexec and mpiexec.hydra. Information found on non-OSC websites may not be applicable to our installation.
      The information above applies to the MVAPICH2 and IntelMPI installations at OSC. See the OpenMPI software page for mpiexec usage with OpenMPI.

      OpenMP

      The Intel, PGI and GNU compilers understand the OpenMP set of directives, which support multithreaded programming. For more information on building OpenMP codes on OSC systems, please visit the OpenMP documentation.

       

      Process/Thread placement

      Processes and threads are placed differently depending on the compiler and MPI implementation used to compile your code. This section summarizes the default behavior and how to modify placement.

      For all three compilers (Intel, GNU, PGI), purely threaded codes do not bind to particular cores by default.

For MPI-only codes, Intel MPI first binds the first half of processes to one socket, and then the second half to the second socket so that consecutive tasks are located near each other. MVAPICH2 first binds as many processes as possible on one socket, then allocates the remaining processes on the second socket so that consecutive tasks are near each other. OpenMPI alternately binds processes on socket 1, socket 2, socket 1, socket 2, etc., with no particular order for the core id.

For hybrid codes, Intel MPI first binds the first half of processes to one socket, and then the second half to the second socket so that consecutive tasks are located near each other. Each process is allocated ${OMP_NUM_THREADS} cores and the threads of each process are bound to those cores. MVAPICH2 allocates ${OMP_NUM_THREADS} cores for each process and each thread of a process is placed on a separate core. By default, OpenMPI behaves the same for hybrid codes as it does for MPI-only codes, allocating a single core for each process and all threads of that process.

      The following tables describe how to modify the default placements for each type of code.

      OpenMP options:

OPTION INTEL GNU PGI DESCRIPTION
      Scatter KMP_AFFINITY=scatter OMP_PLACES=cores OMP_PROC_BIND=close/spread MP_BIND=yes Distribute threads as evenly as possible across system
      Compact KMP_AFFINITY=compact OMP_PLACES=sockets MP_BIND=yes MP_BLIST="0,2,4,6,8,10,1,3,5,7,9" Place threads as closely as possible on system

       

      MPI options:

OPTION INTEL MVAPICH2 OPENMPI DESCRIPTION
      Scatter I_MPI_PIN_DOMAIN=core I_MPI_PIN_ORDER=scatter MV2_CPU_BINDING_POLICY=scatter -map-by core --rank-by socket:span Distribute processes as evenly as possible across system
      Compact I_MPI_PIN_DOMAIN=core I_MPI_PIN_ORDER=compact MV2_CPU_BINDING_POLICY=bunch -map-by core

      Distribute processes as closely as possible on system

       

      Hybrid MPI+OpenMP options (combine with options from OpenMP table for thread affinity within cores allocated to each process):

      OPTION INTEL MVAPICH2 OPENMPI DESCRIPTION
Scatter I_MPI_PIN_DOMAIN=omp I_MPI_PIN_ORDER=scatter MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=linear -map-by node:PE=$OMP_NUM_THREADS --bind-to core --rank-by socket:span Distribute processes as evenly as possible across system ($OMP_NUM_THREADS cores per process)
      Compact I_MPI_PIN_DOMAIN=omp I_MPI_PIN_ORDER=compact MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=spread -map-by node:PE=$OMP_NUM_THREADS --bind-to core Distribute processes as closely as possible on system ($OMP_NUM_THREADS cores per process)

       

       

      The above tables list the most commonly used settings for process/thread placement. Some compilers and Intel libraries may have additional options for process and thread placement beyond those mentioned on this page. For more information on a specific compiler/library, check the more detailed documentation for that library.

      GPU Programming

      64 Nvidia V100 GPUs are available on Pitzer.  Please visit our GPU documentation.

       
       
       
      Supercomputer: 

      Pitzer Programming Environment

      Compilers

      C, C++ and Fortran are supported on the Pitzer cluster. Intel, PGI and GNU compiler suites are available. The Intel development tool chain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below. See also our compilation guide.

The Skylake and Cascade Lake processors that make up Pitzer support the Advanced Vector Extensions (AVX512) instruction set, but you must set the correct compiler flags to take advantage of it. AVX512 has the potential to speed up your code by a factor of 8 or more, depending on the compiler and options you would otherwise use. However, bear in mind that clock speeds decrease as the level of the instruction set increases. So, if your code does not benefit from vectorization it may be beneficial to use a lower instruction set.

      In our experience, the Intel compiler usually does the best job of optimizing numerical codes and we recommend that you give it a try if you’ve been using another compiler.

      With the Intel compilers, use -xHost and -O2 or higher. With the GNU compilers, use -march=native and -O3. The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.

This advice assumes that you are building and running your code on Pitzer. The executables will not be portable. Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

      LANGUAGE INTEL GNU PGI
      C icc -O2 -xHost hello.c gcc -O3 -march=native hello.c pgcc -fast hello.c
      Fortran 77/90 ifort -O2 -xHost hello.F gfortran -O3 -march=native hello.F pgfortran -fast hello.F
      C++ icpc -O2 -xHost hello.cpp g++ -O3 -march=native hello.cpp pgc++ -fast hello.cpp

      Parallel Programming

      MPI

      OSC systems use the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed Infiniband interconnect. MPI is a standard library for performing parallel processing using a distributed-memory model. For more information on building your MPI codes, please visit the MPI Library documentation.

      MPI programs are started with the srun command. For example,

      #!/bin/bash
      #SBATCH --nodes=2
      
      srun [ options ] mpi_prog
      Note: the program to be run must either be in your path or have its path specified.

      The srun command will normally spawn one MPI process per task requested in a Slurm batch job. Use the -n ntasks and/or --ntasks-per-node=n option to change that behavior. For example,

      #!/bin/bash
      #SBATCH --nodes=2
      
      # Use the maximum number of CPUs of two nodes
      srun ./mpi_prog
      
      # Run 8 processes per node
      srun -n 16 --ntasks-per-node=8  ./mpi_prog
      

      The table below shows some commonly used options. Use srun -help for more information.

      OPTION COMMENT
      -n, --ntasks=ntasks total number of tasks to run
      --ntasks-per-node=n number of tasks to invoke on each node
      -help Get a list of available options
      Note: The information above applies to the MVAPICH2, Intel MPI and OpenMPI installations at OSC. 
Caution: mpiexec or mpiexec.hydra is still supported with Intel MPI and OpenMPI, but it is not fully compatible with our Slurm environment. We recommend using srun in all circumstances.

      OpenMP

      The Intel, GNU and PGI compilers understand the OpenMP set of directives, which support multithreaded programming. For more information on building OpenMP codes on OSC systems, please visit the OpenMP documentation.

      An OpenMP program by default will use a number of threads equal to the number of CPUs requested in a Slurm batch job. To use a different number of threads, set the environment variable OMP_NUM_THREADS. For example,

      #!/bin/bash
      #SBATCH --ntasks=8
      
      # Run 8 threads
      ./omp_prog
      
      # Run 4 threads
      export OMP_NUM_THREADS=4
      ./omp_prog
      

      To run a OpenMP job on an exclusive node:

      #!/bin/bash
      #SBATCH --nodes=1
      #SBATCH --exclusive
      
      export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE
      ./omp_prog
      

      Interactive job only

Please use -c, --cpus-per-task=X instead of -n, --ntasks=X to request an interactive job. Both result in an interactive job with X CPUs available, but only the former option automatically assigns the correct number of threads to the OpenMP program. If only the --ntasks option is used, the OpenMP program will either use one thread or have all threads bound to one CPU core.

      Hybrid (MPI + OpenMP)

      An example of running a job for hybrid code:

      #!/bin/bash
      #SBATCH --nodes=2
      #SBATCH --constraint=48core
      
      # Run 4 MPI processes on each node and 12 OpenMP threads spawned from a MPI process
      export OMP_NUM_THREADS=12
      srun -n 8 -c 12 --ntasks-per-node=4 ./hybrid_prog
      

      To run a job across either 40-core or 48-core nodes exclusively:

      #!/bin/bash
      #SBATCH --nodes=2
      
      # Run 4 MPI processes on each node and the maximum available OpenMP threads spawned from a MPI process 
      export OMP_NUM_THREADS=$(($SLURM_CPUS_ON_NODE/4))
      srun -n 8 -c $OMP_NUM_THREADS --ntasks-per-node=4 ./hybrid_prog
      

      Tuning Parallel Program Performance: Process/Thread Placement

To get the maximum performance, it is important to make sure that processes/threads are located as close as possible to their data, and as close as possible to each other if they need to work on the same piece of data, given the arrangement of nodes, sockets, and cores, which have different access to RAM and caches.

When cache and memory contention between threads/processes is an issue, it is usually best to use a scatter distribution for the code.

Processes and threads are placed differently depending on the computing resources you request and the compiler and MPI implementation used to compile your code. For the former, see the above examples to learn how to run a job on exclusive nodes. For the latter, this section summarizes the default behavior and how to modify placement.

      OpenMP only

For all three compilers (Intel, GNU, PGI), purely threaded codes do not bind to particular CPU cores by default. In other words, it is possible that multiple threads are bound to the same CPU core.

      The following table describes how to modify the default placements for pure threaded code:

DISTRIBUTION | Compact | Scatter/Cyclic
DESCRIPTION | Place threads as closely as possible on sockets | Distribute threads as evenly as possible across sockets
INTEL | KMP_AFFINITY=compact | KMP_AFFINITY=scatter
GNU | OMP_PLACES=sockets[1] | OMP_PROC_BIND=spread/close
PGI[2] | MP_BIND=yes MP_BLIST="$(seq -s, 0 2 47),$(seq -s, 1 2 47)" | MP_BIND=yes

1. Threads in the same socket might be bound to the same CPU core.
2. The PGI LLVM backend (version 19.1 and later) does not support thread/processor affinity on NUMA architectures. To enable this feature, compile threaded code with -Mnollvm to use the proprietary backend.
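As a minimal sketch, an exclusive-node job script that scatters threads across sockets with the Intel runtime (omp_prog is the placeholder executable used in the earlier examples) might look like:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --exclusive

export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE
export KMP_AFFINITY=scatter   # Intel runtime; see the table above for GNU/PGI equivalents
./omp_prog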

      MPI Only

For MPI-only codes, MVAPICH2 first binds as many processes as possible on one socket, then allocates the remaining processes on the second socket so that consecutive tasks are near each other. Intel MPI and OpenMPI alternately bind processes on socket 1, socket 2, socket 1, socket 2, etc., in a cyclic distribution.

For process distribution across nodes, all MPIs first bind as many processes as possible on one node, then allocate the remaining processes on the second node.

The following table describes how to modify the default placements on a single node for MPI-only code with the command srun:

DISTRIBUTION (single node) | Compact | Scatter/Cyclic
DESCRIPTION | Place processes as closely as possible on sockets | Distribute processes as evenly as possible across sockets
MVAPICH2[1] | Default | MV2_CPU_BINDING_POLICY=scatter
INTEL MPI | srun --cpu-bind="map_cpu:$(seq -s, 0 2 47),$(seq -s, 1 2 47)" | Default
OPENMPI | srun --cpu-bind="map_cpu:$(seq -s, 0 2 47),$(seq -s, 1 2 47)" | Default

1. MV2_CPU_BINDING_POLICY will not work if MV2_ENABLE_AFFINITY=0 is set.

      To distribute processes evenly across nodes, please set SLURM_DISTRIBUTION=cyclic.
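For example, a sketch of an MVAPICH2 job that scatters processes across sockets within each node and distributes tasks cyclically across nodes (mpi_prog is a placeholder executable):

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4

export MV2_CPU_BINDING_POLICY=scatter   # scatter within a node (MVAPICH2)
export SLURM_DISTRIBUTION=cyclic        # distribute tasks evenly across nodes
srun ./mpi_prog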

      Hybrid (MPI + OpenMP)

For hybrid codes, each MPI process is allocated OMP_NUM_THREADS cores and the threads of each process are bound to those cores. All MPI processes (as well as the threads bound to the process) behave as described in the previous sections. This means the threads spawned from an MPI process might be bound to the same core. To change the default process/thread placements, please refer to the tables above.
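Building on the hybrid example earlier on this page, a sketch that additionally packs each process's threads closely together with the Intel runtime (hybrid_prog is a placeholder executable):

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --constraint=48core

export OMP_NUM_THREADS=12
export KMP_AFFINITY=compact   # keep each process's threads close together
srun -n 8 -c 12 --ntasks-per-node=4 ./hybrid_prog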

      Summary

      The above tables list the most commonly used settings for process/thread placement. Some compilers and Intel libraries may have additional options for process and thread placement beyond those mentioned on this page. For more information on a specific compiler/library, check the more detailed documentation for that library.

      GPU Programming

      164 Nvidia V100 GPUs are available on Pitzer.  Please visit our GPU documentation.

      Reference

      Supercomputer: 

      Batch Limit Rules

Pitzer includes two types of processors: Intel® Xeon® 'Skylake' and Intel® Xeon® 'Cascade Lake'. This document explains how to request resources (number of cores, memory, etc.) given the heterogeneous nature of the Pitzer cluster; in some cases, your job can land on either type of processor. Please check the guidance on requesting resources on Pitzer if your job needs a certain type of processor.
We use Slurm syntax for all the discussions on this page. Please check how to prepare a Slurm job script if your script is written in PBS syntax.

      Memory limit

      A small portion of the total physical memory on each node is reserved for distributed processes.  The actual physical memory available to user jobs is tabulated below.

      Summary

Node type | default and max memory per core | max memory per node
Skylake 40 core - regular compute | 4.449 GB | 177.96 GB
Cascade Lake 48 core - regular compute | 3.708 GB | 177.98 GB
large memory | 15.5 GB | 744 GB
huge memory | 37.362 GB | 2988.98 GB
Skylake 40 core dual gpu | 9.074 GB | 363 GB
Cascade Lake 48 core dual gpu | 7.562 GB | 363 GB
quad gpu (48 core) | 15.5 GB | 744 GB

      A job may request more than the max memory per core, but the job will be allocated more cores to satisfy the memory request instead of just more memory.
      e.g. The following slurm directives will actually grant this job 3 cores, with 10 GB of memory
      (since 2 cores * 4.5 GB = 9 GB doesn't satisfy the memory request).
#SBATCH --ntasks=2
#SBATCH --mem=10g

      It is recommended to let the default memory apply unless more control over memory is needed.
      Note that if an entire node is requested, then the job is automatically granted the entire node's main memory. On the other hand, if a partial node is requested, then memory is granted based on the default memory per core.

      See a more detailed explanation below.

      Regular Compute Node

      • For the regular 'Skylake' processor-based node, it has 40 cores/node. The physical memory equates to 4.8 GB/core or 192 GB/node; while the usable memory equates to 4,556 MB/core or 182,240 MB/node (177.96 GB/node).
      • For the regular 'Cascade Lake' processor-based node, it has 48 cores/node. The physical memory equates to 4.0 GB/core or 192 GB/node; while the usable memory equates to 3,797 MB/core or 182,256 MB/node (177.98 GB/node). 

      Jobs requesting no more than 1 node

      If your job requests less than a full node, it may be scheduled on a node with other running jobs. In this case, your job is entitled to a memory allocation proportional to the number of cores requested (4,556 MB/core or 3,797 MB/core depending on which type of node your job lands on).  For example, without any memory request ( --mem=XX ):

      • A job that requests --ntasks=1 and lands on a 'Skylake' node will be assigned one core and should use no more than 4556 MB of RAM; a job that requests --ntasks=1 and lands on a 'Cascade Lake' node will be assigned one core and should use no more than 3797 MB of RAM
      • A job that requests --ntasks=3 and lands on a 'Skylake' node will be assigned 3 cores and should use no more than 3*4556 MB of RAM; a job that requests --ntasks=3 and lands on a 'Cascade Lake' node will be assigned 3 cores and should use no more than 3*3797 MB of RAM
      • A job that requests  --ntasks=40 and lands on a 'Skylake' node will be assigned the whole node (40 cores) with 178 GB of RAM; a job that requests --ntasks=40 and lands on a 'Cascade Lake' node will be assigned 40 cores (partial node) and should use no more than 40* 3797 MB of RAM
      • A job that requests  --exclusive and lands on a 'Skylake' node will be assigned the whole node (40 cores) with 178 GB of RAM; a job that requests --exclusive and lands on a 'Cascade Lake' node will be assigned the whole node (48 cores) with 178 GB of RAM
      • A job that requests  --exclusive --constraint=40core will land on a 'Skylake' node and will be assigned the whole node (40 cores) with 178 GB of RAM. 

        For example, with a memory request:
• A job that requests --ntasks=1 --mem=16000MB and lands on a 'Skylake' node will be assigned 4 cores and have access to 16000 MB of RAM, and charged for 4 cores worth of usage; a job that requests --ntasks=1 --mem=16000MB and lands on a 'Cascade Lake' node will be assigned 5 cores and have access to 16000 MB of RAM, and charged for 5 cores worth of usage
• A job that requests --ntasks=8 --mem=16000MB and lands on a 'Skylake' node will be assigned 8 cores but have access to only 16000 MB of RAM, and charged for 8 cores worth of usage; a job that requests --ntasks=8 --mem=16000MB and lands on a 'Cascade Lake' node will be assigned 8 cores but have access to only 16000 MB of RAM, and charged for 8 cores worth of usage

      Jobs requesting more than 1 node

      A multi-node job ( --nodes > 1 ) will be assigned the entire nodes and charged for the entire nodes regardless of --ntasks or --ntasks-per-node request. For example, a job that requests --nodes=10 --ntasks-per-node=1  and lands on 'Skylake' node will be charged for 10 whole nodes (40 cores/node*10 nodes, which is 400 cores worth of usage); a job that requests --nodes=10 --ntasks-per-node=1  and lands on 'Cascade Lake' node will be charged for 10 whole nodes (48 cores/node*10 nodes, which is 480 cores worth of usage). We usually suggest not including --ntasks-per-node and using --ntasks if needed.   

      Large Memory Node

On Pitzer, a large memory node has 48 cores. The physical memory equates to 16.0 GB/core or 768 GB/node; while the usable memory equates to 15,872 MB/core or 761,856 MB/node (744 GB/node).

For any job that requests no less than 363 GB/node but less than 744 GB/node, the job will be scheduled on the large memory node. To request no more than a full large memory node, you need to specify the memory request between 363 GB and 744 GB, i.e., 363GB <= mem < 744GB. --mem is the total memory per node allocated to the job. You can request a partial large memory node, so consider your request carefully when you plan to use a large memory node, and specify the memory based on what you will actually use.
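For example, a minimal sketch of a job that will be scheduled on a large memory node (the memory value and executable name are illustrative):

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --mem=500GB

# The scheduler will allocate enough cores to satisfy the memory request
./my_prog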

      Huge Memory Node

On Pitzer, a huge memory node has 80 cores. The physical memory equates to 37.5 GB/core or 3 TB/node; while the usable memory equates to 38,259 MB/core or 3,060,720 MB/node (2988.98 GB/node).

      To request no more than a full huge memory node, you have two options:

• The first is to specify the memory request between 744 GB and 2988 GB, i.e., 744GB <= mem <= 2988GB.
• The other option is to use the combination of --ntasks-per-node and --partition, like --ntasks-per-node=4 --partition=hugemem. When no memory is specified for the huge memory node, your job is entitled to a memory allocation proportional to the number of cores requested (38,259 MB/core). Note: --ntasks-per-node should be no less than 20 and no more than 80. Both options are illustrated in the sketch below.
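As a minimal sketch, only one of the following two option blocks should appear in a real job script; the values are placeholders chosen within the allowed ranges:

# Option 1: request by memory (744 GB <= mem <= 2988 GB)
#SBATCH --mem=1500GB

# Option 2: request by cores on the hugemem partition (20 <= ntasks-per-node <= 80)
#SBATCH --ntasks-per-node=40 --partition=hugemem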

      Summary

      In summary, for serial jobs, we will allocate the resources considering both the # of cores and the memory request. For parallel jobs (nodes>1), we will allocate the entire nodes with the whole memory regardless of other requests. Check requesting resources on pitzer for information about the usable memory of different types of nodes on Pitzer. To manage and monitor your memory usage, please refer to Out-of-Memory (OOM) or Excessive Memory Usage.

      GPU Jobs

      Dual GPU Node

      • For the dual GPU node with 'Skylake' processor, it has 40 cores/node. The physical memory equates to 9.6 GB/core or 384 GB/node; while the usable memory equates to 9292 MB/core or 363 GB/node. Each node has 2 NVIDIA Volta V100 w/ 16 GB GPU memory. 
      • For the dual GPU node with 'Cascade Lake' processor, it has 48 cores/node. The physical memory equates to 8.0 GB/core or 384 GB/node; while the usable memory equates to 7744 MB/core or 363 GB/node. Each node has 2 NVIDIA Volta V100 w/32GB GPU memory.  

      For serial jobs, we will allow node sharing on GPU nodes so a job may request either 1 or 2 GPUs (--ntasks=XX --gpus-per-node=1 or --ntasks=XX --gpus-per-node=2)

For parallel jobs (nodes>1), we will not allow node sharing. A job may request 1 or 2 GPUs (--gpus-per-node=1 or --gpus-per-node=2), but both GPUs will be allocated to the job.
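For instance, a serial job sharing a dual GPU node might use a sketch like the following (the core count and executable name gpu_prog are placeholders):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --gpus-per-node=2

module load cuda
./gpu_prog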

      Quad GPU Node

A quad GPU node has 48 cores. The physical memory equates to 16.0 GB/core or 768 GB/node; while the usable memory equates to 15,872 MB/core or 744 GB/node. Each node has 4 NVIDIA Volta V100s w/32 GB GPU memory and NVLink.

For serial jobs, we will allow node sharing on GPU nodes, so a job can land on a quad GPU node if it requests 3-4 GPUs per node (--ntasks=XX --gpus-per-node=3 or --ntasks=XX --gpus-per-node=4), requests a quad GPU node explicitly with --gpus-per-node=v100-quad:4, or gets backfilled when requesting 1-2 GPUs per node with a walltime of less than 4 hours.

      For parallel jobs (nodes>1), only up to 2 quad GPU nodes can be requested in a single job. We will not allow node sharing and all GPUs will be allocated to the job.

      Partition time and job size limits

      Here is the walltime and node limits per job for different queues/partitions available on Pitzer:

NAME | MAX TIME LIMIT (dd-hh:mm:ss) | MIN JOB SIZE | MAX JOB SIZE | NOTES
serial | 7-00:00:00 | 1 core | 1 node |
longserial | 14-00:00:00 | 1 core | 1 node | Restricted access; only 40 core nodes are available
parallel | 96:00:00 | 2 nodes | 40 nodes |
hugemem | 7-00:00:00 | 1 core | 1 node | There are only 4 Pitzer huge memory nodes
largemem | 7-00:00:00 | 1 core | 1 node | There are 12 large memory nodes
gpuserial | 7-00:00:00 | 1 core | 1 node | Includes dual and quad GPU nodes
gpuparallel | 96:00:00 | 2 nodes | 10 nodes | Includes dual and quad GPU nodes; only up to 2 quad GPU nodes can be requested in a single job
debug | 1:00:00 | 1 core | 2 nodes |
gpudebug | 1:00:00 | 1 core | 2 nodes |

Total available nodes shown for Pitzer may fluctuate depending on the number of currently operational nodes and nodes reserved for specific projects.

      To specify a partition for a job, either add the flag --partition=<partition-name> to the sbatch command at submission time or add this line to the job script:
#SBATCH --partition=<partition-name>

      To access one of the restricted queues, please contact OSC Help. Generally, access will only be granted to these queues if the performance of the job cannot be improved, and job size cannot be reduced by splitting or checkpointing the job.

        Job/Core Limits

• Individual User: up to 384 concurrently running jobs for all job types (140 for GPU jobs, 4 for regular debug jobs, 4 for GPU debug jobs) and up to 3240 cores/processors in use
• Project/Group: up to 576 concurrently running jobs for all job types (140 for GPU jobs) and up to 3240 cores/processors in use

         

        An individual user can have up to the max concurrently running jobs and/or up to the max processors/cores in use. However, among all the users in a particular group/project, they can have up to the max concurrently running jobs and/or up to the max processors/cores in use.

        A user may have no more than 1000 jobs submitted to both the parallel and serial job queue separately.
        Supercomputer: 
        Service: 

        Citation

        For more information about citations of OSC, visit https://www.osc.edu/citation.

        To cite Pitzer, please use the following Archival Resource Key:

        ark:/19495/hpc56htp

        Please adjust this citation to fit the citation style guidelines required.

        Ohio Supercomputer Center. 2018. Pitzer Supercomputer. Columbus, OH: Ohio Supercomputer Center. http://osc.edu/ark:19495/hpc56htp

        Here is the citation in BibTeX format:

        @misc{Pitzer2018,
        ark = {ark:/19495/hpc56htp},
        url = {http://osc.edu/ark:/19495/hpc56htp},
        year  = {2018},
        author = {Ohio Supercomputer Center},
        title = {Pitzer Supercomputer}
        }
        

        And in EndNote format:

        %0 Generic
        %T Pitzer Supercomputer
        %A Ohio Supercomputer Center
        %R ark:/19495/hpc56htp
        %U http://osc.edu/ark:/19495/hpc56htp
        %D 2018
        

        Here is an .ris file to better suit your needs. Please change the import option to .ris.

        Documentation Attachment: 
        Supercomputer: 

        Pitzer SSH key fingerprints

        These are the public key fingerprints for Pitzer:
        pitzer: ssh_host_rsa_key.pub = 8c:8a:1f:67:a0:e8:77:d5:4e:3b:79:5e:e8:43:49:0e 
        pitzer: ssh_host_ed25519_key.pub = 6d:19:73:8e:b4:61:09:a9:e6:0f:e5:0d:e5:cb:59:0b 
        pitzer: ssh_host_ecdsa_key.pub = 6f:c7:d0:f9:08:78:97:b8:23:2e:0d:e2:63:e7:ac:93 


        These are the SHA256 hashes:​
        pitzer: ssh_host_rsa_key.pub = SHA256:oWBf+YmIzwIp+DsyuvB4loGrpi2ecow9fnZKNZgEVHc 
        pitzer: ssh_host_ed25519_key.pub = SHA256:zUgn1K3+FK+25JtG6oFI9hVZjVxty1xEqw/K7DEwZdc 
        pitzer: ssh_host_ecdsa_key.pub = SHA256:8XAn/GbQ0nbGONUmlNQJenMuY5r3x7ynjnzLt+k+W1M 

        Supercomputer: 

        Migrating jobs from other clusters

        This page includes a summary of differences to keep in mind when migrating jobs from other clusters to Pitzer. 

        Guidance for Oakley Users

The Oakley cluster was removed from service on December 18, 2018.

        Guidance for Owens Users

        Hardware Specifications

Regular compute node
• Pitzer (per node): 40 cores and 192 GB of RAM, or 48 cores and 192 GB of RAM
• Owens (per node): 28 cores and 125 GB of RAM

Huge memory node
• Pitzer (per node): 48 cores and 768 GB of RAM (12 nodes in this class), or 80 cores and 3.0 TB of RAM (4 nodes in this class)
• Owens (per node): 48 cores and 1.5 TB of RAM (16 nodes in this class)

        File Systems

        Pitzer accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory, project space, and scratch space as on the Owens cluster.

        Software Environment

        Pitzer uses the same module system as Owens.

Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.

You can keep up to date on the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

        Programming Environment

Like Owens, Pitzer supports three compilers: Intel, PGI, and GNU. The default is Intel. To switch to a different compiler, use module swap intel gnu or module swap intel pgi.

Pitzer also uses the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed Infiniband interconnect, and supports the Advanced Vector Extensions (AVX2) instruction set.

        See the Pitzer Programming Environment page for details. 

        Accounting

        Below is a comparison of job limits between Pitzer and Owens:

  Pitzer Owens
        Per User Up to 256 concurrently running jobs and/or up to 3240 processors/cores in use  Up to 256 concurrently running jobs and/or up to 3080 processors/cores in use
        Per group Up to 384 concurrently running jobs and/or up to 3240 processors/cores in use Up to 384 concurrently running jobs and/or up to 4620 processors/cores in use

        Please see Queues and Reservations for Pitzer and Batch Limit Rules for more details.

        Guidance for Ruby Users

The Ruby cluster was removed from service on October 29, 2020.
        Supercomputer: 
        Service: 

        Guidance on Requesting Resources on Pitzer

In late 2018, OSC installed 260 Intel® Xeon® 'Skylake' processor-based nodes as the original Pitzer cluster. In September 2020, OSC installed an additional 398 Intel® Xeon® 'Cascade Lake' processor-based nodes as part of a Pitzer Expansion cluster. This expansion makes Pitzer a heterogeneous cluster, which means that jobs may land on different types of CPUs and behave differently if the user submits the same job script repeatedly without requesting resources properly. This document provides some general guidance on how to request resources on Pitzer given this heterogeneous nature.

        Step 1: Identify your job type

Job type | Nodes the job may be allocated on | # of cores per node | Usable memory | GPU
Jobs requesting standard compute node(s) | Dual Intel Xeon 6148s Skylake @2.4GHz | 40 | 178 GB memory/node, 4556 MB memory/core | N/A
Jobs requesting standard compute node(s) | Dual Intel Xeon 8268s Cascade Lakes @2.9GHz | 48 | 178 GB memory/node, 3797 MB memory/core | N/A
Jobs requesting dual GPU node(s) | Dual Intel Xeon 6148s Skylake @2.4GHz | 40 | 363 GB memory/node, 9292 MB memory/core | 2 NVIDIA Volta V100 w/ 16GB GPU memory
Jobs requesting dual GPU node(s) | Dual Intel Xeon 8268s Cascade Lakes @2.9GHz | 48 | 363 GB memory/node, 7744 MB memory/core | 2 NVIDIA Volta V100 w/32GB GPU memory
Jobs requesting quad GPU node(s) | Dual Intel Xeon 8260s Cascade Lakes @2.4GHz | 48 | 744 GB memory/node, 15872 MB memory/core | 4 NVIDIA Volta V100s w/32GB GPU memory and NVLink
Jobs requesting large memory node(s) | Dual Intel Xeon 8268s Cascade Lakes @2.9GHz | 48 | 744 GB memory/node, 15872 MB memory/core | N/A
Jobs requesting huge memory node(s) | Quad Processor Intel Xeon 6148 Skylakes @2.4GHz | 80 | 2989 GB memory/node, 38259 MB memory/core | N/A

        According to this table,

• If your job requests standard compute node(s) or dual GPU node(s), it can potentially land on different types of nodes and may result in different job performance. Please follow the steps below to determine whether you would like to restrict your job to a certain type of node(s).
        • If your job requests quad GPU node(s), large memory node(s), or huge memory node(s), please check pitzer batch limit rules on how to request these special types of resources properly. 

        Step 2: Perform test

This step is to submit jobs requesting the same resources to different types of nodes on Pitzer, whether your job script is prepared in PBS or Slurm syntax. For example, using Slurm directives:

        Request 40 or 48 core nodes

        #SBATCH --constraint=40core
        #SBATCH --constraint=48core

        Request 16gb, 32gb gpu

        #SBATCH --constraint=v100
        #SBATCH --constraint=v100-32g --partition=gpuserial-48core

         

        Once the script is ready, submit your jobs to Pitzer and wait till the jobs are completed. 

        Step 3: Compare the results

Once the jobs are completed, you can compare the job performance in terms of core-hours, gpu-hours, walltime, etc., to determine how sensitive your job is to the type of node. If you would like to restrict your job to a certain type of node based on the testing, you can add a constraint such as #SBATCH --constraint=40core to your job script. The disadvantage of this is that you may have a longer queue wait time on the system. If you would like to have your jobs scheduled as quickly as possible and do not care which type of node your job lands on, do not include the constraint in the job request.

        Supercomputer: 

        GPU Computing

        OSC offers GPU computing on all its systems.  While GPUs can provide a significant boost in performance for some applications, the computing model is very different from the CPU.  This page will discuss some of the ways you can use GPU computing at OSC.

        Accessing GPU Resources

        To request nodes with a GPU add the --gpus-per-node=x attribute to the directive in your batch script, for example, on Owens:

        #SBATCH --gpus-per-node=1

        In most cases you'll need to load the cuda module (module load cuda) to make the necessary Nvidia libraries available.
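Putting these pieces together, a minimal GPU batch script might look like the following sketch (the executable name is a placeholder):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gpus-per-node=1

module load cuda
./my_gpu_prog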

        Setting the GPU compute mode (optional)

        The GPUs on Owens and Pitzer can be set to different compute modes as listed here.  They can be set by adding the following to the GPU specification when using the srun command. By default it is set to shared.

        srun --gpu_cmode=exclusive
        

        or

        srun --gpu_cmode=shared
        

The compute mode shared is the default on GPU nodes if a compute mode is not specified. With this compute mode, multiple CUDA processes on the same GPU device are allowed.

        Using GPU-enabled Applications

        We have several supported applications that can use GPUs.  This includes

        Please see the software pages for each application.  They have different levels of support for multi-node jobs, cpu/gpu work sharing, and environment set-up.

        Libraries with GPU Support

        There are a few libraries that provide GPU implementations of commonly used routines. While they mostly hide the details of using a GPU there are still some GPU specifics you'll need to be aware of, e.g. device initialization, threading, and memory allocation.  These are available at OSC:

        MAGMA

        MAGMA is an implementation of BLAS and LAPACK with multi-core (SMP) and GPU support. There are some differences in the API of standard BLAS and LAPACK.

        cuBLAS and cuSPARSE

        cuBLAS is a highly optimized BLAS from NVIDIA. There are a few versions of this library, from very GPU-specific to nearly transparent. cuSPARSE is a BLAS-like library for sparse matrices.

        The MAGMA library is built on cuBLAS.

        cuFFT

        cuFFT is NVIDIA's Fourier transform library with an API similar to FFTW.

        cuDNN

        cuDNN is NVIDIA's Deep Neural Network machine learning library. Many ML applications are built on cuDNN.

        Direct GPU Programming

        GPUs present a different programming model from CPUs so there is a significant time investment in going this route.

        OpenACC

        OpenACC is a directives-based model similar to OpenMP. Currently this is only supported by the Portland Group C/C++ and Fortran compilers.

        OpenCL

        OpenCL is a set of libraries and C/C++ compiler extensions supporting GPUs (NVIDIA and AMD) and other hardware accelerators. The CUDA module provides an OpenCL library.

        CUDA

        CUDA is the standard NVIDIA development environment. In this model explicit GPU code is written in the CUDA C/C++ dialect, compiled with the CUDA compiler NVCC, and linked with a native driver program.

        About GPU Hardware

Our GPUs span several generations with different capabilities and ease-of-use. Many of the differences won't be visible when using applications or libraries, but some features and applications may not be supported on the older models.

        Owens P100

The P100 "Pascal" is an NVIDIA GPU with a compute capability of 6.0. The 6.0 capability includes unified shared CPU/GPU memory -- the GPU now has its own virtual memory capability and can map CPU memory into its address space.

        Each P100 has 16GB of on-board memory and there is one GPU per GPU node.

        Pitzer V100

        The NVIDIA V100 "Volta" GPU, with a compute capability of 7.0, offers several advanced features, one of which is its Tensor Cores. These Tensor Cores empower the GPU to perform mixed-precision matrix operations, significantly enhancing its efficiency for deep learning workloads and expediting tasks such as AI model training and inference.

The V100 deployed in 2018 comes equipped with 16GB of memory, whereas the V100 deployed in 2020 features 32GB of memory. There are two GPUs per GPU node.

        Additionally, there are four large memory nodes equipped with quad NVIDIA Volta V100s with 32GB of GPU memory and NVLink.

        Ascend A100

        The NVIDIA A100 "Ampere" GPU, with a compute capability of 8.0, empowers advanced deep learning and scientific computing tasks. For instance, it accelerates and enhances the training of deep neural networks, enabling the training of intricate models like GPT-4 in significantly less time when compared to earlier GPU architectures.

The A100 comes equipped with 80GB of memory. There are 4 GPUs with NVLink per node, offering 320GB of usable GPU memory per node.

        Examples

        There are example jobs and code at GitHub

        Tutorials & Training

        Training is an important part of our services. We are working to expand our portfolio; we currently provide the following:

        • Training classes. OSC provides training classes, at our facility, on-site and remotely.
        • HOWTOs. Step-by-step guides to accomplish certain tasks on our systems.
        • Tutorials. Online content designed for self-paced learning.

        Other good sources for information:

        • Knowledge Base.  Useful information that does not fit our existing documentation.
        • FAQ.  List of commonly asked questions.

        Batch Processing at OSC

        OSC has recently switched schedulers from PBS to Slurm.
        Please see the slurm migration pages for information about how to convert commands.

        Batch processing

        Efficiently using computing resources at OSC requires using the batch processing system. Batch processing refers to submitting requests to the system to use computing resources.

        The only access to significant resources on the HPC machines is through the batch process. This guide will provide an overview of OSC's computing environment, and provide some instruction for how to use the batch system to accomplish your computing goals.

        The menu at the right provides links to all the pages in the guide, or you can use the navigation links at the bottom of the page to step through the guide one page at a time. If you need additional assistance, please do not hesitate to contact OSC Help.

        Batch System Concepts

        The only access to significant resources on the HPC machines is through the batch process.

        Why use a batch system?

        Access to the OSC clusters is through a system of login nodes. These nodes are reserved solely for the purpose of managing your files and submitting jobs to the batch system. Acceptable activities include editing/creating files, uploading and downloading files of moderate size, and managing your batch jobs. You may also compile and link small-to-moderate size programs on the login nodes.

        CPU time and memory usage are severely limited on the login nodes. There are typically many users on the login nodes at one time. Extensive calculations would degrade the responsiveness of those nodes.

        If a process is started on the login nodes that is using too much cpu or memory, then it may be killed without warning.

        The batch system allows users to submit jobs requesting the resources (nodes, processors, memory, GPUs) that they need. The jobs are queued and then run as resources become available. The scheduling policies in place on the system are an attempt to balance the desire for short queue waits against the need for efficient system utilization.

        Interactive vs. batch

        When you type commands in a login shell and see a response displayed, you are working interactively. To run a batch job, you put the commands into a text file instead of typing them at the prompt. You submit this file to the batch system, which will run it as soon as resources become available. The output you would normally see on your display goes into a log file. You can check the status of your job interactively and/or receive emails when it begins and ends execution.

        Terminology

The batch system used at OSC is Slurm. A central manager, slurmctld, monitors resources and work. You'll need to understand the terms cluster, node, and processor (core) in order to request resources for your job. See HPC basics if you need this background information.

        The words “parallel” and “serial” as used by SLURM can be a little misleading. From the point of view of the batch system a serial job is one that uses just one node, regardless of how many processors it uses on that node. Similarly, a parallel job is one that uses more than one node. More standard terminology considers a job to be parallel if it involves multiple processes.

        Batch processing overview

        Here is a very brief overview of how to use the batch system.

        Choose a cluster

        Before you start preparing a job script you should decide which cluster you want your job to run on, Owens or Pitzer. This decision will probably be based on the resources available on each system. Remember which cluster you’re using because the batch systems are independent.

        Prepare a job script

        Your job script is a text file that includes SLURM directives as well as the commands you want executed. The directives tell the batch system what resources you need, among other things. The commands can be anything you would type at the login prompt. You can prepare the script using any editor.

        Submit the job

        You submit your job to the batch system using the sbatch command, with the name of the script file as the argument. The sbatch command responds with the job ID that was given to your job, typically a 6- or 7-digit number.

        Wait for the job to run

        Your job may wait in the queue for minutes or days before it runs, depending on system load and the resources requested. It may then run for minutes or days. You can monitor your job’s progress or just wait for an email telling you it has finished.

        Retrieve your output

        The log file (screen output) from your job will be in the directory you submitted the job from by default. Any other output files will be wherever your script put them.


        Batch Execution Environment

        Shell and initialization

        Your batch script executes in a shell on a compute node. The environment is identical to what you get when you connect to a login node except that you have access to all the resources requested by your job. The shell that Slurm uses is determined by the first line of the job script (it is by default #!/bin/bash). The appropriate “dot-files” (.login, .profile, .cshrc) will be executed, the same as when you log in. (For information on overriding the default shell, see the Job Scripts section.)

        The job begins in the directory that it was submitted from. You can use the cd command to change to a different directory. The environment variable $SLURM_SUBMIT_DIR makes it easy to return to the directory from which you submitted the job:

        cd $SLURM_SUBMIT_DIR
        

        Modules

        There are dozens of software packages available on OSC’s systems, many of them with multiple versions. You control what software is available in your environment by loading the module for the software you need. Each module sets certain environment variables required by the software.

        If you are running software that was installed by OSC, you should check the software documentation page to find out what modules to load.

        Several modules are automatically loaded for you when you login or start a batch script. These default modules include

        • modules required by the batch system
        • the Intel compiler suite
        • an MPI package compatible with the default compiler (for parallel computing)

        The module command has a number of subcommands. For more details, type module help.

        Certain modules are incompatible with each other and should never be loaded at the same time. Examples are different versions of the same software or multiple installations of a library built with different compilers.

        Note to those who build or install their own software: Be sure to load the same modules when you run your software that you had loaded when you built it, including the compiler module.
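
        For example, a minimal sketch of this workflow (the program mycode.c and the choice of the gnu compiler are illustrative):

        # at build time: select the compiler (the default MPI module is swapped to match)
        module swap intel gnu
        mpicc -o mycode mycode.c

        # in the job script: load the same compiler module before running
        module swap intel gnu
        srun ./mycode
        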

        Each module has both a name and a version number. When more than one version is available for the same name, one of them is designated as the default. For example, the following modules are available for the Intel compilers on Owens: (Note: The versions shown might be out of date but the concept is the same.)

        • intel/12.1.0 (default)
        • intel/12.1.4.319

        If you specify just the name, it refers to the default version or the currently loaded version, depending on the context. If you want a different version, you must give the entire string including the version information.

        You can have only one compiler module loaded at a time, either intel, pgi, or gnu. The intel module is loaded initially; to change to pgi or gnu, do a module swap (see example below).

        Some software libraries have multiple installations built for use with different compilers. The module system will load the one compatible with the compiler you have loaded. If you swap compilers, all the compiler-dependent modules will also be swapped.

        Special note to gnu compiler users: While the gnu compilers are always in your path, you should load the gnu compiler module to ensure you are linking to the correct library versions.

        To list the modules you have loaded:

        module list
        

        To see all modules that are compatible with your currently loaded modules:

        module avail
        

        To see all modules whose names start with fftw:

        module avail fftw
        

        To see all possible modules:

        module spider
        

        To see all possible modules whose names start with fftw:

        module spider fftw
        

        To load the fftw3 module that is compatible with your current compiler:

        module load fftw3
        

        To unload the fftw3 module:

        module unload fftw3
        

        To load the default version of the abaqus module (not compiler-dependent):

        module load abaqus
        

        To load a different version of the abaqus module:

        module load abaqus/6.8-4
        

        To unload whatever abaqus module you have loaded:

        module unload abaqus
        

        To unload all modules:

        module purge

        To reset to default starting modules:

        module reset

        To swap the intel compilers for the pgi compilers (unloads intel, loads pgi):

        module swap intel pgi
        

        To swap the default version of the intel compilers for a different version:

        module swap intel intel/12.1.4.319
        

        To display help information for the mkl module:

        module help mkl
        

        To display the commands run by the mkl module:

        module show mkl
        

        To use a locally installed module, first import the module directory:

        module use [/path/to/modulefiles]
        

        And then load the module:

        module load localmodule
        

        Slurm environment variables

        Your batch execution environment has all the environment variables that your login environment has plus several that are set by the batch system. This section gives examples for using some of them. For more information see man sbatch.

        Directories

        Several directories may be useful in your job.

        The absolute path of the directory your job was submitted from is $SLURM_SUBMIT_DIR.

        Each job has a temporary directory, $TMPDIR, on the local disk of each node assigned to it. Access to this directory is much faster than access to your home or project directory. The files in this directory are not visible from all the nodes in a parallel job; each node has its own directory. The batch system creates this directory when your job starts and deletes it when your job ends. To copy file input.dat to $TMPDIR on your job’s first node:

        cp input.dat $TMPDIR
        

        For a parallel job, to copy the file input.dat to $TMPDIR on all of your job’s nodes:

        sbcast input.dat $TMPDIR/input.dat
        

        Each job also has a temporary directory, $PFSDIR, on the parallel scratch file system, if the pfsdir resource is added to the batch request (--gres=pfsdir, as shown below). This is a single directory shared by all the nodes a job is running on. Access is faster than access to your home or project directory but not as fast as $TMPDIR. The batch system creates this directory when your job starts and deletes it when your job ends. To copy the file output.dat from this directory to the directory you submitted your job from:

        cp $PFSDIR/output.dat $SLURM_SUBMIT_DIR
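
        To make $PFSDIR available to your job, request the pfsdir resource in the job header (this directive is also listed in the SLURM directives summary later in this guide):

        #SBATCH --gres=pfsdir
        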
        

        The $HOME environment variable refers to your home directory. It is not set by the batch system but is useful in some job scripts. It is better to use $HOME than to hardcode the path to your home directory. To access a file in your home directory:

        cat $HOME/myfile
        

        Job information

        To obtain a list of the nodes and cores assigned to your job:

        srun hostname | sort -n

        For GPU jobs, the number of GPUs allocated to your job on each node is in the environment variable $SLURM_GPUS_ON_NODE. To display this value:

        echo $SLURM_GPUS_ON_NODE
        

        If you use a job array, each job in the array gets its identifier within the array in the variable $SLURM_ARRAY_TASK_ID. To pass a file name parameterized by the array ID into your application:

        ./a.out input_$SLURM_ARRAY_TASK_ID.dat
        

        To display the numeric job identifier assigned by the batch system:

        echo $SLURM_JOB_ID
        

        To display the job name:

        echo $SLURM_JOB_NAME
        

        Use fast storage

        If your job does a lot of file-based input and output, your choice of file system can make a huge difference in the performance of the job.

        Shared file systems

        Your home directory is located on shared file systems, providing long-term storage that is accessible from all OSC systems. Shared file systems are relatively slow. They cannot handle heavy loads such as those generated by large parallel jobs or many simultaneous serial jobs. You should minimize the I/O your jobs do on the shared file systems. It is usually best to copy your input data to fast temporary storage, run your program there, and copy your results back to your home directory.

        Batch-managed directories

        Batch-managed directories are temporary directories that exist only for the duration of a job. They exist on two types of storage: disks local to the compute nodes and a parallel scratch file system.

        A big advantage of batch-managed directories is that the batch system deletes them when a job ends, preventing clutter on the disk.

        A disadvantage of batch-managed directories is that you can’t access them after your job ends. Be sure to include commands in your script to copy any files you need to long-term storage. To avoid losing your files if your job ends abnormally, for example by hitting its walltime limit, include a trap command in your script (Note:  trap  commands do not work in csh and tcsh shell batch scripts). The following example creates a subdirectory in $SLURM_SUBMIT_DIR and copies everything from $TMPDIR into it in case of abnormal termination.

        trap "cd $SLURM_SUBMIT_DIR;mkdir $SLURM_JOB_ID;cp -R $TMPDIR/* $SLURM_SUBMIT_DIR;exit" TERM
        

        If a node your job is running on crashes, the trap command may not be executed. It may be possible to recover your batch-managed directories in this case. Contact OSC Help for assistance. For other details on retrieving files from unexpectedly terminated jobs, see this FAQ.

        Local disk space

        The fastest storage is on a disk local to the node your job is running on, accessed through the environment variable $TMPDIR . The main drawback to local storage is that each node of a parallel job has its own directory and cannot access the files on other nodes. 

        Local disk space should be used only through the batch-managed directory created for your job. Please do not use /tmp directly because your files won’t be cleaned up properly.

        Parallel file system

        The parallel file system, including project directory and scratch directory, is faster than the shared file systems for large-scale I/O and can handle a much higher load. It is efficient for reading and writing data in large blocks and should not be used for I/O involving many small accesses.

        The scratch file system can be used through the batch-managed directory created for your job. The path for this directory is in the environment variable $PFSDIR . You should use it when your files must be accessible by all the nodes in your job and also when your files are too large for the local disk.

        You may also create a directory for yourself in the scratch file system and use it the way you would use any other directory. This directory will not be backed up; files are subject to deletion after some number of months.

        Note: You should not copy your executable files to $PFSDIR. They should be run from your home directories or from $TMPDIR.


        Job Scripts

        Known Issue

        Combining the --ntasks and --ntasks-per-node options in a job script can cause unexpected resource allocation and placement due to a bug in Slurm 23. OSC users are strongly encouraged to review their job scripts for jobs that request both --ntasks and --ntasks-per-node. Jobs should request either --ntasks or --ntasks-per-node, not both, as shown in the sketch below.
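
        For example, a request for 80 MPI tasks across 2 nodes can be written with --ntasks-per-node alone (a sketch; 40 tasks per node assumes 40-core nodes, so adjust the count for your cluster):

        #SBATCH --nodes=2
        #SBATCH --ntasks-per-node=40
        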

        A job script is a text file containing job setup information for the batch system followed by commands to be executed. It can be created using any text editor and may be given any name. Some people like to name their scripts something like myscript.job or myscript.sh, but myscript works just as well.

        A job script is simply a shell script. It consists of Slurm directives, comments, and executable statements. The # character indicates a comment, although lines beginning with #SBATCH are interpreted as Slurm directives. Blank lines can be included for readability.

        Contents

        1. SBATCH header lines
        2. Resource limits
        3. Executable section
        4. Considerations for parallel jobs
        5. Batch script examples


        SBATCH header lines

        A job script must start with a shebang #! (#!/bin/bash is commonly used, but you can choose others), followed by several lines starting with #SBATCH. These are Slurm SBATCH directives or header lines. They provide job setup information used by Slurm, including resource requests, email options, and more. The header lines may appear in any order, but they must precede any executable lines in your script. Alternatively, you may provide these directives (without the #SBATCH notation) on the command line with the sbatch command.

        $ sbatch --job-name=test_job myscript.sh
        


        Resource limits

        The options described below are used to request resources, including nodes, memory, time, and software flags.

        Walltime

        The walltime limit is the maximum time your job will be allowed to run, given in seconds or hours:minutes:seconds. This is elapsed time. If your job exceeds the requested time, the batch system will kill it. If your job ends early, you will be charged only for the time used.

        The default value for walltime is 1:00:00 (one hour).

        To request 20 hours of wall clock time:

        #SBATCH --time=20:00:00
        

        It is important to carefully estimate the time your job will take. An underestimate will lead to your job being killed. A large overestimate may prevent your job from being backfilled or fitting into an empty time slot.

        Tasks, cores (cpu), nodes and GPUs

        Resource limits specify not just the number of nodes but also the properties of those nodes. The properties differ between clusters but may include the number of cores per node, the number of GPUs per node (gpus), and the type of node.

        SLURM uses the term task, which can be thought of as the number of processes started.

        Getting the balance right between the number of tasks and the number of cores per task is important when using an MPI launcher such as srun.

        Serial job
        A serial job in this context refers to a job whose requested resources fit on a single node.
        For example, on a node containing 40 cores, one job might request 20 cores and another might request all 40 cores; both are serial jobs.

        To request one CPU core (sequential job), do not add any SLURM directives. The default is one node, one core, and one task.

        To request 6 CPU cores on one node, in a single process:

        #SBATCH --cpus-per-task=6
        
        Parallel job

        To request 4 nodes and run a task on each which uses 40 cores:

        #SBATCH --nodes=4
        #SBATCH --ntasks-per-node=1
        #SBATCH --cpus-per-task=40
        

        To request 4 nodes with 10 tasks per node (the default is 1 core per task, unless using --cpus-per-task to set manually):

        #SBATCH --nodes=4 --ntasks-per-node=10
        
        Under our current scheduling policy a parallel job (one that uses more than one node) is always given full nodes. You can easily use just part of each node even though entire nodes are allocated (see the section srun in parallel jobs).

        Computing nodes on the Pitzer cluster have 40 or 48 cores per node. The job can be constrained to run only on 40-core (or 48-core) nodes by using --constraint:

        #SBATCH --constraint=40core

        GPU job

        To request 2 nodes with 2 GPUs (2-GPU nodes are only available on Pitzer)

        #SBATCH --nodes=2
        #SBATCH --gpus-per-node=2
        

        To request one node with use of 6 cores and 1 GPU:

        #SBATCH --cpus-per-task=6
        #SBATCH --gpus-per-node=1
        

        Memory

        The memory limit is the total amount of memory needed across all nodes. There is no need to specify a memory limit unless you need a large-memory node or your memory requirements are disproportionate to the number of cores you are requesting. For parallel jobs you must multiply the memory needed per node by the number of nodes to get the correct limit; you should usually request whole nodes and omit the memory limit.

        Default units are bytes, but values are usually expressed in megabytes (mem=4000MB) or gigabytes (mem=4GB).

        To request 4GB memory (see note below):

        #SBATCH --mem=4gb
        

        or

        #SBATCH --mem=4000mb
        

        To request 24GB memory:

        #SBATCH --mem=24000mb
        

        Note: The amount of memory available per node is slightly less than the nominal amount. If you want to request a fraction of the memory on a node, we recommend you give the amount in MB, not GB; 24000MB is less than 24GB. (Powers of 2 vs. powers of 10 -- ask a computer science major.)

        Software licenses

        If you are using a software package with a limited number of licenses, you should include the license requirement in your script. See the OSC documentation for the specific software package for details.

        Example requesting five abaqus licenses:

        #SBATCH --licenses=abaqus@osc:5
        

        Job name

        You can optionally give your job a meaningful name. The default is the name of the batch script, or just "sbatch" if the script is read on sbatch's standard input. The job name is used as part of the name of the job log files; it also appears in lists of queued and running jobs. The name may be up to 15 characters in length, no spaces are allowed, and the first character must be alphabetic.

        Example:

        #SBATCH --job-name=my_first_job
        

        Mail options

        You may choose to receive email when your job begins, when it ends, and/or when it fails. The email will be sent to the address we have on record for you. You should use only one --mail-type=<type> directive and include all the options you want.

        To receive an email when your job begins, ends or fails:

        #SBATCH --mail-type=BEGIN,END,FAIL
        

        To receive an email for all types:

        #SBATCH --mail-type=ALL
        

        The default email recipient is the submitting user, but you can include other users or email addresses:

        #SBATCH --mail-user=osu1234,osu4321,username@osu.edu
        

        Job log files

        By default, Slurm directs both standard output and standard error to one log file. For job 123456, the log file will be named slurm-123456.out. You can specify a name for the log file:

        #SBATCH --output=myjob.out.%j

         where the %j is replaced by the job ID.

        Identify Project

        Job scripts are required to specify a project account.

        Get a list of current projects by using the OSCfinger command and looking in the SLURM accounts section:

        OSCfinger userex
        Login: userex                                     Name: User Example
        Directory: /users/PAS1234/userex (CREATED)        Shell: /bin/bash
        E-mail: user-ex@osc.edu
        Contact Type: REGULAR
        Primary Group: pas1234
        Groups: pas1234,pas4321
        Institution: Ohio Supercomputer Center
        Password Changed: Dec 11 2020 21:05               Password Expires: Jan 12 2021 01:05 AM
        Login Disabled: FALSE                             Password Expired: FALSE
        SLURM Enabled: TRUE
        SLURM Clusters: owens,pitzer
        SLURM Accounts: pas1234,pas4321 <<===== Look at me !!
        SLURM Default Account: pas1234
        Current Logins:
        

        To specify an account use:

        #SBATCH --account=PAS4321
        

        For more details on errors you may see when submitting a job, see messages from sbatch.


        Executable section

        The executable section of your script comes after the header lines. The content of this section depends entirely on what you want your job to do. We mention just two commands that you might find useful in some circumstances. They should be placed at the top of the executable section if you use them.

        Command logging

        The set -x command (set echo in csh) is useful for debugging your script. It causes each command in the batch file to be printed to the log file as it is executed, with a + in front of it. Without this command, only the actual display output appears in the log file.

        To echo commands in bash or ksh:

        set -x
        

        To echo commands in tcsh or csh:

        set echo on
        

        Signal handling

        Signals to gracefully terminate, and then immediately kill, a job are sent in various circumstances, for example when the job runs out of wall time or is killed for exceeding its memory limit. In either case, the job may stop before all the commands in the job script have executed.

        The sbatch flag --signal can be used to request that a warning signal be sent to the job a given amount of time before it is killed, so that the script can trap the signal and run cleanup commands.

        Below is an example:

        #!/bin/bash
        #SBATCH --job-name=minimal_trap
        #SBATCH --time=2:00
        #SBATCH --nodes=1 --ntasks-per-node=1
        #SBATCH --output=%x.%A.log
        #SBATCH --signal=B:USR1@60
        
        function my_handler() {
          echo "Catching signal"
          touch $SLURM_SUBMIT_DIR/job_${SLURM_JOB_ID}_caught_signal
          cd $SLURM_SUBMIT_DIR
          mkdir $SLURM_JOB_ID
          cp -R $TMPDIR/* $SLURM_JOB_ID
          exit
        }
        
        trap my_handler USR1
        trap my_handler TERM
        
        my_process &
        wait

        Signal handling is typically used to copy output files from a temporary directory to a home or project directory. The example above creates a directory in $SLURM_SUBMIT_DIR and copies everything from $TMPDIR into it; the handler executes only if the job terminates abnormally. In some cases, even with signal handling, the job still may not be able to execute the handler.

        Launching my_process in the background with & and then calling wait is needed so that the user-defined signal can be received by the shell while the process runs. See the signal handling in Slurm section of slurm migration issues for details.

        For other details on retrieving files from unexpectedly terminated jobs see this FAQ.


        Considerations for parallel jobs

        Each processor on our system is fast, but the real power of supercomputing comes from putting multiple processors to work on a task. This section addresses issues related to multithreading and parallel processing as they affect your batch script. For a more general discussion of parallel computing see another document.

        Multithreading involves a single process, or program, that uses multiple threads to take advantage of multiple cores on a single node. The most common approach to multithreading on HPC systems is OpenMP. The threads of a process share a single memory space.

        The more general form of parallel processing involves multiple processes, usually copies of the same program, which may run on a single node or on multiple nodes. These processes have separate memory spaces. When they need to communicate or share data, these processes typically use the Message-Passing Interface (MPI).

        A program may use multiple levels of parallelism, employing MPI to communicate between nodes and OpenMP to utilize multiple processors on each node.

        For more details on building and running MPI/OpenMP software, see the programming environment pages for the Pitzer cluster and Owens cluster.

        While many executables will run on any of our clusters, MPI programs must be built on the system they will run on. Most scientific programs will run faster if they are built on the system where they’re going to run.

        Script issues in parallel jobs

        In a parallel job your script executes on just the first node assigned to the job, so it’s important to understand how to make your job execute properly in a parallel environment. These notes apply to jobs running on multiple nodes.

        You can think of the commands (executable lines) in your script as falling into four categories.

        • Commands that affect only the shell environment. These include such things as cd, module, and export (or setenv). You don’t have to worry about these. The commands are executed on just the first node, but the batch system takes care of transferring the environment to the other nodes.
        • Commands that you want to have execute on only one node. These might include date or echo. (Do you really want to see the date printed 20 times in a 20-node job?) They might also include cp if your parallel program expects files to be available only on the first node. You don’t have to do anything special for these commands.
        • Commands that have parallel execution, including knowledge of the batch system, built in. These include sbcast (parallel file copy) and some application software installed by OSC. You should consult the software documentation for correct parallel usage of application software.
        • Any other command or program that you want to have execute in parallel must be run using srun. Otherwise, it will run on only one node, while the other nodes assigned to the job will remain idle. See examples below.

        srun

        The srun command runs a parallel job on a cluster managed by Slurm. It is highly recommended to use srun when you run a parallel job with the MPI libraries installed at OSC, including MVAPICH2, Intel MPI and OpenMPI.

        The srun command has the form:

        srun [srun-options] progname [prog-args]
        

        where srun-options is a list of options to srun, progname is the program you want to run, and prog-args is a list of arguments to the program. Note that if the program is not in your path or not in your current working directory, you must specify the path as part of the name. 

        By default, srun runs as many copies of progname as there are tasks assigned to the job. For example, if your job requested --ntasks=8, the following command would run 8 a.out processes (with one core per task by default):

        srun a.out
        

        The example above can be modified to pass arguments to a.out. The following example shows two arguments:

        srun a.out abc.dat 123
        

        If the program is multithreaded, or if it uses a lot of memory, it may be desirable to run fewer processes per node. You can specify --ntasks or --ntasks-per-node to do this. If the job above is modified to request --nodes=4, the following command would run 8 copies of a.out, two on each node:

        srun --ntasks-per-node=2 --cpus-per-task=20 a.out abc.dat 123
        # start 2 tasks on each node, and each task is allocated 20 cores
        

        If this is a single-node job, you can skip --ntasks-per-node.

        System commands can also be run with srun. The following commands create a directory named data in the $TMPDIR directory on each node:

        cd $TMPDIR
        srun -n $SLURM_JOB_NUM_NODES --ntasks-per-node=1 mkdir data
        

        sbcast and sgather

        If you use $TMPDIR in a parallel job, you probably want to copy files to or from all the nodes. The sbcast and sgather commands are used for this task. 

        To copy one file into the directory $TMPDIR on all nodes allocated to your job:

        sbcast myprog $TMPDIR/myprog

        To copy one file from the directory $TMPDIR on all nodes allocated to your job: 

        sgather -k $TMPDIR/mydata all_data

        where the option -k keeps the file on the node, and all_data is the name of the file to be created, with the source node name appended; you will see files all_data.node1_name, all_data.node2_name, and so on in the current working directory.

        To recursively copy a directory from all nodes to the directory where the job is submitted:

        sgather -k -r $TMPDIR $SLURM_SUBMIT_DIR/mydata
        

        where mydata is the name of the directory to be created, with the source node name appended.

        You CANNOT use a wildcard (*) as the name of the file or directory for sbcast and sgather.

        Environment variables for MPI

        If your program combines MPI and OpenMP (or another multithreading technique), you should disable processor affinity by setting the environment variable MV2_ENABLE_AFFINITY to 0 in your script. If you don’t disable affinity, all your threads will run on the same core, negating any benefit from multithreading.

        To set the environment variable in bash, include this line in your script:

        export MV2_ENABLE_AFFINITY=0
        

        To set the environment variable in csh, include this line in your script:

        setenv MV2_ENABLE_AFFINITY 0
        

        Environment variables for OpenMP

        The number of threads used by an OpenMP program is typically controlled by the environment variable $OMP_NUM_THREADS. If this variable isn't set, the number of threads defaults to the number of cores you requested per node, although it can be overridden by the program.

        If your job runs just one process per node and is the only job running on the node, the default behavior is what you want. Otherwise, you should set $OMP_NUM_THREADS to a value that ensures that the total number of threads for all your processes on the node does not exceed the number of cores your job requested on that node.

        For example, to set the environment variable to a value of 40 in bash, include this line in your script:

        export OMP_NUM_THREADS=40
        

        For example, to set the environment variable to a value of 40 in csh, include this line in your script:

        setenv OMP_NUM_THREADS 40
        

        Note: Some programs ignore $OMP_NUM_THREADS and determine the number of threads programmatically.


        Batch script examples

        Simple sequential job

        The following is an example of a single-task sequential job that uses $TMPDIR as its working area. It assumes that the program mysci has already been built. The script copies its input file from the directory into $TMPDIR, runs the code in $TMPDIR, and copies the output files back to the original directory.

        #!/bin/bash
        #SBATCH --account=pas1234
        #SBATCH --job-name=myscience
        #SBATCH --time=40:00:00
        
        cp mysci.in $TMPDIR
        cd $TMPDIR    
        /usr/bin/time ./mysci > mysci.hist
        cp mysci.hist mysci.out $SLURM_SUBMIT_DIR
        

        Serial job with OpenMP

        The following example runs a multi-threaded program with 8 cores:

        #!/bin/bash
        #SBATCH --account=pas1234
        #SBATCH --job-name=my_job
        #SBATCH --time=1:00:00
        #SBATCH --ntasks=8
        
        cp a.out $TMPDIR
        cd $TMPDIR
        export OMP_NUM_THREADS=8
        ./a.out > my_results
        cp my_results $SLURM_SUBMIT_DIR
        

        Simple parallel job

        Here is an example of a parallel job that uses 4 nodes, running one process per core. To illustrate the module command, this example assumes a.out was built with the GNU compiler. The module swap command is necessary when running MPI programs built with a compiler other than Intel.

        #!/bin/bash
        #SBATCH --account=pas1234
        #SBATCH --job-name=my_job
        #SBATCH --time=10:00:00
        #SBATCH --nodes=4
        #SBATCH --ntasks-per-node=28
        
        module swap intel gnu
        sbcast a.out $TMPDIR/a.out
        cd $TMPDIR
        srun a.out
        sgather -k -r $TMPDIR $SLURM_SUBMIT_DIR/my_mpi_output
        
        Notice that --ntasks-per-node is set based on a compute node in the Owens cluster, which has 28 cores.
        Check the core counts of other clusters and node types when adjusting this value; the cluster computing page is a good place to start.

        Parallel job with MPI and OpenMP

        This example is a hybrid (MPI + OpenMP) job. It runs one MPI process per node with X threads per process, where X must be less than or equal to physical cores per node (see the note below). The assumption here is that the code was written to support multilevel parallelism. The executable is named hybrid-program.

        #!/bin/bash
        #SBATCH --account=pas1234
        #SBATCH --job-name=my_job
        #SBATCH --time=20:00:00
        #SBATCH --nodes=4
        #SBATCH --ntasks-per-node=28
        
        export OMP_NUM_THREADS=14
        export MV2_CPU_BINDING_POLICY=hybrid
        sbcast hybrid-program $TMPDIR/hybrid-program
        cd $TMPDIR
        srun --ntasks-per-node=2 --cpus-per-task=14 hybrid-program
        sgather -k -r $TMPDIR $SLURM_SUBMIT_DIR/my_hybrid_output

        Note that computing nodes on Pitzer cluster have 40 or 48 cores per node and computing nodes on Owens cluster have 28 cores per node. If you want X to be all physical cores per node and to be independent of clusters, use the input environment variable SLURM_CPUS_ON_NODE:

        export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE
        

         


        Job Submission

        Job scripts are submitted to the batch system using the sbatch command.  Be sure to submit your job on the system you want your job to run on, or use the --cluster=<system> option to specify one.

        Standard batch job

        Most jobs on our system are submitted as scripts with no command-line options. If your script is in a file named myscript:

        sbatch myscript

        In response to this command you’ll see a line with your job ID:

        Submitted batch job 123456

        You’ll use this job ID (numeric part only) in monitoring your job. You can find it again using the squeue -u <username> command.

        When you submit a job, the script is copied by the batch system. Any changes you make subsequently to the script file will not affect the job. Your input files and executables, on the other hand, are not picked up until the job starts running.

        Interactive batch

        The batch system supports an interactive batch mode. This mode is useful for debugging parallel programs or running a GUI program that’s too large for the login node. The resource limits (memory, CPU) for an interactive batch job are the same as the standard batch limits.

        Interactive batch jobs are generally invoked without a script file.

        Custom sinteractive command

        OSC has developed a script to make starting an interactive session simpler.

        The sinteractive command takes simple options and starts an interactive batch session automatically.  However, its behavior can be counterintuitive with respect to numbers of tasks and CPUs.  In addition, jobs launched with sinteractive can show environmental differences compared to jobs launched via other means.  As an alternative, try, e.g.:

        salloc -A <proj-code> --time=500 

        Simple serial

        The example below demonstrates using sinteractive to start a serial interactive job:

        sinteractive -A <proj-code>

        The default if no resource options are specified is for a single core job to be submitted.

        Simple parallel (single node)

        To request a simple parallel job of 4 cores on a single node:

        sinteractive -A <proj-code> -c 4

        To setup for OpenMP executables then enter this command:

        export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

        Parallel (multiple nodes)

        To request 2 whole nodes on Pitzer with a total of 96 cores between both nodes:

        sinteractive -A <proj-code> -N 2 -n 96

        But note that the slurm variables SLURM_CPUS_PER_TASK, SLURM_NTASKS, and SLURM_TASKS_PER_NODE are all 1, so subsequent srun commands to launch parallel executables must explicitly specify the task and cpu numbers desired.  Unless one really needs to run in the debug queues it is in general simpler to start with an appropriate salloc command.

        Use sinteractive --help to view all the options available and their default values.

        Using salloc and srun

        An example of using salloc and srun:

        salloc --account=pas1234 --x11 --nodes=2 --ntasks-per-node=28 --time=1:00:00 
        

        The salloc command requests the resources; the job is interactive. The --x11 flag enables X11 forwarding, which is necessary to use a GUI. You will need to have an X11 server running on your computer to use X11 forwarding; see the getting connected page. The remaining flags in this example are resource requests with the same meaning as the corresponding header lines in a batch file.

        After you enter this line, you’ll see something like the following:

        salloc: Pending job allocation 123456
        salloc: job 123456 queued and waiting for resources

        Your job will be queued just like any job. When the job runs, you’ll see lines like the following:

        salloc: job 123456 has been allocated resources
        salloc: Granted job allocation 123456
        salloc: Waiting for resource configuration
        salloc: Nodes o0001 are ready for job

        At this point, you have an interactive login shell on one of the compute nodes, which you can treat like any other login shell.

        It is important to remember that OSC systems are optimized for batch processing, not interactive computing. If the system load is high, your job may wait for hours in the queue, making interactive batch impractical. Requesting a walltime limit of one hour or less is recommended because your job can run on nodes reserved for debugging.

        Job arrays

        If you submit many similar jobs at the same time, you should consider using a job array. With a single sbatch command, you can submit multiple jobs that will use the same script. Each job has a unique identifier, $SLURM_ARRAY_TASK_ID, which can be used to parameterize its behavior.

        Individual jobs in a job array are scheduled independently, but some job management tasks can be performed on the entire array.

        To submit an array of jobs numbered from 1 to 100, all using the script sim.job:

        sbatch --array=1-100 sim.job

        The script would use the environment variable $SLURM_ARRAY_TASK_ID, possibly as an input argument to an application or as part of a file name.
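
        A minimal sketch of such a script (the account, job name, executable name a.out, and input/output file names are placeholders):

        #!/bin/bash
        #SBATCH --account=pas1234
        #SBATCH --job-name=sim_array
        #SBATCH --time=1:00:00
        
        # each array task reads its own input file, e.g. input_1.dat ... input_100.dat
        ./a.out input_${SLURM_ARRAY_TASK_ID}.dat > output_${SLURM_ARRAY_TASK_ID}.log
        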

        Job dependencies

        It is possible to set conditions on when a job can start. The most common of these is a dependency relationship between jobs.

        For example, to ensure that the job being submitted (with script sim.job) does not start until after job 123456 has finished:

        sbatch --dependency=afterany:123456 sim.job

        Job variables

        It is possible to provide a list of environment variables that are exported to the job. 

        For example, to pass the variable and its value to the job with the script sim.job, use the command:

        sbatch --export=var=value​ sim.job

        Many other options are available, some quite complicated; for more information, see the sbatch online manual by using the command:

        man sbatch

        Monitoring and Managing Your Job

        Several commands allow you to check job status, monitor execution, collect performance statistics or even delete your job, if necessary.

        Status of queued jobs

        There are many possible reasons for a long queue wait — read on to learn how to check job status and for more about how job scheduling works.

        squeue

        Use the squeue command to check the status of your jobs, including whether your job is queued or running and information about requested resources. If the job is running, you can view elapsed time and resources used.

        Here are some examples for user usr1234 and job 123456.

        By itself, squeue lists all jobs in the system.

        To list all the jobs belonging to a particular user:

        squeue -u usr1234
        

        To list the status of a particular job, in standard or alternate (more useful) format:

        squeue -j 123456

        To get more detail about a particular job:

        squeue -j 123456 -l

        You may also filter output by the state of a job. To view only running jobs use:

        squeue -u usr1234 -t RUNNING
        

        Other states can be seen in the JOB STATE CODES section of squeue man page using man squeue.

        Additionally, JOB REASON CODES are shown in the output of squeue -l and are described in the JOB REASON CODES section of man squeue. These codes describe the nodes allocated to running jobs or the reasons a job is pending, which may include:

        • Reason code "MaxCpuPerAccount": A user or group has reached the limit on the number of cores allowed. The rest of the user or group's jobs will be pending until the number of cores in use decreases.
        • Reason code "Dependency": Dependencies among jobs or conditions that must be met before a job can run have not yet been satisfied.

        You can place a hold on your own job using scontrol hold jobid. If you do not understand the state of your job, contact OSC Help for assistance.

        To list blocked jobs:

        squeue -u usr1234 -t PENDING

        The --start option estimates the start time for a pending job. Unfortunately, these estimates are not at all accurate except for the highest priority job in the queue.
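
        To display the scheduler's estimated start time for pending job 123456:

        squeue -j 123456 --start
        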

        Why isn’t my job running?

        There are many reasons that your job may have to wait in the queue longer than you would like, including:

        • System load is high.
        • A downtime has been scheduled and jobs that cannot complete by the start of that downtime are not being started. Check the system notices posted on the OSC Events page or the message of the day, displayed when you log in.
        • You or your group are at the maximum processor count or running job count and your job is being held.
        • Your job is requesting specialized resources, such as GPU nodes or large memory nodes or certain software licenses, that are in high demand and not available.
        • Your job is requesting a lot of resources. It takes time for the resources to become available.
        • Your job is requesting incompatible or nonexistent resources and can never run.
        • Job is unnecessarily stuck in batch hold because of system problems (very rare).

        Priority, backfill and debug reservations

        Priority is a complicated function of many factors, including the processor count and walltime requested, the length of time the job has been waiting and more.

        During each scheduling iteration, the scheduler will identify the highest priority job that cannot currently be run and find a time in the future to reserve for it. Once that is done, the scheduler will then try to backfill as many lower priority jobs as it can without affecting the highest priority job's start time. This keeps the overall utilization of the system high while still allowing reasonable turnaround time for high priority jobs. Short jobs and jobs requesting few resources are the easiest to backfill.

        A small number of nodes are set aside during the day for jobs with a walltime limit of 1 hour or less, primarily for debugging purposes.

        Observing a running job

        You can monitor a running batch job as easily as you can monitor a program running interactively. Simply view the output file in read only mode to check the current output of the job.
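
        For example, to page through the output of job 123456 without modifying it (assuming the default log file name):

        less slurm-123456.out
        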

        Node status

        You may check the status of a node while the job is running by visiting the OSC grafana page and using the "cluster metrics" report.

        Managing your jobs

        Deleting a job

        Situations may arise that call for deletion of a job from the SLURM queue, such as incorrect resource limits, missing or incorrect input files or commands or a program taking too long to run (infinite loop).

        The command to delete a batch job is scancel. It applies to both queued and running jobs.

        Example:

        scancel 123456
        

        If you cannot delete one of your jobs, it may be because of a hardware problem or system software crash. In this case you should contact OSC Help.

        Altering a queued job

        You can alter certain attributes of a job in the queue using the scontrol update command. Use this command to make a change without losing your place in the queue. Please note that you cannot make any alterations to the executable portion of the script, nor can you make any changes after the job starts running.

        The syntax is:

        scontrol update job=<jobid> <args>
        

        The optional arguments consist of one or more SLURM directives in the form of command-line options.

        For example, to change the walltime limit on job 123456 to 5 hours and have email sent when the job ends (only):

        scontrol update job=123456 timeLimit=5:00:00 mailType=End
        

        Placing a hold on a queued job

        If you want to prevent a job from running but leave it in the queue, you can place a hold on it using the scontrol hold command. The job will remain pending until you release it with the scontrol release command. A hold can be useful if you need to modify the input file for a job without losing your place in the queue.

        Examples:

        scontrol hold 123456
        scontrol release 123456
        

        Job statistics

        Include the following commands in your batch script as appropriate to collect job statistics or performance information.

        A simple way to view job information is to use this command at the end of the job:

        scontrol show job=$SLURM_JOB_ID

        XDMoD tool

        You can use the online interactive tool XDMoD to look at usage statistics for jobs. See XDMoD overview for more information.

        date

        The date command prints the current date and time. It can be informative to include it at the beginning and end of the executable portion of your script as a rough measure of time spent in the job.
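
        A minimal sketch of this pattern, where myprog stands in for your own program:

        date
        ./myprog arg1 arg2
        date
        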

        time

        The time utility is used to measure the performance of a single command. It can be used for serial or parallel processes. Add /usr/bin/time to the beginning of a command in the batch script:

        /usr/bin/time myprog arg1 arg2
        

        The result is provided in the following format:

        1. user time (CPU time spent running your program)
        2. system time (CPU time spent by your program in system calls)
        3. elapsed time (wallclock)
        4. percent CPU used
        5. memory, pagefault and swap statistics
        6. I/O statistics

        These results are appended to the job's error log file. Note: Use the full path “/usr/bin/time” to get all the information shown.


        Scheduling Policies and Limits

        The batch scheduler is configured with a number of scheduling policies that you should keep in mind. The policies attempt to balance the competing objectives of reasonable queue wait times and efficient system utilization. The details of these policies differ slightly on each system. Exceptions to the limits can be made under certain circumstances; contact oschelp@osc.edu for details.

        Hardware limits

        Each system differs in the number of processors (cores) and the amount of memory and disk they have per node. We commonly find jobs waiting in the queue that cannot be run on the system where they were submitted because their resource requests exceed the limits of the available hardware. Jobs never migrate between systems, so please pay attention to these limits.

        Notice in particular the large number of standard nodes and the small number of large-memory nodes. Your jobs are likely to wait in the queue much longer for a large-memory node than for a standard node. Users often inadvertently request slightly more memory than is available on a standard node and end up waiting for one of the scarce large-memory nodes, so check your requests carefully.

        See cluster computing for details on the number of nodes for each type.

        Walltime limits per job

        Serial jobs (that is, jobs which request only one node) can run for up to 168 hours, while parallel jobs may run for up to 96 hours.

        Users who can demonstrate a need for longer serial job time may request access to the longserial queue, which allows single-node jobs of up to 336 hours. Longserial access is not automatic. Factors that will be considered include how efficiently the jobs use OSC resources and whether they can be broken into smaller tasks that can be run separately.

        Limits per user and group

        These limits are applied separately on each system.

        An individual user can have up to 128 concurrently running jobs and/or up to 2040 processor cores in use on Pitzer. All the users in a particular group/project can among them have up to 192 concurrently running jobs and/or up to 2040 processor cores in use on Pitzer. Jobs submitted in excess of these limits are queued but blocked by the scheduler until other jobs exit and free up resources.

        A user may have no more than 1000 jobs submitted to each of the parallel and serial job queues. Jobs submitted in excess of this limit will be rejected.

        Priority

        The priority of a job is influenced by a large number of factors, including the processor count requested, the length of time the job has been waiting, and how much other computing has been done by the user and their group over the last several days. However, having the highest priority does not necessarily mean that a job will run immediately, as there must also be enough processors and memory available to run it.

        GPU Jobs

        All GPU nodes are reserved for jobs that request gpus. Short non-GPU jobs are allowed to backfill on these nodes to allow for better utilization of cluster resources.


        SLURM Directives Summary

        SLURM directives may appear as header lines in a batch script or as options on the sbatch command line. They specify the resource requirements of your job and various other attributes. Many of the directives are discussed in more detail elsewhere in this document. The online manual page for sbatch (man sbatch) describes many of them.

        slurm options specified on the command line will take precedence over slurm options in a job script.

        SLURM header lines must come before any executable lines in your script. Their syntax is:

        #SBATCH [option]

        where option can be one of the options in the table below (there are others which can be found in the manual). For example, to request 4 nodes with 40 processors per node:

        #SBATCH --nodes=4
        #SBATCH --ntasks-per-node=40
        #SBATCH --constraint=40core
        

        The syntax for including an option on the command line is:

        sbatch [option]

        For example, the following line submits the script myscript.job but adds the --time directive:

        sbatch --time=00:30:00 myscript.job
        Description and examples of sbatch options
        Option Description
        --time=dd-hh:mm:ss Requests the amount of time needed for the job. Default is one hour.
        --nodes=n Number of nodes to request. Default is one node.
        --ntasks=m or --ntasks-per-node=m Number of cores on a single node or number of tasks per requested node. Default is a single core.
        --gpus-per-node=g Number of GPUs per node. Default is none.
        --mem=xgb Specify the (RAM) main memory required per node.
        --licenses=pkg@osc:N Request use of N licenses for package {software flag}@osc:N.
        --job-name=my_name Sets the job name, which appears in status listings and is used as the prefix in the job’s output and error log files. The job name must not contain spaces.
        --mail-type=START Sets when to send mail to users when the job starts. There are other mail-type options, including END and FAIL.
        --mail-user=<email> Email address(es), separated by commas, to send notifications to based on the mail type.
        --x11 Enable X11 forwarding for use of graphical applications.
        --account=PEX1234 Use the specified project account for job resource charging.
        --cluster=pitzer Explicitly specify which cluster to submit the job to.
        --partition=p Request a specific partition for the resource allocation instead of letting the batch system assign a default partition.
        --gres=pfsdir Request use of $PFSDIR. See scratch space for details.

        Slurm defaults

        It is also possible to create a file which tells slurm to automatically apply certain directives to jobs.

        To start, create the file ~/.slurm/defaults

        One option is to have the file automatically apply a certain project account to job submissions. Simply add the following line to ~/.slurm/defaults:

        account=PEX1234

        The account can also be separated by cluster.

        owens:account=PEX1234
        pitzer:account=PEX4321
        

        Defaults can even be restricted so that they apply only to the sbatch command:

        sbatch:*:account=PEX1234

        Finally, many of the options available for the sbatch command can be set as a default. Here are some examples.

        # always request two cores
        ntasks-per-node=2
        # on pitzer only, request a 2 hour time limit
        pitzer:time=2:00:00
        
        The per-cluster defaults will only apply if one is logged into that cluster and submits there. Using the --cluster=pitzer option while on Owens will not use the defaults defined for Pitzer.
        Using default options may make the sinteractive command unusable, and may also make interactive session requests from OnDemand unusable.
        Please contact OSC Help if there are questions.

        Batch Environment Variable Summary

        The batch system provides several environment variables that you may want to use in your job script. This section is a summary of the most useful of these variables. Many of them are discussed in more detail elsewhere in this document. The ones beginning with SLURM_ are described in the online manual page for sbatch (man sbatch).

        Environment Variable Description
        $TMPDIR The absolute path and name of the temporary directory created for this job on the local file system of each node
        $PFSDIR The absolute path and name of the temporary directory created for this job on the parallel file system
        $SLURM_SUBMIT_DIR The absolute path of the directory from which the batch script was started
        $SLURM_GPUS_ON_NODE Number of GPUs allocated to the job on each node (works with --exclusive jobs).
        $SLURM_ARRAY_TASK_ID Unique identifier (index) assigned to each member of a job array
        $SLURM_JOB_ID The job identifier assigned to the job by the batch system
        $SLURM_JOB_NAME The job name supplied by the user

         

        The following environment variables are often used in batch scripts but are not directly related to the batch system.

         

        Environment Variable Description Comments
        $OMP_NUM_THREADS The number of threads to be used in an OpenMP program See the discussion of OpenMP elsewhere in this document. Set in your script. Not all OpenMP programs use this value.
        $MV2_ENABLE_AFFINITY Thread affinity option for MVAPICH2. Set this variable to 0 in your script if your program uses both MPI and multithreading. Not needed with MPI-1.
        $HOME The absolute path of your home directory. Use this variable to avoid hard-coding your home directory path in your script.

         

        Batch-Related Command Summary

        This section summarizes two groups of batch-related commands: commands that are run on the login nodes to manage your jobs and commands that are run only inside a batch script. Only the most common options are described here.

        Many of these commands are discussed in more detail elsewhere in this document. All have online manual pages (example: man sbatch ) unless otherwise noted.

        In describing the usage of the commands we use square brackets [like this] to indicate optional arguments. The brackets are not part of the command.

        Important note: The batch systems on Pitzer, Ruby, and Owens are entirely separate. Be sure to submit your jobs on a login node for the system you want them to run on. All monitoring while the job is queued or running must be done on the same system also. Your job output, of course, will be visible from both systems.

        Commands for managing your jobs

        These commands are typically run from a login node to manage your batch jobs. The batch systems on Pitzer and Owens are completely separate, so the commands must be run on the system where the job is to be run.

        sbatch

        The sbatch command is used to submit a job to the batch system.

        Usage Description Example
        sbatch [ options ] script Submit a script for a batch job. The options list is rarely used but can augment or override the directives in the header lines of the script.   sbatch sim.job
        sbatch --array=<indexes> [ options ] script Submit an array of jobs sbatch --array=1-100 sim.job
        sinteractive [ options ] Submit an interactive batch job sinteractive -n 4


        squeue

        The squeue command is used to display the status of batch jobs.

        Usage Description Example
        squeue Display all jobs currently in the batch system. squeue
        squeue -j jobid Display information about job jobid. The -j flag uses an alternate format. squeue -j 123456
        squeue -j jobid -l Display long status information about job jobid. squeue -j 123456 -l
        squeue -u username [-l] Display information about all the jobs belonging to user username. squeue -u usr1234

        scancel

        The scancel command may be used to delete a queued or running job.

        Usage Description Example
        scancel jobid Delete job jobid.

        scancel 123456

        scancel jobid Delete all jobs in job array jobid. scancel 123456
        scancel jobid_jobnumber Delete job jobnumber within job array jobid. scancel 123456_14

        slurm output file

        An output file stores the stdout and stderr of a running job and can be viewed to check the job's output while it runs. By default it is located in the directory from which the job was submitted and is named slurm-<jobid>.out

        The output file can also be renamed and saved in any valid directory using the option --output=<filename pattern>
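
        For example, using sbatch's filename patterns (%x for the job name, %j for the job ID), you might redirect output at submission time as shown below; the path is only an illustration and must already exist:

        sbatch --output=$HOME/joblogs/%x-%j.out myjob.sh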

        Environment variables cannot currently be used in Slurm directives inside a job script; such options can only be specified on the sbatch command line at job submission.
        e.g.
        sbatch --output=$HOME/test_slurm.out <job-script> works
        #SBATCH --output=$HOME/test_slurm.out does NOT work in job script
        See slurm migration issues for details.
        Do not delete or modify the output file while your job is running. Doing so could have adverse effects on your running job.

        scontrol

        The scontrol command may be used to modify the attributes of a queued (not running) job. Not all attributes can be altered.

        Usage Description Example
        scontrol update jobid=<jobid> [ option ] Alter one or more attributes of a queued job. The options you can modify are a subset of the directives that can be used when submitting a job.

        scontrol update jobid=123456 numtasks=4

        This command can also be used inside a job like so:
        scontrol show job=$SLURM_JOB_ID

        scontrol hold/release

        The scontrol hold command allows you to place a hold on a queued job. The job will be prevented from running until you release the hold with the scontrol release command.

        Usage Description Example
        scontrol hold jobid Place a user hold on job jobid scontrol hold 123456
        scontrol release jobid Release a user hold previously placed on job jobid scontrol release 123456

        scontrol show

        The scontrol show command can be used to provide details about a job that is running.

        scontrol show job=$SLURM_JOB_ID

        Usage Description Example
        scontrol show job=<jobid> Check the details of a running job. scontrol show job=123456

        estimating start time

        The squeue command can try to estimate when a queued job will start running. The estimate is extremely unreliable, often off by a large margin in either direction.

        Usage Description Example
        squeue -j jobid \
        --Format=username,jobid,account,startTime
        Display estimate of start time.
        squeue -j 123456 \ 
        --Format=username,jobid,account,startTime

         

        Commands used only inside a batch job

        These commands can only be used inside a batch job.

        srun

        Generally used to launch an MPI program during a job. It accepts most of the options that are also available to the sbatch command.

        Usage Example
        srun <prog> srun --ntasks=4 a.out

        sbcast/sgather

        Tool for copying files to/from all nodes allocated in a job.

        Usage
        sbcast <src_file> <nodelocaldir>/<dest_file>
        sgather <src_file> <shareddir>/<dest_file>
         sgather -r <src_dir> <shareddir>/<dest_dir>

        Note: sbcast does not have a recursive cast option, meaning you can't use sbcast -r to scatter multiple files in a directory. Instead, you may use a loop command similar to this:

        cd ${the directory that has the files}

        for FILE in * 
        do
            sbcast -p $FILE $TMPDIR/some_directory/$FILE
        done

        mpiexec

        Use the mpiexec command to run a parallel program or to run multiple processes simultaneously within a job. It is a replacement program for the script mpirun , which is part of the mpich package.
        The OSC version of mpiexec is customized to work with our batch environment. There are other mpiexec programs in existence, but it is imperative that you use the one provided with our system.

        Usage Description Example
        mpiexec progname [ args ] Run the executable program progname in parallel, with as many processes as there are processors (cores) assigned to the job (nodes*ppn).

        mpiexec myprog

        mpiexec yourprog abc.dat 123

        mpiexec -ppn 1 progname [ args ] Run only one process per node. mpiexec -ppn 1 myprog
        mpiexec -ppn num progname [ args ] Run the specified number of processes on each node. mpiexec -ppn 3 myprog
        mpiexec -tv [ options ] progname [ args ] Run the program with the TotalView parallel debugger.

        mpiexec -tv myprog

        mpiexec -n num progname [ args ]

        mpiexec -np num progname [ args ] Run only the specified number of processes. ( -n and -np are equivalent.) Does not spread processes out evenly across nodes. mpiexec -n 3 myprog
        The options above apply to the MVAPICH2 and IntelMPI installations at OSC. See the OpenMPI software page for mpiexec usage with OpenMPI.

        pbsdcp

        The pbsdcp command is a distributed copy command for the Slurm environment. It copies files to or from each node of the cluster assigned to your job. This is needed when copying files to directories which are not shared between nodes, such as $TMPDIR.

        Options are -r for recursive and -p to preserve modification times and modes.

        Usage Description Example
        pbsdcp [-s] [ options ] srcfiles  target “Scatter”. Copy one or more files from shared storage to the target directory on each node (local storage). The -s flag is optional.

        pbsdcp -s infile1 infile2 $TMPDIR

        pbsdcp model.* $TMPDIR

        pbsdcp -g [ options ] srcfiles  target “Gather”. Copy the source files from each node to the shared target directory. Wildcards must be enclosed in quotes. pbsdcp -g '$TMPDIR/outfile*' $SLURM_SUBMIT_DIR

        Note: In gather mode, if files on different nodes have the same name, they will overwrite each other. In the -g example above, the file names may have the form outfile001 , outfile002 , etc., with each node producing a different set of files.

         

        License software flag usage information

         

        We have licensed applications such as ansys, abaqus, and Schrodinger. These applications have a license server with a limited number of licenses, and the licenses must be checked out each time the software is used. One problem is that the job scheduler, Slurm, does not communicate with the license server. As a result, a job can be launched even when there are not enough licenses available, and it will then fail due to insufficient licenses.

        To prevent this from happening, you need to add a software flag to your job script. The software flag registers your license request with the Slurm license pool so that Slurm can avoid launching jobs when not enough licenses are available.

        The syntax for software flags is

        #SBATCH -L {software flag}@osc:N

        where N is the number of licenses requested. If you need more than one software flag, you can use

        #SBATCH -L {software flag1}@osc:N,{software flag2}@osc:M

        For example, if you need 1 ansys and 10 ansyspar license features, then you can use

        #SBATCH -L ansys@osc:1,ansyspar@osc:10
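
        To put the flag in context, here is a minimal batch-script sketch; the walltime, core counts, module name, and solver command are placeholders, not an ANSYS recipe:

        #!/bin/bash
        #SBATCH --job-name=ansys_demo
        #SBATCH --nodes=1 --ntasks-per-node=10
        #SBATCH --time=1:00:00
        #SBATCH --account=PEX1234
        #SBATCH -L ansys@osc:1,ansyspar@osc:10

        # load and run the licensed application (placeholder commands; check the software page for the real ones)
        module load ansys
        <ansys solver command here>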

        For interactive jobs, you can use, for example,

        sinteractive -A {project account} -L ansys@osc:1

        When you use the OnDemand VDI, Desktop, or Schrodinger apps, you can put software flags in the "Licenses" field. For OnDemand Abaqus/CAE, COMSOL Multiphysics, and Stata, the software flags are placed automatically. For OnDemand Ansys Workbench, please check "Reserve ANSYS Parallel Licenses" if you need "ansyspar" license features. 

        We have the full list of software associated with software flags in the table below. For more information, please click the link on the software name.  

          Software flag Note
        abaqus abaqus(350), abaquscae(10)  
        ansys ansys(50), ansyspar(900)  
        comsol comsolscript(3)  
        schrodinger epik(10), glide(20), ligprep(10), macromodel(10), qikprep(10)  
        starccm starccm(80), starccmpar(4,000)  
        stata stata(5)  
        usearch usearch(1)  
        ls-dyna, mpp-dyna lsdyna(1,000)  

        *The number within the parentheses refers to the total number of licenses for each software flag

        It is critical that you follow these instructions, because incomplete requests can affect other users' jobs as well. We actively monitor software flag usage, and we will reach out to you if your jobs do not follow these instructions. Failing to make corrections may result in temporary removal from the license server. We have a Grafana dashboard showing license and software flag usage: software flag requests are represented as "SLURM" and actual license usage as "License Server". 

        License usage checking tool

        To check your license usage, you can use ~support/bin/myLicenseCheck.

          usage: ~support/bin/myLicenseCheck [-h,--help] SOFTWARE
        
            -h, --help      print help messages
            SOFTWARE        supported software: ansys, abaqus, comsol, schrodinger, and starccm.
          

        This tool reports how many licenses you are actually using from the license server and how many you have requested from Slurm. It does not break this down by job, so if you want to check a specific job, please make sure it is your only running job while you use the tool. 
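
        For example, to check your ansys license usage from the command line:

        ~support/bin/myLicenseCheck ansys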

         

        For assistance

        Contact OSC Help for assistance if there are any questions.

         

        Messages from sbatch

        sbatch messages

        shell warning

        Submitting a job without specifying the proper shell will return a warning like below:

        sbatch: WARNING: Job script lacks first line beginning with #! shell. Injecting '#!/bin/bash' as first line of job script.

        Errors

        If an error is encountered, the job is rejected.

        Not specifying a project account

        It is required to specify an account for a job to run. Please use the --account=<project-code> option to do this.

        sbatch: error: ERROR: Job invalid: Must specify account for job
        sbatch: error: Job submit/allocate failed: Unspecified error

        Incorrect resource configuration

        If one makes a request for a node that doesn't exist, the job is rejected.

        salloc: error: Job submit/allocate failed: Requested node configuration is not available

        An example is requesting a regular compute node while also requesting more memory than a compute node has.

        Specify wrong account

        If a user tries to set the --account option with a project that they are not on, then the job is rejected.

        sbatch: error: Job submit/allocate failed: Invalid account or account/partition combination specified

        Using a restricted project in a slurm job

        If a user submits a job and uses a project that is restricted, the following message will be shown and the job will not be submitted:

        sbatch: error: AssocGrpSubmitJobsLimit
        sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

        Leading whitespace in job name

        Leading whitespace is not supported in Slurm job names. Your job will be rejected with an error message if you submit a job whose name begins with whitespace:

        sbatch: error: Invalid directive found in batch script: name

        You can fix this by removing leading whitespace in the job name.

        Script is empty or only contains whitespace

        An empty file is not permitted to be submitted (this includes whitespace-only files).

        sbatch: error: Batch script is empty!

        or

        sbatch: error: Batch script contains only whitespace!

         

        Supercomputer: 
        Service: 

        Troubleshooting Batch Problems

        License problems

        If you get a license error when you try to run a third-party software application, it means either the licenses are all in use or you’re not on the access list for the license. Very rarely there could be a problem with the license server. You should read the software page for the application you’re trying to use and make sure you’ve complied with all the procedures and are correctly requesting the license. Contact OSC Help with any questions.

        My job is running slower than it should

        Here are a few of the reasons your job may be running slowly:

        • Your job has exceeded available physical memory and is swapping to disk. This is always a bad thing in an HPC environment as it can slow down your job dramatically. Either cut down on memory usage, request more memory, or spread a parallel job out over more nodes.
        • Your job isn’t using all the nodes and/or cores you intended it to use. This is usually a problem with your batch script.
        • Your job is spawning more threads than the number of cores you requested. Context switching involves enough overhead to slow your job.
        • You are doing too much I/O to the network file servers (home and project directories), or you are doing an excessive number of small I/O operations to the parallel file server. An I/O-bound program will suffer severe slowdowns with improperly configured I/O.
        • You didn’t optimize your program sufficiently.
        • You got unlucky and are being hurt by someone else’s misbehaving job. As much as we try to isolate jobs from each other, sometimes a job can cause system-level problems. If you have run your job before and know that it usually runs faster, OSC staff can check for problems.

        Someone deleted my job!

        If your job is misbehaving, it may be necessary for OSC staff to delete it. Common problems are using up all the virtual memory on a node or performing excessive I/O to a network file server. If this happens you will be contacted by OSC Help with an explanation of the problem and suggestions for fixing it. We appreciate your cooperation in this situation because, much as we try to prevent it, one user’s jobs can interfere with the operation of the system.

        Occasionally a problem not caused by your job will cause an unrecoverable situation and your job will have to be deleted. You will be contacted if this happens.

        Why can’t I delete my job?

        If you can’t delete your job, it usually means a node your job was running on has crashed and the job is no longer running. OSC staff will delete the job.

        My job is stuck.

        There are multiple reasons that your job may appear to be stuck. If a node that your job is running on crashes, your job may remain in the running job queue long after it should have finished. In this case you will be contacted by OSC and will probably have to resubmit your job.

        If you conclude that your job is stuck based on what you see in the slurm output file, it’s possible that the problem is an illusion. This comment applies primarily to code you develop yourself. If you print progress information, for example, “Input complete” and “Setup complete”, the output may be buffered for efficiency, meaning it’s not written to disk immediately, so it won’t show up. To have it written immediately, you’ll have to flush the buffer; most programming languages provide a way to do this.

        My job crashed. Can I recover my data?

        If your job failed due to a hardware failure or system problem, it may be possible to recover your data from $TMPDIR. If the failure was due to hitting the walltime limit, the data in $TMPDIR would have been deleted immediately. Contact OSC Help for more information.

        The trap command can be used in your script to save your data in case your job terminates abnormally.
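
        A minimal sketch of that idea, combining trap with the signal-handling pattern described later on this page; the executable a.out, the 60-second warning, and the copy destination are assumptions to adapt to your own job:

        #!/bin/bash
        #SBATCH --job-name=save_scratch_data
        #SBATCH --nodes=1 --ntasks-per-node=1
        #SBATCH --time=1:00:00
        #SBATCH --account=PEX1234
        #SBATCH --signal=B:USR1@60    # ask Slurm to signal the batch shell 60 seconds before the walltime limit

        # copy node-local scratch data back to the submit directory on early termination
        trap 'cp -r $TMPDIR/* $SLURM_SUBMIT_DIR; exit' TERM USR1

        cd $TMPDIR
        # placeholder for the real work; run in the background and wait so the shell can catch signals
        $SLURM_SUBMIT_DIR/a.out &
        wait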

        Contacting OSC Help

        If you are having a problem with the batch system on any of OSC's machines, you should send email to oschelp@osc.edu. Including the following information will assist HPC Client Services staff in diagnosing your problem quickly:

        1. Name
        2. OSC User ID (username)
        3. Name of the system you are using
        4. Job ID
        5. Job script
        6. Job output and/or error messages (preferably in context)

        Or use the support request page.

        batch email notifications

        Occasionally, jobs that experience problems may generate emails from staff or automated systems at the center with some information about the nature of the problem. This page provides additional information about the various emails sent, and steps that can be taken to address the problem.

        batch emails

        All emails from OSC about jobs will come from slurm@osc.edu, oschelp@osc.edu, or another email address with the domain @osc.edu.

        regular job emails

        These emails can be turned on/off using the appropriate slurm directives. Other email addresses can also be specified. See the mail options section of job scripts page.

        Email type Description
        job began/end Job began or ended. These are normal emails.
        job aborted Job has ended in an abnormal state.
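
        For example, the following directives (the address is a placeholder) request email at job start, end, and failure:

        #SBATCH --mail-type=BEGIN,END,FAIL
        #SBATCH --mail-user=myname@example.com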

        other emails

        There is no option to turn these emails off, as they require us to contact the user who submitted the job. We can work with you if these situations are expected; please contact OSC Help in this case.

        Email type Description
        Deleted by administrator

        OSC staff may delete running jobs if:

        • The job is using so much memory that it threatens to crash the node it is running on.
        • The job is using more resources than it requested and is interfering with other jobs running on the same node.
        • The job is causing excessive load on some part of the system, typically a network file server.
        • The job is still running at the start of a scheduled downtime.

        OSC staff may delete queued jobs if:

        • The job requests non-existent resources.
        • The job was intended for one system but was submitted on another.
        • The job can never run because it requests combinations of resources that are disallowed by policy.
        • The user’s credentials are blocked on the system the job was submitted on.
        Emails exceed expected volume Job emails may be delayed if too many are queued to be sent to a single email address. This is to prevent OSC from being blacklisted by the email server.
        failure due to hardware/software problem The node(s) or software that a job was using had a critical issue and the job failed.
        overuse of physical memory (RAM)

        The node that was in use crashed due to it being out of memory.

        See out-of-memory (OOM) or excessive memory usage page for more information.

        Job requeued A job may be requeued explicitly by a system administrator or after a node failure.
        GPFS unmount

        An issue with gpfs may have affected the job. This includes directories located in:

        • /fs/ess
        Filling up /tmp

        Job failed after exhausting the space in a node's local /tmp directory. 

        Please request either an entire node or use scratch. 

        For assistance

        Contact OSC Help for assistance if there are any questions.

         

        Slurm Migration

        Overview

        Slurm, which stands for Simple Linux Utility for Resource Management, is a widely used open-source HPC resource management and scheduling system that originated at Lawrence Livermore National Laboratory.

        OSC decided to implement Slurm for job scheduling and resource management, replacing the Torque resource manager and Moab scheduling system it previously used, over the course of 2020.

        Phases of Slurm Migration

        It is expected that on Jan 1, 2021, both Pitzer and Owens clusters will be using Slurm. OSC will be switching to Slurm on Pitzer with the deployment of the new Pitzer hardware in September 2020. Owens migration to Slurm will occur later this fall.

        PBS Compatibility Layer

        During the Slurm migration, OSC enables the PBS compatibility layer provided by Slurm in order to make the transition as smooth as possible. Therefore, PBS batch scripts that worked in the previous Torque/Moab environment mostly still work in Slurm. However, we encourage you to start converting your PBS batch scripts to Slurm scripts because:

        • PBS compatibility layer usually handles basic cases, and may not be able to handle some complicated cases 
        • Slurm has many features that are not available in Moab/Torque, and the layer will not provide access to those features
        • OSC may turn off the PBS compatibility layer in the future

        Please check the following pages on how to submit a Slurm job:

        Further Reading

        Supercomputer: 
        Service: 

        How to Prepare Slurm Job Scripts

        Known Issue

        Combining the --ntasks and --ntasks-per-node options in a job script can cause unexpected resource allocation and placement due to a bug in Slurm 23. OSC users are strongly encouraged to review their job scripts for jobs that request both --ntasks and --ntasks-per-node. Jobs should request either --ntasks or --ntasks-per-node, not both.

        As the first step, you can submit your PBS batch script as you did before to see whether it works or not. If it does not work, you can either follow this page for step-by-step instructions, or read the tables below to convert your PBS script to Slurm script by yourself. Once the job script is prepared, you can refer to this page to submit and manage your jobs.

        Job Submission Options

        Use Torque/Moab Slurm Equivalent
        Script directive #PBS #SBATCH
        Job name -N <name> --job-name=<name>
        Project account -A <account> --account=<account>
        Queue or partition -q queuename --partition=queuename

        Wall time limit

        -l walltime=hh:mm:ss --time=hh:mm:ss
        Node count -l nodes=N --nodes=N
        Process count per node -l ppn=M --ntasks-per-node=M
        Memory limit -l mem=Xgb --mem=Xgb (it is MB by default)
        Request GPUs -l nodes=N:ppn=M:gpus=G --nodes=N --ntasks-per-node=M --gpus-per-node=G
        Request GPUs in default mode -l nodes=N:ppn=M:gpus=G:default

        --nodes=N --ntasks-per-node=M --gpus-per-node=G --gpu_cmode=shared

        Require pfsdir -l nodes=N:ppn=M:pfsdir --nodes=N --ntasks-per-node=M --gres=pfsdir
        Require 'vis'  -l nodes=N:ppn=M:gpus=G:vis --nodes=N --ntasks-per-node=M --gpus-per-node=G --gres=vis

        Require special property

        -l nodes=N:ppn=M:property --nodes=N --ntasks-per-node=M --constraint=property

        Job array

        -t <array indexes> --array=<indexes>

        Standard output file

        -o <file path> --output=<file path>/<file name> (path must exist, and you must specify the name of the file)

        Standard error file

        -e <file path> --error=<file path>/<file name> (path must exist, and you must specify the name of the file)

        Job dependency

        -W depend=after:jobID[:jobID...]

        -W depend=afterok:jobID[:jobID...]

        -W depend=afternotok:jobID[:jobID...]

        -W depend=afterany:jobID[:jobID...]

        --dependency=after:jobID[:jobID...]

        --dependency=afterok:jobID[:jobID...]

        --dependency=afternotok:jobID[:jobID...]

        --dependency=afterany:jobID[:jobID...]

        Request event notification -m <events>

        --mail-type=<events>

        Note: multiple mail-type requests may be specified in a comma-separated list:

        --mail-type=BEGIN,END,NONE,FAIL

        Email address -M <email address> --mail-user=<email address>
        Software flag -l software=pkg1+1%pkg2+4 --licenses=pkg1@osc:1,pkg2@osc:4
        Require reservation -l advres=rsvid --reservation=rsvid
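
        As a usage sketch for the job dependency rows above (script names are placeholders), a post-processing job can be held until a first job completes successfully:

        # submit the first job; --parsable prints only the job ID
        JOBID=$(sbatch --parsable first_step.sh)
        # this job starts only if the first job finishes successfully
        sbatch --dependency=afterok:$JOBID post_process.sh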

        Job Environment Variables

        Info Torque/Moab Environment Variable Slurm Equivalent
        Job ID $PBS_JOBID $SLURM_JOB_ID
        Job name $PBS_JOBNAME $SLURM_JOB_NAME
        Queue name $PBS_QUEUE $SLURM_JOB_PARTITION
        Submit directory $PBS_O_WORKDIR $SLURM_SUBMIT_DIR
        Node file cat $PBS_NODEFILE srun hostname | sort -n
        Number of processes $PBS_NP $SLURM_NTASKS
        Number of nodes allocated $PBS_NUM_NODES $SLURM_JOB_NUM_NODES
        Number of processes per node $PBS_NUM_PPN $SLURM_TASKS_PER_NODE
        Walltime $PBS_WALLTIME $SLURM_TIME_LIMIT
        Job array ID $PBS_ARRAYID $SLURM_ARRAY_JOB_ID
        Job array index $PBS_ARRAY_INDEX $SLURM_ARRAY_TASK_ID
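
        A minimal job-array sketch using the Slurm variables above; the executable and the input/output naming scheme are placeholders:

        #!/bin/bash
        #SBATCH --job-name=array_demo
        #SBATCH --array=1-10
        #SBATCH --time=0:10:00
        #SBATCH --account=PEX1234

        # each array task processes its own numbered input file
        ./a.out input_${SLURM_ARRAY_TASK_ID}.dat > output_${SLURM_ARRAY_TASK_ID}.dat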

        Environment Variables Specific to OSC

        Environment variable Description
        $TMPDIR Path to a node-specific temporary directory (/tmp) for a given job
        $PFSDIR Path to the scratch storage; only present if --gres request includes pfsdir.
        $SLURM_GPUS_ON_NODE Number of GPUs allocated to the job on each node (works with --exclusive jobs)
        $SLURM_JOB_GRES The job's GRES request
        $SLURM_JOB_CONSTRAINT The job's constraint request
        $SLURM_TIME_LIMIT Job walltime in seconds

        Commands in a Batch Job

        Use Torque/Moab Environment Variable Slurm Equivalent
        Launch a parallel program inside a job mpiexec <args> srun <args>
        Scatter a file to node-local file systems pbsdcp <file> <nodelocaldir>

        sbcast <src_file> <nodelocaldir>/<dest_file>

        * Note: sbcast does not have a recursive cast option, meaning you can't use sbcast -r to scatter multiple files in a directory. Instead, you may use a loop command similar to this:

        cd ${the directory that has the files}

        for FILE in * 
        do
            sbcast -p $FILE $TMPDIR/some_directory/$FILE
        done
        Gather node-local files to a shared file system pbsdcp -g <file> <shareddir>

        sgather <src_file> <shareddir>/<dest_file>
         sgather -r <src_dir> <sharedir>/dest_dir>

        Supercomputer: 

        How to Submit, Monitor and Manage Jobs

        Submit Jobs

        Use Torque/Moab Command Slurm Equivalent
        Submit batch job qsub <jobscript> sbatch <jobscript>
        Submit interactive job qsub -I [options]

        sinteractive [options]

        salloc [options]

        Notice: If a node fails, then the running job will be automatically resubmitted to the queue and will only be charged for the resubmission time and not the failed time.
        One can use  --mail-type=ALL option in their script to receive notifications about their jobs. Please see the slurm sbatch man page for more information.
        Another option is to disable resubmission using --no-requeue so that the job does not get resubmitted on node failure (see the sketch below).
        A final note: if the job does not get requeued after a failure, a charge will be incurred for the time the job ran before it failed.
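
        A brief sketch of the directives mentioned in the notes above:

        #SBATCH --mail-type=ALL    # email for all job events, including requeue after a node failure
        #SBATCH --no-requeue       # do not resubmit this job if a node fails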

        Interactive jobs

        Submitting interactive jobs is a bit different in Slurm. When the job is ready, one is logged into the login node from which the job was submitted. From there, one can then log in to one of the reserved nodes.

        You can use the custom tool sinteractive as:

        [xwang@pitzer-login04 ~]$ sinteractive
        salloc: Pending job allocation 14269
        salloc: job 14269 queued and waiting for resources
        salloc: job 14269 has been allocated resources
        salloc: Granted job allocation 14269
        salloc: Waiting for resource configuration
        salloc: Nodes p0591 are ready for job
        ...
        ...
        [xwang@p0593 ~] $
        # can now start executing commands interactively
        

        Or, you can use salloc as:

        [user@pitzer-login04 ~] $ salloc -t 00:05:00 --ntasks-per-node=3
        salloc: Pending job allocation 14209
        salloc: job 14209 queued and waiting for resources
        salloc: job 14209 has been allocated resources
        salloc: Granted job allocation 14209
        salloc: Waiting for resource configuration
        salloc: Nodes p0593 are ready for job
        
        # normal login display
        $ squeue
        JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
        14210 serial-48     bash     usee  R       0:06      1 p0593
        [user@pitzer-login04 ~]$ srun --jobid=14210 --pty /bin/bash
        # normal login display
        [user@p0593 ~] $
        # can now start executing commands interactively
        
        

        Manage Jobs

        Use Torque/Moab Command Slurm Equivalent
        Delete a job* qdel <jobid>  scancel <jobid>
        Hold a job qhold <jobid> scontrol hold <jobid>
        Release a job qrls <jobid>  scontrol release <jobid>

        * Users can delete their own jobs. A PI/project admin can delete jobs submitted to a project they administer. 

        Monitor Jobs

        Use Torque/Moab Command Slurm Equivalent
        Job list summary qstat or showq squeue
        Detailed job information qstat -f <jobid> or checkjob <jobid> sstat -a <jobid> or scontrol show job <jobid>
        Job information by a user qstat -u <user> squeue -u <user>

        View job script

        (system admin only)

        js <jobid> jobscript <jobid>
        Show expected start time showstart <job ID>

        squeue --start --jobs=<jobid>

        Supercomputer: 

        Steps on How to Submit Jobs

        How to Submit Interactive jobs

        There are different ways to submit interactive jobs.

        Using qsub

        The qsub command is patched locally to handle interactive jobs, so in most cases you can use the qsub command as before:

        [xwang@pitzer-login04 ~]$ qsub -I -l nodes=1 -A PZS0712
        salloc: Pending job allocation 15387
        salloc: job 15387 queued and waiting for resources
        salloc: job 15387 has been allocated resources
        salloc: Granted job allocation 15387
        salloc: Waiting for resource configuration
        salloc: Nodes p0601 are ready for job
        ...
        [xwang@p0601 ~]$ 
        # can now start executing commands interactively
        

        Using sinteractive

        You can use the custom tool sinteractive as:

        [xwang@pitzer-login04 ~]$ sinteractive
        salloc: Pending job allocation 14269
        salloc: job 14269 queued and waiting for resources
        salloc: job 14269 has been allocated resources
        salloc: Granted job allocation 14269
        salloc: Waiting for resource configuration
        salloc: Nodes p0591 are ready for job
        ...
        ...
        [xwang@p0593 ~] $
        # can now start executing commands interactively
        

        Using salloc

        It is a little more complicated if you use salloc. Below is a simple example:

        [user@pitzer-login04 ~] $ salloc -t 00:30:00 --ntasks-per-node=3 srun --pty /bin/bash
        salloc: Pending job allocation 2337639
        salloc: job 2337639 queued and waiting for resources
        salloc: job 2337639 has been allocated resources
        salloc: Granted job allocation 2337639
        salloc: Waiting for resource configuration
        salloc: Nodes p0002 are ready for job
        
        # normal login display
        [user@p0002 ~]$
        # can now start executing commands interactively
        
        

        How to Submit Non-interactive jobs

        Submit PBS job Script

        Since we have the compatibility layer installed, your current PBS scripts may still work as they are, so you should test them and see if they submit and run successfully. Submit your PBS batch script as you did before to see whether it works or not. Below is a simple PBS job script pbs_job.txt that calls for a parallel run:

        #PBS -l walltime=1:00:00
        #PBS -l nodes=2:ppn=40
        #PBS -N hello
        #PBS -A PZS0712
        
        cd $PBS_O_WORKDIR
        module load intel
        mpicc -O2 hello.c -o hello
        mpiexec ./hello > hello_results
        

        Submit this script on Pitzer using the command qsub pbs_job.txt , and this job is scheduled successfully as shown below:

        [xwang@pitzer-login04 slurm]$ qsub pbs_job.txt 
        14177

        Check the Job

        You can use the jobscript command to check the job information:

        [xwang@pitzer-login04 slurm]$ jobscript 14177
        -------------------- BEGIN jobid=14177 --------------------
        #!/bin/bash
        #PBS -l walltime=1:00:00
        #PBS -l nodes=2:ppn=40
        #PBS -N hello
        #PBS -A PZS0712
        
        cd $PBS_O_WORKDIR
        module load intel
        mpicc -O2 hello.c -o hello
        mpiexec ./hello > hello_results
        
        -------------------- END jobid=14177 --------------------
        Please note that there is an extra line #!/bin/bash added at the beginning of the job script in the output. This line is added by Slurm's qsub compatibility script because a Slurm job script must have #!<SHELL> as its first line.

        You will get this message explicitly if you submit the script using the command sbatch pbs_job.txt

        [xwang@pitzer-login04 slurm]$ sbatch pbs_job.txt 
        sbatch: WARNING: Job script lacks first line beginning with #! shell. Injecting '#!/bin/bash' as first line of job script.
        Submitted batch job 14180
        

        Alternative Way: Convert PBS Script to Slurm Script

        An alternative way is that we convert the PBS job script (pbs_job.txt) to Slurm script (slurm_job.txt) before submitting the job. The table below shows the comparisons between the two scripts (see this page for more information on the job submission options):

        Explanations Torque Slurm
        Line that specifies the shell No need
        #!/bin/bash
        Resource specification

         

        #PBS -l walltime=1:00:00
        #PBS -l nodes=2:ppn=40
        #PBS -N hello
        #PBS -A PZS0712
        #SBATCH --time=1:00:00
        #SBATCH --nodes=2 --ntasks-per-node=40
        #SBATCH --job-name=hello
        #SBATCH --account=PZS0712
        
        Variables, paths, and modules
        cd $PBS_O_WORKDIR
        module load intel
        cd $SLURM_SUBMIT_DIR 
        module load intel
        Launch and run application
        mpicc -O2 hello.c -o hello
        mpiexec ./hello > hello_results
        mpicc -O2 hello.c -o hello
        srun ./hello > hello_results
        In this example, the line cd $SLURM_SUBMIT_DIR can be omitted in the Slurm script because a Slurm job always starts in your submission directory, unlike the Torque/Moab environment, where a job always starts in your home directory.

        Once the script is ready, you submit the script using the command sbatch slurm_job.txt

        [xwang@pitzer-login04 slurm]$ sbatch slurm_job.txt 
        Submitted batch job 14215
        Supercomputer: 

        Slurm Migration Issues

        This page documents the known issues for migrating jobs from Torque to Slurm.

        $PBS_NODEFILE and $SLURM_JOB_NODELIST

        Please be aware that $PBS_NODEFILE is a file while $SLURM_JOB_NODELIST is a string variable. 

        The analog on Slurm to cat $PBS_NODEFILE is srun hostname | sort -n 

        Environment variables are not evaluated in job script directives

        Environment variables do not work in a Slurm directive inside a job script.

        The job script job.txt including  #SBATCH --output=$HOME/jobtest.out won't work in Slurm. Please use the following instead:

        sbatch --output=$HOME/jobtest.out job.txt 

        Using mpiexec with Intel MPI

        Intel MPI (all versions through 2019.x) is configured to support PMI and Hydra process managers. It is recommended to use srun as the MPI program launcher. This is a possible symptom of using  mpiexec/mpirun:

        srun: error: PMK_KVS_Barrier duplicate request from task 0

        as well as:

        MPI startup(): Warning: I_MPI_PMI_LIBRARY will be ignored since the hydra process manager was found

        If you prefer using mpiexec/mpirun with SLURM, please add the following code to the batch script before running any MPI executable:

        unset I_MPI_PMI_LIBRARY 
        export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=0   # the option -ppn only works if you set this before

        Executables with a certain MPI library using SLURM PMI2 interface

        e.g.

        Stopping mpi4py Python processes during an interactive job session while working only from a login node:

        $ salloc -t 15:00 --ntasks-per-node=4
        salloc: Pending job allocation 20822
        salloc: job 20822 queued and waiting for resources
        salloc: job 20822 has been allocated resources
        salloc: Granted job allocation 20822
        salloc: Waiting for resource configuration
        salloc: Nodes p0511 are ready for job
        # don't login to one of the allocated nodes, stay on the login node
        $ module load python/3.7-2019.10
        $ source activate testing
        (testing) $ srun --quit-on-interrupt python mpi4py-test.py
        # enter <ctrl-c>
        ^Csrun: sending Ctrl-C to job 20822.5
        Hello World (from process 0)
        process 0 is sleeping...
        Hello World (from process 2)
        process 2 is sleeping...
        Hello World (from process 3)
        process 3 is sleeping...
        Hello World (from process 1)
        process 1 is sleeping...
        Traceback (most recent call last):
        File "mpi4py-test.py", line 16, in <module>
        time.sleep(15)
        KeyboardInterrupt
        Traceback (most recent call last):
        File "mpi4py-test.py", line 16, in <module>
        time.sleep(15)
        KeyboardInterrupt
        Traceback (most recent call last):
        File "mpi4py-test.py", line 16, in <module>
        time.sleep(15)
        KeyboardInterrupt
        Traceback (most recent call last):
        File "mpi4py-test.py", line 16, in <module>
        time.sleep(15)
        KeyboardInterrupt
        srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
        slurmstepd: error: *** STEP 20822.5 ON p0511 CANCELLED AT 2020-09-04T10:13:44 ***
        # still in the job and able to restart the processes
        (testing)
        

        pbsdcp with Slurm

        pbsdcp with the gather option sometimes does not work correctly. It is suggested to use sbcast for the scatter option and sgather for the gather option instead of pbsdcp. Please be aware that there is no wildcard (*) option for sbcast / sgather, and there is no recursive option for sbcast. In addition, the destination file/directory must exist. 

        Here are some simple examples:

        sbcast <src_file> <nodelocaldir>/<dest_file>
        sgather <src_file> <shareddir>/<dest_file>
        sgather -r --keep <src_dir> <shareddir>/<dest_dir>

        Signal handling in slurm

        The below script needs to use a wait command for the user-defined signal USR1 to be received by the process.

        The sleep process is backgrounded using & wait so that the bash shell can receive signals and execute the trap commands instead of ignoring the signals while the sleep process is running.

        #!/bin/bash
        #SBATCH --job-name=minimal_trap
        #SBATCH --time=2:00
        #SBATCH --nodes=1 --ntasks-per-node=1
        #SBATCH --output=%x.%A.log
        #SBATCH --signal=B:USR1@60
        
        function my_handler() {
          echo "Catching signal"
          touch $SLURM_SUBMIT_DIR/job_${SLURM_JOB_ID}_caught_signal
          exit
        }
        
        trap my_handler USR1
        trap my_handler TERM
        
        sleep 3600 &
        wait
        

        reference: https://bugs.schedmd.com/show_bug.cgi?id=9715

        'mail' does not work; use 'sendmail'

        The 'mail' command does not work in a batch job; use 'sendmail' instead, as in:

        sendmail user@example.com <<EOF
        subject: Output path from $SLURM_JOB_ID
        from: user@example.com
        ...
        EOF

        'srun' with no arguments allocates a single task when using 'sinteractive'

        srun with no arguments allocates a single task when using sinteractive to request an interactive job, even if you request more than one task. Please pass the needed arguments to srun:

        [xwang@owens-login04 ~]$ sinteractive -n 2 -A PZS0712
        ...
        [xwang@o0019 ~]$ srun hostname
        o0019.ten.osc.edu
        [xwang@o0019 ~]$ srun -n 2 hostname
        o0019.ten.osc.edu
        o0019.ten.osc.edu
        

        Be careful not to overwrite a Slurm batch output file for a running job

        Unlike a PBS batch output file, which lived in a user-non-writeable directory while the job was running, a Slurm batch output file resides under the user's home directory while the job is running.  File operations, such as editing and copying, are permitted.  Please be careful to avoid such operations while the job is running.  In particular, this batch script idiom is no longer correct (e.g., for the default job output file of name $SLURM_SUBMIT_DIR/slurm-jobid.out):

        cd $SLURM_SUBMIT_DIR
        cp -r * $TMPDIR
        cd $TMPDIR
        ...
        cp *.out* $SLURM_SUBMIT_DIR 

        Please report any problems you encounter when using Slurm through OSC's online support form or by contacting OSC Help.

         

         
        Supercomputer: 

        Knowledge Base

        This knowledge base is a collection of important, useful information about OSC systems that does not fit into a guide or tutorial, and is too long to be answered in a simple FAQ.

        Account Consolidation Guide

        Initial account consolidation took place during the July 17th, 2018 downtime
        Please contact OSC Help if you need further information. 

        Single Account / Multiple Projects

        If you work with several research groups, you previously had a separate account for each group. This meant multiple home directories, multiple passwords, etc. Over the years there have been requests for a single login system. We've now put that in place.

        How will this affect you?

        If you work with multiple groups, you'll need to be aware of how this works.

        • It will be very important to use the correct project code for batch job charging.
        • Managing the sharing of files between your projects (groups) is a little more complicated.
        • In most cases, you will only need to fill out software license agreements once.

        The single username 

        We requested those with multiple accounts to choose a preferred username. If one was not selected by the user, we selected one for them. 

        The preferred username will be your only active account; you will not be able to log in or submit jobs with the other accounts. 

        Checking the groups of a username

        To check all groups of a username (USERID), use the command:

        groups USERID
        

        or

        OSCfinger USERID

        The first one from the output is your primary group, which is the project code (PROJECTID) this username (USERID) was created under.

        All of the project codes your user account is under are determined by the groups displayed. One can also use the OSC Client Portal to look at their current projects.

        A user may not be a member of a project even though the user is still in the group for that project. This is because a primary group is not removed when a user is removed from their first project. OSCfinger will list the primary group and project groups separately (if a project appears only as a user's primary group and is not listed in the 'Groups' section, then the user is not in that project). The OSC Client Portal will also show current project members.

        Changing the primary group for a login session

        You can change the primary group of your username (USERID) to any UNIX group (GROUP) that username (USERID) belongs to during the login session using the command:

        newgrp GROUP
        

        This change is only valid during this login session. If you log out and log back in, your primary group is changed back to the default one.

        Check previous user accounts

        There is no available tool to check all of your previous active accounts. We sent an email to each impacted user providing the information on your preferred username and previous accounts. Please refer to that email (sent on July 11, subject "Multiple OSC Accounts - Your Single Username").

        Batch job

        How to specify the charging project

        It will be very important that you make sure a batch job is charged against the correct research project code.

        Specify a project to charge the job to using the -A flag. e.g. The following example will charge to project PAS1234.

        #SBATCH -A PAS1234

        Batch limits policy

        The job limit per user remains the same. That is to say, though your jobs are charged against different project codes, the total number of jobs and cores your user account can use on each system is still restricted by the previous user-based limit. Therefore, consolidating multiple user accounts into one preferred user account may affect the work of some users.

        Please check our batch limit policy on each system for more details.

        Data Management

        Managing multiple home directories

        Data from your non-preferred accounts will remain in those home directories; the ownership of the files will be updated to your preferred username, the newly consolidated account. You can access your other home directories using the command cd /absolute/path/to/file

        You will need to consolidate all files to your preferred username as soon as possible because we plan to purge the data in the future. Please contact OSC Help if you need information on your other home directories in order to access the files.  

        Previous files associated with your other usernames

        • Files associated with your non-preferred accounts will have their ownership changed to your preferred username. 
        • These files won't count against your home directory file quota. 
        • There will be no change to files and quotas on the project and scratch file systems.

        Change group of a file

        Log in with your preferred username (P_USERID) and create a new file whose owner and group are your preferred username (P_USERID) and primary project code (P_PROJECTID). Then change the group of the newly created file (FILE) using the command:

        chgrp PROJECTID FILE
        

        Managing file sharing in a batch job

        In the Linux file system, every file has an owner and a group. By default, the group (project code) assigned to a file is the primary group of the user who creates it. This means that even if you change the charged account for a batch job, any files created will still be associated with your primary group.

        To change the group for new files you will need to update your primary group prior to submitting your slurm script using the newgrp command.

        It is important to remember that groups are used in two different ways: for resource use charging and file permissions. In the simplest case, if you are a member of only one research group/project, you won't need either option above. If you are in multiple research groups and/or multiple projects, you may need something like:

        newgrp PAS0002
        sbatch -A PAS0002 myjob.sh
        

        OnDemand users

        If you use the OnDemand Files app to upload files to the OSC filesystem, the group ownership of uploaded files will be your primary group.

        Software licenses

        • We will merge all your current agreements if you have multiple accounts.  
        • In many cases, you will only need to fill out software license agreements once.
        • Some vendors may require you to sign an updated agreement.  
        • Some vendors may also require the PI of each of your research groups/project codes to sign an agreement.
        Supercomputer: 

        Changes of Default Memory Limits

        Problem Description

        Our current GPFS file system is a distributed system with significant interactions between the clients. Because the compute nodes are GPFS file system clients, a certain amount of memory on each node needs to be reserved for these interactions. As a result, the maximum physical memory on each node allowed to be used by users' jobs is reduced in order to keep the file system performing well. In addition, using swap memory is not allowed.  

        The table below summarizes the maximum physical memory allowed for each type of nodes on our systems:

        Owens Cluster

        NODE TYPE PHYSICAL MEMORY per node MAXIMUM MEMORY ALLOWED per node
        Regular node 128GB 118GB
        Huge memory node 1536GB (1.5TB) 1493GB

        Pitzer Cluster

        Node type physical memory per node Maximum memory allowed per Node 
        Regular node 192GB 178GB
        Dual GPU node 384GB 363GB
        Quad GPU node 768 GB 744 GB
        Large memory node 768 GB 744 GB
        Huge memory node 3072GB (3TB) 2989GB

        Solutions When You Need Regular Nodes

        If you do not request memory explicitly in your job (no --mem)

        Your job can be submitted and scheduled as before, and resources will be allocated according to your requests for cores/nodes ( --nodes=XX --ntasks=XX ).  If you request a partial node, the memory allocated to your job is proportional to the number of cores requested; if you request the whole node, the memory allocated to your job is based on the information summarized in the above tables.

        If you have a multi-node job ( nodes>1 ), your job will be assigned entire nodes with the maximum memory allowed per node and charged for the entire nodes regardless of the --ntasks request.

        If you do request memory explicitly in your job (with --mem)

        If you request memory explicitly in your script, please re-visit your script according to the following pages:

        Pitzer: https://www.osc.edu/resources/technical_support/supercomputers/pitzer/batch_limit_rules 

        Owens: https://www.osc.edu/resources/technical_support/supercomputers/owens/batch_limit_rules 
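
        For instance, a whole-node memory request on an Owens regular node might look like the sketch below, based on the 118GB limit in the table above; please confirm current values on the batch limit pages linked above before copying them:

        # request the full usable memory of an Owens regular node
        #SBATCH --nodes=1
        #SBATCH --mem=118gb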

        Supercomputer: 
        Service: 

        Community Accounts

        Some projects may wish to have a common account to allow for different privileges than their regular user accounts. These are called community accounts, in that they are shared among multiple users, belong to a project, and may be able to submit jobs. Community accounts are accessed using the sudo command.

        A community sudo account has the following characteristics:

        • Selected users in the project have sudo privileges to become the community sudo user.
        • The community sudo account has different privileges than the other users in the project, which may or may not include job submission.
        • Community accounts can not be used to SSH into OSC systems directly.
          • The community sudo account can only be accessed after logging in as a regular user and then using the sudo command described below. The community sudo account does not have a regular password set and is therefore not subject to the normal password change policy.
          • SSH key exchange to access OSC systems from outside of OSC with community accounts is disabled. Key exchange may be used to SSH between hosts within an OSC cluster.

        How to Request a Community Account

        The PI of the project looking to create a community account needs to send an email to OSC Help with the following information:

        • A preferred username for the community account
        • The project code that the community account will be created under
        • The elevated privileges desired (such as job submission)
        • The users who will be able to access the account via sudo
        • The desired shell for the community account

        OSC will then evaluate the request.

        Logging into a Community Account

        Users who have been given access to the community account by the PI will be able to use the following command to log in:

        sudo -u <community account name> /bin/bash 
        

        Once you successfully enter your own password you will assume the identity of the community account user.

        Submitting Jobs From a Community Account

        You can submit jobs the same way as with your normal user account. The email associated with the community account is noreply@osc.edu. Please add email recipients in your job script if you would like to receive notifications from the job.

        Add multiple email recipients in a job using

        #SBATCH --mail-user=<email address>
        

        Adding Users to a Community Account

        The PI of the project needs to send an email to OSC Help with the username of the person that they would like to add.

        Checking jobs in XDMoD

        To check the statistics of the jobs submitted by the community account in XDMoD, the PI of the project will need to send an email to OSC Help with the username of the community account.

        Data Management

        The owner of the data on the community account will be the community account user. Any user that has assumed the community account user identity will have access.

        Access via OnDemand

        The only way to access a community account is via a terminal session. This can be either via an SSH client or the terminal app within OnDemand. Other apps within OnDemand such as Desktops or specific software can not be utilized with a community account.

        Compilation Guide

        As a general recommendation, we suggest selecting the newest compilers available for a new project. For repeatability, you may not want to change compilers in the middle of an experiment.

        Pitzer Compilers

        The Skylake processors that make up the original Pitzer cluster and the Cascade Lake processors in its expansion support the AVX512 instruction set, but you must set the correct compiler flags to take advantage of it. AVX512 has the potential to speed up your code by a factor of 8 or more, depending on the compiler and options you would otherwise use.

With the Intel compilers, use -xHost and -O2 or higher. With the GNU compilers, use -march=native and -O3. The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.

        This advice assumes that you are building and running your code on Pitzer. The executables will not be portable.  Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

        Intel (recommended)

             NON-MPI   MPI
FORTRAN 90   ifort     mpif90
C            icc       mpicc
C++          icpc      mpicxx

        Recommended Optimization Options

The -O2 -xHost options are recommended with the Intel compilers. (For more options, see the "man" pages for the compilers.)

        OpenMP

        Add this flag to any of the above:  -qopenmp  
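For example, a minimal sketch of a serial and an MPI build with the recommended flags and OpenMP enabled (the source file and output names are placeholders, and the exact module names may differ; check module avail):

module load intel                                     # plus an MPI module (e.g. mvapich2) for the MPI build
ifort -O2 -xHost -qopenmp mycode.f90 -o mycode        # serial/OpenMP build
mpif90 -O2 -xHost -qopenmp mycode.f90 -o mycode_mpi   # MPI build with the same optimization flags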

        PGI

             NON-MPI              MPI
FORTRAN 90   pgfortran or pgf90   mpif90
C            pgcc                 mpicc
C++          pgc++                mpicxx

        Recommended Optimization Options

        The   -fast  option is appropriate with all PGI compilers. (For more options, see the "man" pages for the compilers)

        Note: The PGI compilers can generate code for accelerators such as GPUs. Description of these capabilities is beyond the scope of this guide.

        OpenMP

        Add this flag to any of the above:  -mp

        GNU

             NON-MPI    MPI
FORTRAN 90   gfortran   mpif90
C            gcc        mpicc
C++          g++        mpicxx

        Recommended Optimization Options

        The  -O2 -march=native  options are recommended with the GNU compilers. (For more options, see the "man" pages for the compilers)

        OpenMP

        Add this flag to any of the above:  -fopenmp

         

        Owens Compilers

        The Haswell and Broadwell processors that make up Owens support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use.

With the Intel compilers, use -xHost and -O2 or higher. With the GNU compilers, use -march=native and -O3. The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.

        This advice assumes that you are building and running your code on Owens. The executables will not be portable. Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

        Intel (recommended)

             NON-MPI   MPI
FORTRAN 90   ifort     mpif90
C            icc       mpicc
C++          icpc      mpicxx

        Recommended Optimization Options

The -O2 -xHost options are recommended with the Intel compilers. (For more options, see the "man" pages for the compilers.)

        OpenMP

        Add this flag to any of the above:  -qopenmp  or  -openmp

        PGI

             NON-MPI              MPI
FORTRAN 90   pgfortran or pgf90   mpif90
C            pgcc                 mpicc
C++          pgc++                mpicxx

        Recommended Optimization Options

        The   -fast  option is appropriate with all PGI compilers. (For more options, see the "man" pages for the compilers)

        Note: The PGI compilers can generate code for accelerators such as GPUs. Description of these capabilities is beyond the scope of this guide.

        OpenMP

        Add this flag to any of the above:  -mp

        GNU

             NON-MPI    MPI
FORTRAN 90   gfortran   mpif90
C            gcc        mpicc
C++          g++        mpicxx

        Recommended Optimization Options

        The  -O2 -march=native  options are recommended with the GNU compilers. (For more options, see the "man" pages for the compilers)

        OpenMP

        Add this flag to any of the above:  -fopenmp
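For example, a minimal sketch of a GNU build on Owens with the recommended flags and OpenMP enabled (the source file and output names are placeholders, and the exact module names may differ; check module avail):

module load gnu                                          # plus an MPI module for the MPI build
gcc -O2 -march=native -fopenmp mycode.c -o mycode        # serial/OpenMP build
mpicc -O2 -march=native -fopenmp mycode.c -o mycode_mpi  # MPI build with the same optimization flags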

        Further Reading:

        Intel Compiler Page

        PGI Compiler Page

GNU Compiler Page


        Firewall and Proxy Settings

        Connections to OSC

In order for users to access OSC resources through the web, your firewall rules should allow connections to the following publicly facing IP ranges. Otherwise, users may be blocked or denied access to our services.

        • 192.148.248.0/24
        • 192.148.247.0/24
        • 192.157.5.0/25

The following TCP ports should be opened:

        • 80 (HTTP)
        • 443 (HTTPS)
        • 22 (SSH)

        The following domain should be allowed:

        • *.osc.edu

Users may follow the instructions under "Test your configuration" below to ensure that their systems are not blocked from accessing our services. Users who are still unsure whether their network is blocking these hosts or ports should contact their local IT administrator.

        Test your configuration

        [Windows] Test your connection using PuTTY

        1. Open the PuTTY application.
2. Enter an IP address from the ranges listed in "Connections to OSC" in the "Host Name" field.
        3. Enter 22 in the "Port" field.
        4. Click the 'Telnet' radio button under "Connection Type".
        5. Click "Open" to test the connection.
        6. Confirm the response. If the connection is successful, you will see a message that says "SSH-2.0-OpenSSH_5.3", as shown below. If you receive a PuTTY error, consult your system administrator for network access troubleshooting.

[Screenshot: PuTTY connection test showing the SSH version response]

        [OSX/Linux] Test your configuration using telnet

        1. Open a terminal.
2. Type telnet IPaddress 22 (here, IPaddress is an IP address from the ranges listed in "Connections to OSC"; see the example after this list).
        3. Confirm the connection. 
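A minimal sketch of the test (the host below is a placeholder; substitute an address from the ranges listed in "Connections to OSC" or an OSC hostname):

telnet <osc-host-or-IP> 22   # a successful connection prints an SSH banner such as "SSH-2.0-OpenSSH_..."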

        Connections from OSC

All outbound network traffic from OSC's compute nodes is routed through a network address translation (NAT) host using the following IPs:

        • 192.148.249.248
        • 192.148.249.249
        • 192.148.249.250
        • 192.148.249.251

        IT and Network Administrators

Please use the above information to assist users in accessing our resources.

Occasionally new services may be stood up using hosts and ports not described here. If you believe our list needs correcting, please let us know at oschelp@osc.edu.


        Job and storage charging

        Ohio academics should visit the fee structure page for pricing information.
        All others should contact OSC Sales for pricing information.
        If there are questions/concerns on charging at OSC, please contact OSC Help.

        Job charging based on usage

Jobs are charged based on length, number of cores, amount of memory, single-node versus multi-node usage, and type of resource.

        Length and number of cores

Jobs are recorded in terms of core-hours used. Core-hours can be calculated by:

        number of cores * length of job

        e.g.

        A 4 core job that runs for 2 hours would have a total core-hour usage of:

        4 cores * 2 hours = 8 core-hours

        Amount of Memory

Each core has a default amount of memory paired with it, which differs by cluster. When a job requests a specific amount of memory that does not match the default pairing, charging determines whether the number of effective cores should be used instead.

        The value for effective cores will be used in place of the actual cores used if and only if it is larger than the explicit number of cores requested.

        effective cores = memory / memory per core

        e.g.

        A job that requests  nodes=1:ppn=3  will still be charged for 3 cores of usage.

However, a job that requests nodes=1:ppn=1,mem=12GB on a cluster where the default memory allocated per core is 4GB will be charged for 3 cores' worth of usage.

        effective cores = 12GB / (4GB/core) = 3 core
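As a rough sketch of this rule (the 4 GB-per-core default is only an example and varies by cluster), the charged core count is the larger of the requested cores and the effective cores:

# Sketch: estimate charged cores for a single-node job (example default of 4 GB per core)
requested_cores=1
requested_mem_gb=12
mem_per_core_gb=4

effective_cores=$(( (requested_mem_gb + mem_per_core_gb - 1) / mem_per_core_gb ))           # ceiling division
charged_cores=$(( effective_cores > requested_cores ? effective_cores : requested_cores ))
echo "charged for ${charged_cores} cores"                                                   # prints: charged for 3 cores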
        

        Single versus Multi-Node

        If requesting a single node, then a job is charged for only the cores/processors requested. However, when requesting multiple nodes the job is charged for each entire node regardless of the number of cores/processors requested.

        Type of resource

The type of node requested can change the dollar rate charged per core-hour. There are currently three types of nodes: regular, huge-memory, and GPU.

        If a gpu node is used, there are two metrics recorded, core-hours and gpu-hours. Each has a different dollar-rate, and these are combined to determine the total charges for usage.

        Ohio academics should visit the fee structure page for pricing information.
        All others should contact OSC Sales for pricing information.

        e.g.

        A job requests nodes=1:ppn=8:gpus=2 and runs for 1 hour.

        The usage charge would be calculated using:

        8 cores * 1 hour = 8 core-hours

        and

        2 gpus * 1 hour = 2 gpu-hours
        

        and combined for:

        8 core-hours + 2 gpu-hours

        Project storage charging based on quota

        Projects that request extra storage be added are charged for that storage based on the total space reserved (i.e. your quota). 

The charge is based on the storage quota in TB and a monthly rate:

        storage quota in TB * rate per month
        Ohio academics should visit the fee structure page for pricing information.
        All others should contact OSC Sales for pricing information.
        Please contact OSC Help with questions/concerns.

        Out-of-Memory (OOM) or Excessive Memory Usage

        Problem description

A common problem on our systems is a user's job causing a node to run out of memory, or using more memory than it is allocated when the node is shared with other jobs.

        If a job exhausts both the physical memory and the swap space on a node, it causes the node to crash. With a parallel job, there may be many nodes that crash. When a node crashes, the OSC staff has to manually reboot and clean up the node. If other jobs were running on the same node, the users have to be notified that their jobs failed.

        If your job requests less than a full node, for example, --ntasks-per-node=4, it may be scheduled on a node with other running jobs. In this case, your job is entitled to a memory allocation proportional to the number of cores requested. For example, if a system has 4.5 GB per core and you request one core, it is your responsibility to make sure your job uses no more than 4.5 GB. Otherwise your job will interfere with the execution of other jobs.

        Example errors

        # OOM in a parallel program launched through srun

        slurmstepd: error: Detected 1 oom-kill event(s) in StepId=14604003.0 cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
        
        srun: error: o0616: task 0: Out Of Memory

        # OOM in program run directly by the batch script of a job

        slurmstepd: error: Detected 1 oom-kill event(s) in StepId=14604003.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.

        Background

        Each node has a fixed amount of physical memory and a fixed amount of disk space designated as swap space. If your program and data don’t fit in physical memory, the virtual memory system writes pages from physical memory to disk as necessary and reads in the pages it needs. This is called swapping. 

        You can find the amount of usable memory on our system at default memory limits. You can see the memory and swap values for a node by running the Linux command free on the node.
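For example, on a node you are logged in to:

free -h   # show total, used, and free physical memory and swap in human-readable units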

        In the world of high-performance computing, swapping is almost always undesirable. If your program does a lot of swapping, it will spend most of its time doing disk I/O and won’t get much computation done. Swapping is not supported at OSC. Please consider the suggestions below.

        Suggested solutions

        Here are some suggestions for fixing jobs that use too much memory. Feel free to contact OSC Help for assistance with any of these options.

        Some of these remedies involve requesting more processors (cores) for your job. As a general rule, we require you to request a number of processors proportional to the amount of memory you require. You need to think in terms of using some fraction of a node rather than treating processors and memory separately. If some of the processors remain idle, that’s not a problem. Memory is just as valuable a resource as processors.

        Request whole node or more processors

        Jobs requesting less than a whole node are those that request less than the total number of available cores. These jobs can be problematic for two reasons. First, they are entitled to use an amount of memory proportional to the cores requested; if they use more they interfere with other jobs. Second, if they cause a node to crash, it typically affects multiple jobs and multiple users.

        If you’re sure about your memory usage, it’s fine to request just the number of processors you need, as long as it’s enough to cover the amount of memory you need. If you’re not sure, play it safe and request all the processors on the node.
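For example, a sketch of requesting a whole standard Pitzer node (48 cores, matching the example later on this page) so the job is entitled to the node's full memory; alternatively, the Slurm --exclusive flag requests exclusive access to a node:

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48   # all cores on a standard 48-core node; the job may use the node's full memory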

        Reduce memory usage

        Consider whether your job’s memory usage is reasonable in light of the work it’s doing. The code itself typically doesn’t require much memory, so you need to look mostly at the data size.

        If you’re developing the code yourself, look for memory leaks. In MATLAB look for large arrays that can be cleared.

        An out-of-core algorithm will typically use disk more efficiently than an in-memory algorithm that relies on swapping. Some third-party software gives you a choice of algorithms or allows you to set a limit on the memory the algorithm will use.

        Use more nodes for a parallel job

        If you have a parallel job you can get more total memory by requesting more nodes. Depending on the characteristics of your code you may also need to run fewer processes per node.

        Here’s an example. Suppose your job on Pitzer includes the following lines:

        #SBATCH --nodes=2
        #SBATCH --ntasks-per-node=48
        …
        mpiexec mycode

This job has 2 nodes' worth of memory available to it (specifically 178 GB * 2 nodes = 356 GB). The mpiexec command by default runs one process per core, which in this case is 96 copies of mycode.

        If this job uses too much memory you can spread those 96 processes over more nodes. The following lines request 4 nodes, giving you a total of 712 GB of memory (4 nodes *178 GB). The -ppn 24 option on the mpiexec command says to run 24 processes per node instead of 48, for a total of 96 as before.

        #SBATCH --nodes=4
        #SBATCH --ntasks-per-node=48
        …
        mpiexec -ppn 24 mycode

        Since parallel jobs are always assigned whole nodes, the following lines will also run 24 processes per node on 4 nodes.

        #SBATCH --nodes=4
        #SBATCH --ntasks-per-node=24
        …
        mpiexec mycode

        Request large-memory nodes

Pitzer has 4 huge-memory nodes, each with ~3 TB of memory and 80 cores. Owens has 16 huge-memory nodes, each with ~1.5 TB of memory and 48 cores.

        Since there are so few of these nodes, compared to hundreds of standard nodes, jobs requesting them will often have a long wait in the queue. The wait will be worthwhile, though, if these nodes solve your memory problem. See the batch limit pages for Owens and Pitzer to learn how to request huge or large memory nodes.

        How to monitor your memory usage

        Grafana

If a job is currently running, or you know the timeframe during which it ran, then Grafana can be used to look at the individual nodes' memory usage for the job. Look for the graph that shows memory usage.

        OnDemand

        You can also view node status graphically using the OSC OnDemand Portal. Under "Jobs" select "Active Jobs." Click on "Job Status" and scroll down to see memory usage.

        XDMoD

To view detailed metrics about jobs a day or more after they have completed, you can use the XDMoD tool. It can show the memory usage of jobs over time as well as other metrics. Please see the job viewer how-to for more information on looking up jobs.

        sstat

The Slurm command sstat can be used to obtain information about running jobs.

        sstat --format=AveRSS,JobID -j <job-id> -a

        During job

Query the job's cgroup, which controls the amount of memory the job can use:

        # return current memory usage
        cat /sys/fs/cgroup/memory/slurm/uid_$(id -u)/job_$SLURM_JOB_ID/memory.usage_in_bytes | numfmt --to iec-i
        # return memory limit
        cat /sys/fs/cgroup/memory/slurm/uid_$(id -u)/job_$SLURM_JOB_ID/memory.limit_in_bytes | numfmt --to iec-i
        

        Notes

        If it appears that your job is close to crashing a node, we may preemptively delete the job.

        If your job is interfering with other jobs by using more memory than it should be, we may delete the job.

        In extreme cases OSC staff may restrict your ability to submit jobs. If you crash a large number of nodes or continue to submit problematic jobs after we have notified you of the situation, this may be the only way to protect the system and our other users. If this happens, we will restore your privileges as soon as you demonstrate that you have resolved the problem.

        For details on retrieving files from unexpectedly terminated jobs see this FAQ.

        For assistance

        OSC has staff available to help you resolve your memory issues. See our client support request page for contact information.


        XDMoD Tool

        XDMoD Overview

        XDMoD, which stands for XD Metrics on Demand, is an NSF-funded open source tool that provides a wide range of metrics pertaining to resource utilization and performance of high-performance computing (HPC) resources, and the impact these resources have in terms of scholarship and research.

        How to log in

        Visit OSC's XDMoD (xdmod.osc.edu) and click 'Sign In' in the upper left corner of the page.

        screenshot of the XDMoD displaying the above text

        A login window will appear. Click the button 'Login here.' under the 'Sign in with Ohio SuperComputer Center:', as shown below:
        screenshot of the XDMoD displaying the above text
         
        This redirects to a login page where one can use their OSC credentials to sign in.
        screenshot of the XDMoD displaying the above text

        XDMoD Tabs

When you first log in, you will be directed to the Summary tab. The different XDMoD tabs are located near the top of the page. You can change tabs simply by clicking on the one you would like to view. By default, you will see data from the previous month, but you can change the start and end dates and then click 'Refresh' to update the timeframe being reported.

        screenshot of the XDMoD displaying the above text

        Summary:

The Summary tab consists of a duration selector toolbar and a summary information bar, followed by a select set of charts representative of the usage. The Summary tab provides a dashboard that presents summary statistics and selected charts that are useful to the role of the current user. More information can be found in the XDMoD User Manual.

        Usage:

The Usage tab consists of a chart selection tree on the left and a chart viewer on the right of the page. The Usage tab provides a convenient way to browse all the realms present in XDMoD. More information can be found in the XDMoD User Manual.

        Metric Explorer:

The Metric Explorer allows one to create complex plots containing multiple metrics. It has many point-and-click features that allow the user to easily add, filter, and modify the data and the format in which it is presented. More information can be found in the XDMoD User Manual.

        App Kernels:

The Application Kernels tab consists of three sub-tabs, each with a specific goal to make viewing application kernels simple and intuitive: the Application Kernels Viewer, the Application Kernels Explorer, and the Reports sub-tabs. More information can be found in the XDMoD User Manual.

        Report Generator:

This tab will allow you to manage reports. The left region provides a listing of any reports you have created. The right region displays any charts you have chosen to make available for building a report. More information can be found in the XDMoD User Manual.

        Job Viewer:

The Job Viewer tab displays information about individual HPC jobs and includes a search interface that allows jobs to be selected based on a wide range of filters. This tab also contains the SUPReMM module; more information on the SUPReMM module can be found below in this documentation. More information can be found in the XDMoD User Manual.

        About:

        This tab will display information about XDMoD.

        Different Roles

XDMoD utilizes roles to restrict access to data and to elements of the user interface such as tabs. OSC clients hold the 'User' role by default after logging into OSC XDMoD with their OSC credentials. With the 'User' role, users are able to view all data pertaining to their personal utilization. They are also able to view information regarding their allocations and quality-of-service data via the Application Kernel Explorer, and to generate custom reports. We also support the 'Principal Investigator' role, which has access to all data available to a user, as well as detailed information for any users included on their allocations or projects.

        References, Resources, and Documentation

         

         


        Job Viewer

        The Job Viewer Tab displays information about individual HPC jobs and includes a search interface that allows jobs to be selected based on a wide range of filters:

        1. Click on the Job Viewer tab near the top of the page.

        2. Click Search in the top left-hand corner of the page

        screenshot of the XDMoD displaying the above text

             3. If you know the Resource and Job Number, use the quick search lookup form discussed in 4a. If you would like more options, use the advanced search discussed in 4b.

             4a. For a quick job lookup, select the resource and enter the job number and click 'Search'.

        screenshot of the XDMoD displaying the above text

             4b. Within the Advanced Search form, select a timeframe and Add one or more filters. Click to run the search on the server.

        screenshot of the XDMoD displaying the above text

             5. Select one or more Jobs. Provide the 'Search Name', and click 'Save Results' at the bottom of this window to view data about the selected jobs.

             6. To view data in more details for the selected job, under the Search History, click on the Tree and select a Job.

7. More information can be found in the 'Job Viewer' section of the XDMoD User Manual.


        XDMoD - Checking Job Efficiency

        Intro

XDMoD can be used to look at the performance of past jobs. This tutorial will explain how to retrieve this job performance data and how to use it to best utilize OSC resources.

        First, log into XDMoD.

        See XDMoD Tool webpage for details about XDMoD and how to log in.

You will be sent to the Summary tab in XDMoD.


Click on the Metric Explorer tab, then navigate to the Metric Catalog and click SUPREMM to show the various metric options. Then click the "Avg CPU %: User: weighted by core hour" metric.

A drop-down menu will appear for grouping the data to be viewed. Group by "CPU User Value".

         

This will provide a time-series chart showing the average CPU user % (weighted by core hours, over all jobs that were executing), grouped into ranges of 10 by CPU User value.


One can change the time period by adjusting the preset duration value, or by entering dates in the "Start" and "End" boxes, either by selecting the calendar or by manually entering dates in the format 'yyyy-mm-dd'. Once the desired time period is entered, the "Refresh" button will be highlighted yellow; click it to reload data for that time period into the chart.


        Once the data is loaded, click on one of the data points, then navigate to "Drilldown" and select "Job Wall Time". This will group the job data by the amount of wall time used.


Generally, the lower the CPU User value, the less efficient that job was. This chart can now be used to drill into detailed information on specific jobs. Click one of the points again and select "Show raw data".


        This will bring up a list of jobs included in that data point. Click one of the jobs shown.


After loading, this brings up the "Job Viewer" tab, showing details about the selected job.


It is important to explain some of the values immediately visible, such as "CPU User", "CPU User Balance", and "Memory Headroom".

        The "CPU User" section gives a ratio for the amount of CPU time used by the job during the time that job was executing, think of it as how much "work" the CPUs were doing doing execution.


        The "CPU User Balance" section gives a measure for how evenly spread the "work" was between all the CPUs that were allocated to this job while it was executing. (Work here means how well was the CPU utilized, and it is preferred that the CPUs be close to fully utilized during job execution.)


        Finally, "Memory Headroom" gives a measure for the amount of memory used for that job. It can be difficult to understand what a good value is here. Generally, it is recommended to not specifically request an amount of memory unless the job requires it. When making those memory requests, it can be beneficial to investigate the amount of memory that is actually used by the job and plan accordingly. Below, a value closer to 0 means a job used most of the memory allocated to it and a value closer to 1 means that the job used less memory than the job was allocated.


        This information is useful for better utilizing OSC resources by having better estimates of the resources that jobs may require.