How to Log In and Request Resources

How to Connect

  • SSH Method

To log in to Owens in the Slurm environment, ssh to one of the following hostnames:

owens-slurm.osc.edu 
owens-login03.hpc.osc.edu 

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@owens-slurm.osc.edu
ssh <username>@owens-login03.hpc.osc.edu

From there, you are connected to the login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests. Please use batch jobs for any compute-intensive or memory-intensive work. See the following sections for details.
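
For example, rather than running a compute-intensive program directly on the login node, you would submit it to the scheduler and monitor it from the command line. A minimal sketch, assuming a job script named myjob.sh (hypothetical name):

sbatch myjob.sh        # submit the job script to Slurm
squeue -u $USER        # list your queued and running jobs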

  • OnDemand Method

You can also log in to Owens in the Slurm environment with our OnDemand tool. First log in to OnDemand; once logged in, open a shell by clicking "Clusters" and then selecting ">_Owens SLURM Shell Access".

Instructions on how to connect to OnDemand can be found on the OnDemand documentation page.

Node Information

  • 42 "dense compute" nodes
    • Dual Intel Xeon E5-2680 v4 Broadwell
    • 28 cores per node @ 2.4GHz
    • 128GB RAM
  • 10 GPU nodes
    • Dual Intel Xeon E5-2680 v4 Broadwell
    • 28 cores per node @ 2.4GHz
    • NVIDIA Tesla P100 (Pascal) GPUs with 16GB memory
    • 128GB RAM
  • 2 huge memory nodes
    • Intel Xeon E5-4830 v3 Haswell
    • 48 cores per node @ 2.1GHz
    • 1536GB RAM
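
After logging in, you can check what Slurm reports for these nodes. The sketch below uses standard Slurm commands; the node name is a hypothetical example and the partition names shown will depend on OSC's configuration:

sinfo -o "%P %D %c %m"      # partitions, node counts, cores per node, memory per node
scontrol show node o0001    # full details for one node (hypothetical node name)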

Batch Specifics

Please remember that Slurm is used for job scheduling and resource management; refer to this page for more information. We use Slurm syntax in all of the discussions below. You may also request resources using PBS syntax, because OSC provides a PBS compatibility layer; see here for more information.
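
As an illustration, the lines below request one whole dense compute node, first with a Slurm directive and then with a Torque/PBS-style directive; the PBS form assumes the compatibility layer accepts standard Torque resource syntax:

#SBATCH --nodes=1 --ntasks-per-node=28

#PBS -l nodes=1:ppn=28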

Request normal dense compute node

To request normal dense compute node(s), you can use --nodes=N --ntasks-per-node=M

Use the following line in your job script to request 1 core of a normal compute node:

#SBATCH --nodes=1 --ntasks-per-node=1

Use the following line in your job script to request the whole node (28 cores):

#SBATCH --nodes=1 --ntasks-per-node=28
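
Putting this together, a complete job script might look like the sketch below. The walltime, account code, and executable name are placeholders; substitute your own project account and program:

#!/bin/bash
#SBATCH --job-name=dense_example
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --time=01:00:00
# Placeholder project account; replace with your own
#SBATCH --account=PAS1234

srun ./my_program    # hypothetical MPI executable, one task per core

With --ntasks-per-node=28, srun launches one task per core; a serial or threaded program would instead request fewer tasks or use --cpus-per-task.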

Request GPU node

To request GPU node(s), you can use --nodes=N --ntasks-per-node=M --gpus-per-node=G

Use the following line in your job script to request the whole node (28 cores) with 1 GPU:

#SBATCH --nodes=1 --ntasks-per-node=28 --gpus-per-node=1
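
A minimal GPU job script sketch, with placeholder account code and application; nvidia-smi is included only to confirm that the allocated GPU is visible inside the job:

#!/bin/bash
#SBATCH --job-name=gpu_example
#SBATCH --nodes=1 --ntasks-per-node=28 --gpus-per-node=1
#SBATCH --time=01:00:00
# Placeholder project account; replace with your own
#SBATCH --account=PAS1234

nvidia-smi           # show the GPU allocated to this job
./my_gpu_program     # hypothetical GPU-enabled executable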

Request Huge Memory node

To request a huge memory node, specify a memory request larger than 118 GB using --mem=xG

Use the following line in your job script to request a whole huge memory node (48 cores) with 600 GB of memory:

#SBATCH --nodes=1 --ntasks=48 --mem=600G
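
A corresponding huge memory job script sketch, again with placeholder walltime, account code, and executable name:

#!/bin/bash
#SBATCH --job-name=hugemem_example
#SBATCH --nodes=1 --ntasks=48 --mem=600G
#SBATCH --time=01:00:00
# Placeholder project account; replace with your own
#SBATCH --account=PAS1234

./my_large_memory_program    # hypothetical memory-intensive executable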