Search Documentation

Pitzer
Beginning April 30, all College of Medicine (CoM) projects without a separate MOU established with OSC will be limited to running exclusively on the Ascend cluster, specifically on the nextgen partition. All CoM jobs run on Ascend’s nextgen partition at zero cost, with priority scheduling.
Pitzer

This document is obsolete and is kept as a reference to the previous Pitzer programming environment. Please refer here for the latest version.

Pitzer

In late 2018, OSC installed 260 Intel® Xeon® 'Skylake' processor-based nodes as the original Pitzer cluster. In September 2020, OSC installed an additional 398 Intel® Xeon® 'Cascade Lake' processor-based nodes as part of the Pitzer Expansion cluster.

Owens, Pitzer

This page documents the known issues for migrating jobs from Torque to Slurm.

$PBS_NODEFILE and $SLURM_JOB_NODELIST

Please be aware that $PBS_NODEFILE is a file, while $SLURM_JOB_NODELIST is a string variable.

The Slurm analog of cat $PBS_NODEFILE is srun hostname | sort -n
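To illustrate the difference, the sketch below expands a $SLURM_JOB_NODELIST-style string (here a hypothetical value, hard-coded since no scheduler is running) into one host per line, like the contents of a Torque $PBS_NODEFILE. On a real cluster, scontrol show hostnames "$SLURM_JOB_NODELIST" does this natively.

```shell
# Hypothetical nodelist value; in a real job Slurm sets this variable.
SLURM_JOB_NODELIST='p0[001-003]'

# Expand a simple "prefix[start-end]" nodelist into one host per line.
prefix=${SLURM_JOB_NODELIST%%\[*}          # "p0"
range=${SLURM_JOB_NODELIST#*\[}            # "001-003]"
range=${range%\]}                          # "001-003"
start=${range%-*}                          # "001"
end=${range#*-}                            # "003"
for i in $(seq -w "$start" "$end"); do     # -w keeps the zero padding
    printf '%s%s\n' "$prefix" "$i"
done
# prints p0001, p0002, p0003, one per line
```

This handles only the single-range form; real nodelists can mix comma-separated ranges, which is why the scontrol helper is preferable on a cluster.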

Ascend, Cardinal, Pitzer

How to Submit Interactive Jobs

There are different ways to submit interactive jobs.

Using sinteractive

You can use the custom tool sinteractive as follows:

Using nbgrader in Jupyter

Install nbgrader

You can install nbgrader in a notebook:
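A minimal sketch of the install step (network access assumed; the --user flag keeps the install in your home directory):

```shell
# Inside a notebook cell, prefix the command with "!" to escape to the shell;
# from a terminal on a login node, run it directly:
pip install --user nbgrader
```

See the nbgrader documentation for enabling its notebook extensions after installation.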

OSC provides an isolated and customized Jupyter environment for each classroom project that requires Jupyter Notebook or JupyterLab.

Ascend, Cardinal, Pitzer

Submit Jobs

Use                     Torque/Moab Command   Slurm Equivalent
Submit batch job        qsub <jobscript>      sbatch <jobscript>
Submit interactive job  qsub -I [options]     sinteractive [options] or salloc [options]
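As a concrete sketch of the batch column, the snippet below writes a minimal Slurm jobscript; the account PAS1234 and the resource values are placeholders, not taken from this page.

```shell
# Write a minimal Slurm jobscript (placeholder account and limits).
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --account=PAS1234
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
srun hostname
EOF

# Submission commands, shown commented out since they require a cluster:
# sbatch job.sh                  # batch                    (Torque: qsub job.sh)
# sinteractive -A PAS1234        # interactive, OSC wrapper (Torque: qsub -I)
# salloc -A PAS1234 -t 00:10:00  # interactive, plain Slurm (Torque: qsub -I)
```

The #SBATCH directives play the same role as Torque's #PBS directives: they are comments to the shell but are parsed by the scheduler at submission time.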

Owens, Pitzer
