Pitzer OS Upgrade and Early User Program

During the early access period, the programming environment and software packages will continue to be updated, and the system may go down or jobs may be killed with little or no warning. If your work cannot tolerate this level of instability, we recommend using other clusters instead.
This page is still under development, and information will be updated periodically. 

Pitzer OS upgrade

We are planning to upgrade the operating system on the Pitzer cluster from RHEL7 to RHEL9. The upgrade introduces several software-related changes compared to the RHEL7 environment currently used on Pitzer and provides access to modern tools and libraries, but it may also require adjustments to your workflows. Please refer to the sections below for details.

Key change

A key change is that you are now required to specify the module version when loading any module. For example, instead of using module load intel, you must use module load intel/2021.10.0. Failing to specify the version results in an error message.

Below is an example of the error message displayed when loading gcc without specifying a version:

$ module load gcc
Lmod has detected the following error:  These module(s) or extension(s) exist but cannot be loaded as requested: "gcc".

You encountered this error for one of the following reasons:
1. Missing version specification: On Pitzer, you must specify an available version.
2. Missing required modules: Ensure you have loaded the appropriate compiler and MPI modules.

Try: "module spider gcc" to view available versions or required modules.

If you need further assistance, please contact oschelp@osc.edu with the subject line "lmod error: gcc"
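
To avoid this error, first list the available versions and then load one explicitly. For example (the gcc version below is illustrative; substitute one actually reported by module spider):

$ module spider gcc          # list available versions and any required modules
$ module load gcc/12.3.0     # example version; use one shown by module spider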

Who is eligible to participate in the early user program?

All OSC clients with active projects are welcome to join the Early User Program to test their workflows in the RHEL9 environment ahead of the official launch, so your work can continue without interruption after the cutover. Note that the RHEL7 environment will no longer be available on Pitzer after the cutover. No application is required.

Early user period

June 23 - July 27, 2025 (tentative)

How do I log into Pitzer with RHEL9 during the early user program?

Once the early user program begins, you can log in using either of the following methods:

  • SSH Method

To log in to Pitzer with RHEL9 at OSC, SSH to the following hostname:

pitzer-rhel9.osc.edu  

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@pitzer-rhel9.osc.edu

From there, you will have access to the compilers and other software development tools in the RHEL9 environment. You can run programs interactively or through batch requests. Please use batch jobs for any compute-intensive or memory-intensive work.
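For example, a short interactive session can be requested with the standard Slurm salloc command (the partition and time below are illustrative):

$ salloc --partition=cpu --ntasks=1 --time=00:30:00    # example interactive request on an RHEL9 partition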

  • OnDemand Method

You can also use our OnDemand tool. First, log into OnDemand. Once logged in, click "Clusters" and then select ">_Pitzer RHEL9 Shell Access".

Scheduling policy during the early user program

• Memory limit

We strongly suggest matching your job's memory request to the available per-core memory when requesting OSC resources. See the 'Memory Limit' discussion on this Pitzer batch limit page for more information.
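For example, a minimal sketch of a per-core memory request in a job script (the values here are illustrative, not recommendations):

#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=4G    # example value; keep this within the per-core memory available on the node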

• Batch limit

To test the RHEL9 environment, please specify the partition for the job, either by adding the flag --partition=<partition-name> to the sbatch command at submission time or by adding this line to the job script: #SBATCH --partition=<partition-name>. For example, to allow any GPU node you can use --partition=gpu,gpu-exp. The available partitions are listed in the table below, followed by an example job script.
Partition       Max walltime limit     Min job size  Max job size  Note
cpu             1-00:00:00 (24 hours)  1 core        20 nodes      Standard nodes: 40 cores per node without GPU
cpu-exp         1-00:00:00 (24 hours)  1 core        36 nodes      Standard nodes: 48 cores per node without GPU
gpu             1-00:00:00 (24 hours)  1 core        4 nodes       Dual GPU nodes: 40 cores per node, 16GB V100s
gpu-exp         1-00:00:00 (24 hours)  1 core        6 nodes       Dual GPU nodes: 48 cores per node, 32GB V100s
gpu-quad-rhel9  1-00:00:00 (24 hours)  1 core        1 node        Quad GPU nodes, 32GB V100s
debug-cpu       1:00:00 (1 hour)       1 core        2 nodes       Standard nodes: 40 cores per node without GPU
debug-exp       1:00:00 (1 hour)       1 core        2 nodes       Standard nodes: 48 cores per node without GPU
gpudebug*       1:00:00 (1 hour)       1 core        2 nodes       Dual GPU nodes: 40 cores per node, 16GB V100s
gpudebug-exp    1:00:00 (1 hour)       1 core        2 nodes       Dual GPU nodes: 48 cores per node, 32GB V100s
hugemem-rhel9   1-00:00:00 (24 hours)  1 core        1 node        There is 1 huge memory node with RHEL9
largemem-rhel9  1-00:00:00 (24 hours)  1 core        1 node        There are 3 large memory nodes with RHEL9

* This partition will be changed during the cutover.
Total available nodes shown for Pitzer may fluctuate depending on the number of currently operational nodes and nodes reserved for specific projects.
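As an illustration, below is a minimal job script targeting the RHEL9 GPU partitions. The project code, module version, and executable name are placeholders; substitute your own:

#!/bin/bash
#SBATCH --job-name=rhel9-test      # example job name
#SBATCH --partition=gpu,gpu-exp    # RHEL9 GPU partitions from the table above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=1          # example GPU request
#SBATCH --time=01:00:00            # must be within the partition's walltime limit
#SBATCH --account=PAS1234          # placeholder project code; use your own

module load intel/2021.10.0        # example version from this page; verify with module spider
srun ./my_program                  # placeholder executable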
• Job limit

See the 'Job/Core Limit' discussion on this Pitzer batch limit page for more information.

How do the jobs get charged during the early user program?

Jobs on Pitzer with the RHEL9 environment are eligible for the early user program and will not be charged.

All jobs submitted after the early user program ends will be charged. The core-hour and GPU-hour charges on Pitzer are the same as the standard compute core-hour and GPU-hour charges before the OS upgrade. See the service costs page for more information, and please contact OSC Help if you have any questions about the charges.

How do I find my jobs submitted during the early user program?

For any queued or running jobs, you can check the job information with either Slurm commands (which are discussed here) or the OSC OnDemand Jobs app by clicking "Active Jobs" and choosing "Pitzer RHEL9" as the cluster name.
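For example, from a Pitzer RHEL9 login node (the job ID is a placeholder):

$ squeue -u $USER             # list your queued and running jobs
$ scontrol show job <jobid>   # detailed information about a specific job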

For any completed jobs, you can check the job information using the OSC XDMoD Tool. Choose "Pitzer" as "Resource." Check here for more information on how to use XDMoD.

How do I get help?

Please feel free to contact OSC Help if you have any questions.
