TIP: Remember to check the menu to the right of the page for related pages with more information about Pitzer's specifics.

OSC's Pitzer cluster, installed in late 2018, is a Dell-built, Intel® Xeon® processor-based supercomputer.


Photo of Pitzer Cluster

Detailed system specifications:

  • 260 Dell Nodes
  • Dense Compute
    • 224 compute nodes (Dell PowerEdge C6420 two-socket servers with Intel Xeon 6148 (Skylake, 20 cores, 2.40GHz) processors, 192GB memory)

  • GPU Compute

    • 32 GPU compute nodes (Dell PowerEdge R740 two-socket servers with Intel Xeon 6148 (Skylake, 20 cores, 2.40GHz) processors, 384GB memory)

      • 2 NVIDIA Volta V100 GPUs per node, each with 16GB memory

  • Analytics

    • 4 huge memory nodes (Dell PowerEdge R940 four-socket servers with Intel Xeon 6148 (Skylake, 20 cores, 2.40GHz) processors, 3TB memory, 2 x 1TB drives mirrored - 1TB usable)

  • 10,560 total cores
    • 40 cores/node & 192GB of memory/node
  • Mellanox EDR (100Gbps) Infiniband networking
  • Theoretical system peak performance
    • 720 TFLOPS (CPU only)
  • 4 login nodes:
    • Intel Xeon 6148 (Skylake) CPUs
    • 40 cores/node and 384GB of memory/node

How to Connect

  • SSH Method

To log in to Pitzer at OSC, ssh to the following hostname:

pitzer.osc.edu

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@pitzer.osc.edu

You may see a warning message that includes an SSH key fingerprint. Verify that the fingerprint in the message matches one of the SSH key fingerprints listed here, then type yes.

From there, you are connected to a Pitzer login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests. We use control groups on the login nodes to keep them stable. Please use batch jobs for any compute-intensive or memory-intensive work. See the following sections for details.

  • OnDemand Method

You can also log in to Pitzer at OSC with our OnDemand tool. The first step is to log in to OnDemand. Once logged in, you can access Pitzer by clicking on "Clusters" and then selecting ">_Pitzer Shell Access".

Instructions on how to connect to OnDemand can be found at the OnDemand documentation page.

File Systems

Pitzer accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the old clusters. Full details of the storage environment are available in our storage environment guide.

Software Environment

The module system on Pitzer is the same as on the Owens and Ruby systems. Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider. By default, you will have the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 loaded.
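As a sketch, a typical interactive session on a Pitzer login node might use the module commands like this (the hdf5 package and the intel/gnu swap are illustrative assumptions, not a statement of what is installed):

```shell
# Show the modules loaded by default (batch software, Intel compiler, mvapich2)
module list

# Browse everything currently loadable
module avail

# Search for a package that may be hidden by dependencies or conflicts
# (hdf5 is an illustrative package name)
module spider hdf5

# Load a package into the current environment
module load hdf5
```

Module changes affect only the current shell session; batch jobs should load the modules they need in the job script itself.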

You can keep up to date on the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

Compiling Code to Use Advanced Vector Extensions (AVX2)

The Skylake processors that make up Pitzer support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use.

In our experience, the Intel and PGI compilers do a much better job than the gnu compilers at optimizing HPC code.

With the Intel compilers, use -xHost and -O2 or higher. With the gnu compilers, use -march=native and -O3. The PGI compilers use the highest available instruction set by default, so no additional flags are necessary.

This advice assumes that you are building and running your code on Pitzer. The executables will not be portable.  Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

See the Pitzer Programming Environment page for details.

Batch Specifics

Refer to the documentation for our batch environment to understand how to use PBS on OSC hardware. Some specifics you will need to know to create well-formed batch scripts:

  • The qsub syntax for node requests is the same on Pitzer as on Owens and Oakley
  • Most compute nodes on Pitzer have 40 cores/processors per node (ppn).  Huge-memory (analytics) nodes have 80 cores/processors per node.
Due to the ambiguity of requesting a node with 80 cores, one must also request 3TB of memory for the huge memory node job to be accepted by the scheduler. e.g. #PBS -l nodes=1:ppn=80,mem=3000GB
  • Jobs on Pitzer may request partial nodes.  This is in contrast to Ruby but similar to Owens.
  • Pitzer has 6 debug nodes which are specifically configured for short (< 1 hour) debugging-type work. These nodes have a walltime limit of 1 hour.
    • To schedule a debug node:
      #PBS -l nodes=1:ppn=40 -q debug
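Putting these specifics together, a minimal batch script for one full Pitzer compute node might look like the following sketch (the job name, walltime, and the executable a.out are placeholder values):

```shell
#PBS -N example_job          # job name (placeholder)
#PBS -l nodes=1:ppn=40       # one full Pitzer compute node (40 cores)
#PBS -l walltime=1:00:00     # requested walltime (placeholder)
#PBS -j oe                   # merge stdout and stderr into one file

# Start from the directory the job was submitted from
cd $PBS_O_WORKDIR

# The default environment already provides the Intel compiler and mvapich2;
# load any additional modules your code needs here.
module list

# Run an MPI executable (a.out is a placeholder) across the 40 cores
mpiexec ./a.out
```

Submit the script with qsub; for a huge-memory node, change the resource line to nodes=1:ppn=80,mem=3000GB as described above.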

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.

Pitzer Known Issues (unresolved)

  • cuda-gdb segmentation fault on startup (Owens, Pitzer, Software)

    The CUDA debugger, cuda-gdb, can raise a segmentation fault immediately upon execution. A workaround before executing cuda-gdb is to unload the xalt module, e.g.:

    module unload... Read more

    Posted: 2 months 1 week ago. Updated: 2 months 1 week ago.

  • Jobs report 'excessive memory usage' message (Owens, Pitzer)

    ... Read more

    Posted: 3 months 3 weeks ago. Updated: 2 months 2 weeks ago.

  • Error 'libim_client.so: undefined reference to uuid@' with MVAPICH2 in Conda environment (Owens, Pitzer, Software)

    Users may encounter an error like 'libim_client.so: undefined reference to `uuid_unparse@UUID_1.0' while compiling MPI applications with mvapich2 in some Conda environments. We found pre-installed... Read more

    Posted: 3 months 2 weeks ago. Updated: 3 months 2 weeks ago.

  • Incorrect MPI launcher and compiler wrappers with Conda environments python/2.7-conda5.2 and python/3.6-conda5.2 (Owens, Pitzer, Ruby, Software)

    Users may encounter under-performing MPI jobs or failures when compiling MPI applications if they are using Conda from the system. We found a pre-installed mpich2 package in some Conda environments... Read more

    Posted: 3 months 2 weeks ago. Updated: 3 months 2 weeks ago.

  • Large MPI job startup hang with mvapich2/2.3 and mvapich2/2.3.1 (Owens, Pitzer, Software)

    We have found that large MPI jobs may hang at startup with mvapich2/2.3 and mvapich2/2.3.1 (on any compiler dependency) due to a known bug that has been fixed in release 2... Read more

    Posted: 7 months 3 weeks ago. Updated: 7 months 3 weeks ago.

  • Gaussian g16b01 G4 problem (Owens, Pitzer, Software)

    Gaussian-4 (G4) theory calculations in Gaussian 16 Rev. B.01 can produce erratic results. A workaround is to use Gaussian 16 Rev. A.03 or Rev. C.01, e.g.:

    module load gaussian/... Read more

    Posted: 1 year 5 months ago. Updated: 9 months 1 week ago.

  • Error when downloading SRA data on computing nodes (Owens, Pitzer, Software)

    NCBI blocks any connection from computing nodes because they are behind firewalls. Thus OSC users cannot use SRA tools to download data "on-the-fly" at runtime on computing nodes, e.g. 'fastq-dump... Read more

    Posted: 11 months 2 weeks ago. Updated: 11 months 2 weeks ago.

Pitzer Changelog