
Pitzer

TIP: Remember to check the menu to the right of the page for related pages with more information about Pitzer's specifics.

OSC's Pitzer cluster, installed in late 2018, is a Dell-built, Intel® Xeon® processor-based supercomputer.

Hardware

Detailed system specifications:

  • 260 Dell nodes
  • Dense Compute
    • 224 compute nodes (Dell PowerEdge C6420 two-socket servers with Intel Xeon 6148 (Skylake, 20 cores, 2.40GHz) processors, 192GB memory)
  • GPU Compute
    • 32 GPU compute nodes (Dell PowerEdge R740 two-socket servers with Intel Xeon 6148 (Skylake, 20 cores, 2.40GHz) processors, 384GB memory)
    • 2 NVIDIA Volta V100 GPUs per node, each with 16GB memory
  • Analytics
    • 4 huge-memory nodes (Dell PowerEdge R940 four-socket servers with Intel Xeon 6148 (Skylake, 20 cores, 2.40GHz) processors, 3TB memory, 2 x 1TB drives mirrored - 1TB usable)
  • 10,560 total cores
    • 40 cores/node and 192GB of memory/node
  • Mellanox EDR (100Gbps) InfiniBand networking
  • Theoretical system peak performance
    • 720 TFLOPS (CPU only)
  • 4 login nodes:
    • Intel Xeon 6148 (Skylake) CPUs
    • 40 cores/node and 384GB of memory/node

How to Connect

  • SSH Method

To log in to Pitzer at OSC, SSH to the following hostname:

pitzer.osc.edu 

You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@pitzer.osc.edu

You may see a warning message that includes the SSH key fingerprint. Verify that the fingerprint in the message matches one of the SSH key fingerprints listed here, then type yes.

From there, you are connected to a Pitzer login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests. We use control groups on the login nodes to keep them stable. Please use batch jobs for any compute-intensive or memory-intensive work. See the following sections for details.

  • OnDemand Method

You can also login to Pitzer at OSC with our OnDemand tool. The first step is to login to OnDemand. Then once logged in you can access Pitzer by clicking on "Clusters", and then selecting ">_Pitzer Shell Access".

Instructions on how to connect to OnDemand can be found at the OnDemand documentation page.

File Systems

Pitzer accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the Owens and Ruby clusters. Full details of the storage environment are available in our storage environment guide.

Home directories should be accessed through either the $HOME environment variable or the tilde notation (~username). Project directories are located at /fs/project. Scratch storage is located at /fs/scratch.
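
For example, assuming a hypothetical project code of PAS1234 (replace it with your own), you could move between these storage areas as follows:

cd $HOME                    # your home directory
cd /fs/project/PAS1234      # project storage (PAS1234 is a placeholder project code)
cd /fs/scratch/PAS1234      # scratch storage (PAS1234 is a placeholder project code)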

Software Environment

The module system on Pitzer is the same as on the Owens and Ruby systems. Use  module load <package>  to add a software package to your environment. Use  module list  to see what modules are currently loaded and  module avail  to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use  module spider . By default, you will have the batch scheduling software modules, the Intel compiler and an appropriate version of mvapich2 loaded.
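
For example, a session that inspects and adjusts your environment might look like the following; the package name netcdf is only an illustration, so use module avail or module spider to see what is actually installed:

module list              # show currently loaded modules
module avail             # list modules available to load
module spider netcdf     # search for modules hidden by dependencies or conflicts
module load netcdf       # add the package to your environment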

You can keep up to date on the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

Compiling Code to Use Advanced Vector Extensions (AVX2)

The Skylake processors that make up Pitzer support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use.

In our experience, the Intel and PGI compilers do a much better job than the GNU compilers at optimizing HPC code.

With the Intel compilers, use -xHost and -O2 or higher. With the GNU compilers, use -march=native and -O3. The PGI compilers use the highest available instruction set by default, so no additional flags are necessary.
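
As a sketch, compiling a hypothetical source file mycode.c with each compiler might look like the following:

icc -O2 -xHost -o mycode mycode.c          # Intel compiler
gcc -O3 -march=native -o mycode mycode.c   # GNU compiler
pgcc -o mycode mycode.c                    # PGI compiler (defaults to the highest available instruction set)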

This advice assumes that you are building and running your code on Pitzer. The executables will not be portable.

See the Pitzer Programming Environment page for details.

Batch Specifics

Refer to the documentation for our batch environment to understand how to use PBS on OSC hardware. Some specifics you will need to know to create well-formed batch scripts (a minimal example script follows this list):

  • The qsub syntax for node requests is the same on Pitzer as on Owens and Oakley
  • Most compute nodes on Pitzer have 40 cores/processors per node (ppn).  Huge-memory (analytics) nodes have 80 cores/processors per node.
  • Jobs on Pitzer may request partial nodes.  This is in contrast to Ruby but similar to Owens.
  • Pitzer has 6 debug nodes that are specifically configured for short (under 1 hour) debugging work.  These nodes have a walltime limit of 1 hour.
    • To schedule a debug node:
      #PBS -l nodes=1:ppn=40 -q debug
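
As a minimal sketch, a batch script requesting one full Pitzer node might look like the following; the job name, walltime, and program name are placeholders:

#!/bin/bash
#PBS -N example_job
#PBS -l nodes=1:ppn=40
#PBS -l walltime=1:00:00

cd $PBS_O_WORKDIR
./my_program

Submit the script with qsub, for example: qsub example_job.pbs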

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.

Pitzer Changelog