
Migrating jobs from other clusters

This page includes a summary of differences to keep in mind when migrating jobs from other clusters to Pitzer. 

Guidance for Oakley Users

Hardware Specifications

All figures below are per node.

Most compute node
  Pitzer: 40 cores and 192 GB of RAM
  Oakley: 12 cores and 48 GB of RAM
Large memory node
  Pitzer: N/A
  Oakley: 12 cores and 192 GB of RAM (8 nodes in this class)
Huge memory node
  Pitzer: 80 cores and 3.0 TB of RAM (4 nodes in this class)
  Oakley: 32 cores and 1.0 TB of RAM (1 node in this class)

File Systems

Pitzer accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the Oakley cluster.

Home directories
  Both clusters: accessed through either the $HOME environment variable or the tilde notation ( ~username )
  Pitzer: does NOT have symlinks allowing use of the old file system paths. Please modify your scripts with the new paths before you submit jobs to the Pitzer cluster.
  Oakley: has symlinks allowing use of the old file system paths. No action is required on your part to continue using your existing job scripts on the Oakley cluster.
Project directories
  Both clusters: located at /fs/project
Scratch storage
  Both clusters: located at /fs/scratch

See the 2016 Storage Service Upgrades page for details. 

Software Environment

Pitzer uses the same module system as Oakley.

Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.
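For example, a typical sequence might look like the following (gromacs is used here only as an illustrative package name):

  module load gromacs      # add the package to your environment
  module list              # show the modules currently loaded
  module avail             # list modules available to load
  module spider gromacs    # search for modules hidden by dependencies or conflicts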

You can keep up to date on the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

Programming Environment

Like Oakley, Pitzer supports three compilers: Intel, PGI, and GNU. The default is Intel. To switch to a different compiler, use module swap intel gnu or module swap intel pgi.

Pitzer also uses the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed InfiniBand interconnect. In addition, Pitzer supports the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it.
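As an illustration only (the file names below are placeholders), an MPI code might be compiled through the MVAPICH2 wrappers with the default Intel compiler, using -xCORE-AVX2 to ask the Intel compiler to target AVX2:

  mpicc -O2 -xCORE-AVX2 -o my_mpi_app my_mpi_app.c     # C source
  mpif90 -O2 -xCORE-AVX2 -o my_mpi_app my_mpi_app.f90  # Fortran source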

See the Pitzer Programming Environment page for details.

PBS Batch-Related Commands

The qpeek command is not needed on Pitzer.

On Oakley, a job’s stdout and stderr data streams, which normally show up on the screen, are written to log files. These log files are stored on a server until the job ends, so you can’t look at them directly. The  qpeek  command allows you to peek at their contents. If you used the PBS header line to join the stdout and stderr streams ( #PBS -j oe ), the two streams are combined in the output log.

On Pitzer, a job’s stdout and stderr data streams are written to log files stored in the current working directory, i.e., $PBS_O_WORKDIR. You will see the log files immediately after your job starts.
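A minimal batch script illustrating this behavior might look like the following (the job name, resource request, and program are placeholders):

  #PBS -N example_job
  #PBS -l nodes=1:ppn=40
  #PBS -l walltime=1:00:00
  #PBS -j oe                  # combine stdout and stderr into a single log file

  cd $PBS_O_WORKDIR           # the log file appears in this directory while the job runs
  ./my_program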

In addition, preemption and hyper-threading jobs are supported on Pitzer. See this page for more information.

Accounting

The Pitzer cluster will be charged at a rate of 1 RU per 10 core-hours.

The Oakley cluster will be charged at a rate of 1 RU per 20 core-hours.
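For example, a job that uses 40 cores for 5 hours consumes 200 core-hours, which corresponds to 20 RUs on Pitzer but only 10 RUs on Oakley.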

Like Oakley, Pitzer will accept partial-node jobs and charge you for a number of cores proportional to the amount of memory your job requests.
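For example, a partial-node job that requests roughly half of a standard Pitzer node's memory (about 96 GB of the 192 GB available) would be charged as if it used about half of the node's 40 cores.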

Below is a comparison of job limits between Pitzer and Oakley:

Per user
  Pitzer: up to 128 concurrently running jobs and/or up to 2040 processors/cores in use
  Oakley: up to 256 concurrently running jobs and/or up to 2040 processors/cores in use
Per group
  Pitzer: up to 192 concurrently running jobs and/or up to 2040 processors/cores in use
  Oakley: up to 384 concurrently running jobs and/or up to 2040 processors/cores in use

Please see Queues and Reservations for Pitzer and Batch Limit Rules for more details.

Guidance for Owens Users

Hardware Specifications

All figures below are per node.

Most compute node
  Pitzer: 40 cores and 192 GB of RAM
  Owens: 28 cores and 125 GB of RAM
Huge memory node
  Pitzer: 80 cores and 3.0 TB of RAM (4 nodes in this class)
  Owens: 48 cores and 1.5 TB of RAM, 12 x 2 TB drives (16 nodes in this class)

File Systems

Pitzer accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory, project space, and scratch space as on the Owens cluster.

Software Environment

Pitzer uses the same module system as Owens.

Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.

You can keep up to date on the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

Programming Environment

Like Owens, Pitzer supports three compilers: Intel, PGI, and GNU. The default is Intel. To switch to a different compiler, use module swap intel gnu or module swap intel pgi.

Pitzer also uses the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed InfiniBand interconnect, and supports the Advanced Vector Extensions (AVX2) instruction set.

See the Pitzer Programming Environment page for details.

PBS Batch-Related Commands

As on Owens, a job’s stdout and stderr data streams on Pitzer are written to log files stored in the current working directory, i.e., $PBS_O_WORKDIR. You will see the log files immediately after your job starts.
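For example, once the job has started you can inspect the logs directly from that directory (the job name and ID below are hypothetical):

  cd $PBS_O_WORKDIR
  ls                            # shows, e.g., example_job.o123456 and example_job.e123456
  tail -f example_job.o123456   # follow the stdout log while the job runs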

In addition, preemption and hyper-threading jobs are supported on Pitzer. See this page for more information.

Accounting

As on Owens, the Pitzer cluster will be charged at a rate of 1 RU per 10 core-hours. Below is a comparison of job limits between Pitzer and Owens:

Per user
  Pitzer: up to 128 concurrently running jobs and/or up to 2040 processors/cores in use
  Owens: up to 256 concurrently running jobs and/or up to 3080 processors/cores in use
Per group
  Pitzer: up to 192 concurrently running jobs and/or up to 2040 processors/cores in use
  Owens: up to 384 concurrently running jobs and/or up to 4620 processors/cores in use

Please see Queues and Reservations for Pitzer and Batch Limit Rules for more details.

 

Guidance for Ruby Users

Hardware Specifications

All figures below are per node.

Most compute node
  Pitzer: 40 cores and 192 GB of RAM
  Ruby: 20 cores and 64 GB of RAM
Huge memory node
  Pitzer: 80 cores and 3.0 TB of RAM (4 nodes in this class)
  Ruby: 32 cores and 1 TB of RAM (1 node in this class)

File Systems

Pitzer accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the Ruby cluster.

Home directories
  Both clusters: accessed through either the $HOME environment variable or the tilde notation ( ~username )
  Pitzer: does NOT have symlinks allowing use of the old file system paths. Please modify your scripts with the new paths before you submit jobs to the Pitzer cluster.
  Ruby: has symlinks allowing use of the old file system paths. No action is required on your part to continue using your existing job scripts on the Ruby cluster.
Project directories
  Both clusters: located at /fs/project
Scratch storage
  Both clusters: located at /fs/scratch

See the 2016 Storage Service Upgrades page for details. 

Software Environment

Pitzer uses the same module system as Ruby.

Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.

You can keep up to date on the software packages that have been made available on Pitzer by viewing the Software by System page and selecting the Pitzer system.

Programming Environment

Like Ruby, Pitzer supports three compilers: Intel, PGI, and GNU. The default is Intel. To switch to a different compiler, use module swap intel gnu or module swap intel pgi.

Pitzer also uses the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed InfiniBand interconnect. In addition, Pitzer supports the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it.

See the Pitzer Programming Environment page for details.

PBS Batch-Related Commands

The qpeek command is not needed on Pitzer.

On Ruby, a job’s stdout and stderr data streams, which normally show up on the screen, are written to log files. These log files are stored on a server until the job ends, so you can’t look at them directly. The   qpeek  command allows you to peek at their contents. If you used the PBS header line to join the stdout and stderr streams ( #PBS -j oe ), the two streams are combined in the output log.

On Pitzer, a job’s stdout and stderr data streams are written to log files stored in the current working directory, i.e., $PBS_O_WORKDIR. You will see the log files immediately after your job starts.

In addition, preemption and hyper-threading jobs are supported on Pitzer. See this page for more information.

Accounting

The Pitzer cluster will be charged at a rate of 1 RU per 10 core-hours.

The Ruby cluster will be charged at a rate of 1 RU per 20 core-hours.

However, Pitzer will accept partial-node jobs and charge you for a number of cores proportional to the amount of memory your job requests. By contrast, Ruby only accepts full-node jobs and charges for the whole node.
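For example, a job that needs only a few cores’ worth of memory would be charged for just those cores on Pitzer, whereas on Ruby it would be charged for the entire 20-core node.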

Below is a comparison of job limits between Pitzer and Ruby:

Per user
  Pitzer: up to 128 concurrently running jobs and/or up to 2040 processors/cores in use
  Ruby: up to 40 concurrently running jobs and/or up to 800 processors/cores in use
Per group
  Pitzer: up to 192 concurrently running jobs and/or up to 2040 processors/cores in use
  Ruby: up to 80 concurrently running jobs and/or up to 1600 processors/cores in use

Please see Queues and Reservations for Pitzer and Batch Limit Rules for more details.

 