Pitzer early user information

Early Access Period

Pitzer is expected to be available to all OSC clients at a later date to be announced. A small number of projects will be given access during the preceding several weeks to help us with testing and to provide feedback. Early access is by application only; the application deadline has passed.

Early access will be granted in several stages beginning October 24, 2018. Applicants will receive notification of their access date via ServiceNow.

During the early access period there will be no charges for Pitzer jobs. Charging will begin when Pitzer opens for general access, at a rate of 1 RU per 10 core-hours.
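For example, a job that uses one full 40-core node for one hour will consume 40 core-hours and be charged 4 RUs.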

System Instability – Warning

Please be aware that the system may go down with little or no warning during the early access period. If your work won’t tolerate this level of instability, we recommend that you use Owens or Ruby instead.

Connecting to Pitzer

Early access to Pitzer will be granted to all members of selected projects. Access is controlled by the Linux secondary group “pitzerc.” If your project is selected for early access you will be added to this group and will be able to log in with:

ssh pitzer.osc.edu

Changes to qsub

The qsub syntax for node requests is the same on Pitzer as on Owens and Ruby.
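For example, a request for two full Pitzer nodes (40 cores each) uses the familiar syntax:

#PBS -l nodes=2:ppn=40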

Job Performance Reports

Note:  You should run performance reports on only a small number of moderate-size jobs. We have a limited number of licenses.

We are requesting that all early users on Pitzer provide OSC with performance reports for a sampling of their jobs. The reports are single-page html documents generated by Arm's perf-report tool. They provide high-level summaries that you can use to understand and improve the performance of your jobs.

OSC staff will review this information to help us understand the overall performance of the system. We will also provide assistance to individual users and projects to improve job efficiency.

ARM tools

Generating a performance report requires only a simple, minimally invasive modification to your job script. In all cases you must load the arm-pr module:

module load arm-pr
Applications started with mpiexec/mpirun

If you normally run your application as

mpiexec <mpi args> <program> <program args>

you should run it like this:

perf-report --np=<num procs> --mpiargs="<mpi args>" <program> <program args>

The --mpiargs option can be omitted if you aren't passing arguments to mpiexec. The --np option is required; it is the total number of MPI processes to be started.
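For example, a minimal job-script sketch (the executable ./mycode and its input are placeholders):

#!/bin/bash
#PBS -l nodes=2:ppn=40
#PBS -l walltime=00:30:00
cd $PBS_O_WORKDIR
module load arm-pr
# perf-report starts the MPI processes itself, so mpiexec is not called directly
perf-report --np=80 ./mycode input.dat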

Serial and threaded applications

If your application does not use MPI, you should run it like this:

perf-report <program> <program args>
If it doesn't work -- special cases

1.  If your program is statically linked you may need to compile with extra flags to allow perf-report to work. Contact oschelp@osc.edu for assistance.

2.  If you have an MPI program but you don't explicitly use mpiexec or mpirun, try this:

perf-report <program> <program args>

If it doesn't work, contact oschelp@osc.edu for assistance.

Retrieving your report

These commands will generate html and plain text files for the report, for example wavec_20p_2016-02-05_12-46.html. You can open the report in html format using

firefox wavec_20p_2016-02-05_12-46.html

Note:  If your job runs in $TMPDIR you'll need to add a line to your script to copy the performance report back to your working directory. You can specify the name and/or location of the report files using the "--output=" option.
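For example, the end of a job script that runs in $TMPDIR might look like this (a minimal sketch; the executable name is a placeholder):

cd $TMPDIR
perf-report --np=40 ./mycode
# by default the report files are written to the current directory
cp *.html *.txt $PBS_O_WORKDIR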

For more information, see:

https://www.osc.edu/resources/available_software/software_list/ARM/arm_performance_reports

https://developer.arm.com/products/software-development-tools/hpc/documentation

Intel tools

Generating a performance report requires only a simple, minimally invasive modification to your job script. In all cases you must load an intel module:

module load intel
Applications started with mpiexec/mpirun

If you normally run your application as

mpiexec <mpi args> <program> <program args>

you should run it like this:

mpiexec <mpi args> aps <program> <program args>
Serial and threaded applications

If your application does not use MPI, you should run it like this:

aps <program> <program args>
If it doesn't work -- special cases

1.  If your program is statically linked you may need to compile with extra flags to allow aps to work. Contact oschelp@osc.edu for assistance.

2.  If you have an MPI program but you don't explicitly use mpiexec or mpirun, try this:

aps <program> <program args>

If it doesn't work, contact oschelp@osc.edu for assistance.

Generating your report

These commands will generate a results directory of the form aps_result_YYYYMMDD/. Generate the report from the directory that contains this subdirectory using

aps --report=./aps_result_YYYYMMDD

to create the file aps_report_YYYYMMDD_HHMMSS.html.
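Putting the steps together, a minimal sketch (the executable, process count, and result-directory date are placeholders):

module load intel
mpiexec -np 80 aps ./mycode input.dat
# after the run completes, generate the html report from the results directory
aps --report=./aps_result_20181121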

Retrieving your report

These commands will generate an html file for the report, for example aps_report_20181121_104535.html. You can open the report in html format using

firefox aps_report_20181121_104535.html

Note:  If your job runs in $TMPDIR you'll need to add a line to your script to copy the aps results back to your working directory.

For more information, see:

https://software.intel.com/en-us/get-started-with-application-performance-snapshot

Getting Support

Please send your support requests to oschelp@osc.edu rather than to individual staff members. You can help us out by formatting the subject of your email as follows:

[pitzer][username] Informative description including software package if applicable

Include details of the problem with job IDs, complete error messages, and commands you executed leading up to the problem.

We know you’re going to discover problems that we didn’t encounter during our testing. We appreciate your patience as we work to fix them.

Hardware Availability

The following hardware will be available during the early access period:

248 Skylake nodes, each with 40 cores and 192 GB memory

4 big-memory Skylake nodes, each with 80 cores and 3 TB memory

4 Skylake login nodes, 40 cores each

Note:  32 of the 40-core compute nodes each have 2 NVIDIA V100 GPUs.

Software Availability

The table below shows software that will be available at the start of the early access period. Installation of other software will be ongoing. The supercomputing software pages on the OSC website will be kept up to date as new software is installed.

If you find that software is missing or misconfigured, please report it as described above under "Getting Support."


Performance Tools

Intel VTune

First, make sure that you have an Intel module loaded.
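A minimal command-line sketch, assuming the intel module puts VTune's amplxe-cl driver on your path (the analysis type, result directory, and executable are placeholders):

module load intel
amplxe-cl -collect hotspots -result-dir vtune_results ./mycode input.dat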

Intel ITAC

First, make sure that you have an Intel module loaded.
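A minimal sketch for collecting a trace with Intel MPI, whose -trace flag enables ITAC trace collection (the executable and process count are placeholders; contact oschelp@osc.edu if the trace libraries are not found):

module load intel
mpiexec -trace -np 40 ./mycode input.dat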

ARM MAP

Begin by loading the MAP module

module load arm-map
Applications started with mpiexec/mpirun

If you normally run your application as

mpiexec <mpi args> <program> <program args>

you should run it like this:

map --profile --np=<num procs> --mpiargs="<mpi args>" <program> <program args>

The --mpiargs option can be omitted if you aren't passing arguments to mpiexec. The --np option is required; it is the total number of MPI processes to be started.
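For example, a minimal job-script sketch (the executable and its input are placeholders):

#!/bin/bash
#PBS -l nodes=2:ppn=40
#PBS -l walltime=00:30:00
cd $PBS_O_WORKDIR
module load arm-map
# map starts the MPI processes itself, so mpiexec is not called directly
map --profile --np=80 ./mycode input.dat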

Serial and threaded applications

If your application does not use MPI, you should run it like this:

map --profile --no-mpi <program> <program args>
If it doesn't work -- special cases

1.  If your program is statically linked you may need to compile with extra flags to allow map to work. Contact oschelp@osc.edu for assistance.

2.  If you have an MPI program but you don't explicitly use mpiexec or mpirun, try this:

map <program> <program args>

If it doesn't work, contact oschelp@osc.edu for assistance.

Retrieving your report

These commands will generate a .map file, for example wavec_20p_2016-02-05_12-46.map. You can open it in the MAP GUI using

map wavec_20p_2016-02-05_12-46.map

Note:  If your job runs in $TMPDIR you'll need to add a line to your script to copy the performance report back to your working directory. You can specify the name and/or location of the report files using the "--output=" option.

For more information, see:

https://www.osc.edu/resources/available_software/software_list/ARM/arm_map

https://developer.arm.com/products/software-development-tools/hpc/documentation

ARM DDT

Begin by loading the DDT module

module load arm-ddt
Applications started with mpiexec/mpirun

If you normally run your application as

mpiexec <mpi args> <program> <program args>

you should run it like this:

ddt --offline --np=<num procs> --mpiargs="<mpi args>" <program> <program args>

The --mpiargs option can be omitted if you aren't passing arguments to mpiexec. The --np option is required; it is the total number of MPI processes to be started.
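For example, a minimal job-script sketch (the executable and its input are placeholders):

#!/bin/bash
#PBS -l nodes=1:ppn=40
#PBS -l walltime=00:30:00
cd $PBS_O_WORKDIR
module load arm-ddt
# ddt starts the MPI processes itself, so mpiexec is not called directly
ddt --offline --np=40 ./mycode input.dat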

Serial and threaded applications

If your application does not use MPI, you should run it like this:

ddt --offline --no-mpi <program> <program args>
If it doesn't work -- special cases

1.  If your program is statically linked you may need to compile with extra flags to allow ddt to work. Contact oschelp@osc.edu for assistance.

2.  If you have an MPI program but you don't explicitly use mpiexec or mpirun, try this:

ddt <program> <program args>

If it doesn't work, contact oschelp@osc.edu for assistance.

Retrieving your report

These commands will generate html and plain text files for the report, for example wavec_20p_2016-02-05_12-46.html. You can open the report in html format using

firefox wavec_20p_2016-02-05_12-46.html

Note:  If your job runs in $TMPDIR you'll need to add a line to your script to copy the performance report back to your working directory. You can specify the name and/or location of the report files using the "--output=" option.

For more information, see:

https://www.osc.edu/resources/available_software/software_list/ARM/arm_map

https://developer.arm.com/products/software-development-tools/hpc/documentation