Software List

Ohio Supercomputer Center (OSC) has a variety of software applications to support all aspects of scientific research. We are actively updating this documentation to ensure it matches the state of the supercomputers. This page is currently missing some content; use module spider on each system for a comprehensive list of available software.


Abaqus

ABAQUS is a finite element analysis program owned and supported by SIMULIA, the Dassault Systèmes brand for Realistic Simulation.

Availability and Restrictions

Versions

The available programs are ABAQUS/CAE, ABAQUS/Standard and ABAQUS/Explicit. The versions currently available at OSC are:

Version  Owens  Notes
6.14     X
2016     X      Versioning scheme was changed
2017     X
2018     X
2020     X*
2021     X
2022     X
* Current default version

You can use  module spider abaqus to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

OSC's ABAQUS license can only be used for educational, institutional, instructional, and/or research purposes. Only users who are faculty, research staff, or students at the following institutions are permitted to utilize OSC's license:

  • The Ohio State University
  • University of Toledo
  • University of Cincinnati
  • University of Dayton
  • University of Akron
  • Miami University

Users from additional degree-granting academic institutions may request to be added to this list, at a cost, by contacting OSC Help.

The use of ABAQUS for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction. 


Access for Commercial Users

Contact OSC Help for getting access to ABAQUS if you are a commercial user.

Publisher/Vendor/Repository and License Type

Dassault Systèmes, Commercial

Usage

Token Usage

ABAQUS software usage is monitored through a token-based license manager. Every time you run an ABAQUS job, tokens are checked out from our pool for your usage. To ensure your job starts only when its required ABAQUS tokens are available, it is important to include a software flag within your job script's SBATCH directives. A minimum of 5 tokens is required per job, so a 1 node, 1 processor ABAQUS job would need the following SBATCH software flag:  #SBATCH -L abaqus@osc:5 . Jobs using more cores will need to request more tokens, as calculated with the formula  M = int(5 x N^0.422) , where N is the total number of cores. For common requests, you can refer to the following table:

Cores (nodes x cores each):  1  2  3  4  6   8   12  16  28  32  56
Tokens needed:               5  6  7  8  10  12  14  16  20  21  27
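
For core counts not listed, you can evaluate the formula from the command line; a minimal sketch (the core count here is just an example):

cores=56
tokens=$(awk -v n="$cores" 'BEGIN { printf "%d\n", 5 * n^0.422 }')
echo "#SBATCH -L abaqus@osc:${tokens}"   # prints abaqus@osc:27 for 56 cores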

Usage on Owens

Set-up on Owens

To load the default version of ABAQUS, use  module load abaqus . To select a particular software version, use  module load abaqus/version . For example, use  module load abaqus/2022  to load ABAQUS version 2022 on Owens.

Using ABAQUS

Example input data files are available with the ABAQUS release. The  abaqus fetch  utility is used to extract these input files for use. For example, to fetch input files for one of the sample problems including 4 input files, type:

abaqus fetch job=knee_bolster 

abaqus fetch job=knee_bolster_ef1 

abaqus fetch job=knee_bolster_ef2 

abaqus fetch job=knee_bolster_ef3 

Also, use the  abaqus help  utility to list all the abaqus execution procedures.

Batch Usage on Owens

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your ABAQUS analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session
For an interactive batch session on Owens, one can run the following command:
sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00 -L abaqus@osc:20
which gives you 28 cores ( -N 1 -n 28 ) for 1 hour ( -t 1:00:00 ). You may adjust the numbers per your need.
Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.

Below is the example batch script ( job.txt ) for a serial run:

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -L abaqus@osc:5
#SBATCH --account=<project-account>
#
# The following lines set up the ABAQUS environment
#
module load abaqus
#
cp *.inp $TMPDIR
cd $TMPDIR
#
# Run ABAQUS
#
abaqus job=knee_bolster interactive
#
# Now, copy data (or move) back once the simulation has completed
#
cp * $SLURM_SUBMIT_DIR

In order to run it via the batch system, submit the  job.txt  file with the command:  sbatch job.txt

NOTE:

  • Make sure to copy all the files needed (input files, restart files, user subroutines, python scripts etc.) from your work directory ( $SLURM_SUBMIT_DIR ) to  $TMPDIR , and copy your results back at the end of your script. Running your job on  $TMPDIR  ensures maximum efficiency.
  • The keyword  interactive  is required in the execution line  abaqus job=knee_bolster interactive  for the following reason: If left off, ABAQUS will background the simulation process. Backgrounding a process in the OSC environment will place it outside of the batch job and it will receive the default 1 hour of CPU time and corresponding default memory limits. The keyword  interactive  in this case simply tells ABAQUS not to return until the simulation has completed.
  • The name of the input file is sometimes omitted in the execution line, which may work fine if you've copied only the input files for one specific model. However, it is better practice to designate the main input file explicitly by adding  input=<my_input_file_name>.inp  to the execution line:  abaqus job=knee_bolster input=<my_input_file_name>.inp interactive .
  • Define  nodes=1  (1<=cores<=28 for Owens) for a serial run.
  • If cores > 1, add  cpus=<n>  to the execution line, where n=cores:  abaqus job=test input=<my_input_file_name1>.inp cpus=<n> interactive .
Non-interactive Batch Job (Parallel Run)
Note: abaqus will not run correctly in parallel with input files in $TMPDIR!  Use the scratch file system.

Below is an example batch script ( job.txt ) for a parallel run:

#!/bin/bash 
#SBATCH --time=1:00:00 
#SBATCH --nodes=2 --ntasks-per-node=28 --gres=pfsdir
#SBATCH -L abaqus@osc:27
#SBATCH --account=<project-account>
#
# The following lines set up the ABAQUS environment
#
module load abaqus
#
# Copy input files to /fs/scratch and run Abaqus there
#
cp *.inp $PFSDIR
cd $PFSDIR
#
# Run ABAQUS, note that in this case we have provided the names of the input files explicitly
#
abaqus job=test input=<my_input_file_name1>.inp cpus=$SLURM_NTASKS interactive
#
# Now, move data back once the simulation has completed
#
mv * $SLURM_SUBMIT_DIR

NOTE:

  • If you request a partial node for a serial job (cores<28), you need to add the 'mp_mode=threads' option in order to get full performance; see the sketch after this list.
  • Specify  cpus=<n>  in the execution line, where n=nodes*cores.
  • Everything else is similar to the serial script above.
  • Usage of a user-defined material (UMAT) script in Fortran is limited on Owens as follows:
    1. abaqus 2017: runs correctly on single and multiple nodes
    2. abaqus 6.14 and 2016: run correctly on a single node only
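
A minimal sketch of the shared-memory execution line mentioned in the first note above (the input file name and core count are placeholders):

abaqus job=test input=<my_input_file_name>.inp cpus=14 mp_mode=threads interactive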

Further Reading

 


AFNI

AFNI (Analysis of Functional Neuro Images) is a leading software suite of C, Python, and R programs and shell scripts primarily developed for the analysis and display of multiple MRI modalities: anatomical, functional MRI (FMRI), and diffusion weighted (DW) data. It is freely available (both as open source code and as precompiled binaries) for research purposes.

Availability and Restrictions

Versions

The following versions are available on OSC clusters:

Version    Owens  Pitzer
2021.6.10  X*     X*
* Current default version

You can use module spider afni to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

AFNI is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

AFNI is distributed freely under the GNU General Public License. Major portions of this software were written at the Medical College of Wisconsin, which owns the copyright to that code. For full details, see http://afni.nimh.nih.gov/pub/dist/src/README.copyright.

Usage

Usage on Pitzer

Set-up

To configure your environment for use of AFNI, run the following command: module load afni. The default version will be loaded. To select a particular AFNI version, use module load afni/version. For example, use module load afni/2021.6.10 to load AFNI 2021.6.10.

AFNI is installed in a Singularity container. The AFNI_IMG environment variable contains the container image file path. An example usage would be:

module load afni
singularity exec $AFNI_IMG suma

This command will open the SUMA GUI environment; we recommend using OnDemand VDI or Desktop for GUI applications.

For more information about Singularity usage, please read the OSC Singularity page.
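
Non-GUI AFNI programs can also be run through the container inside a batch job. Below is a minimal sketch (the dataset name and project account are placeholders):

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=4
#SBATCH --account=<project-account>

module load afni
# Run a command-line AFNI program inside the container
singularity exec $AFNI_IMG 3dinfo my_dataset+orig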

Usage on Owens

Set-up

To configure your environment for use of AFNI, run the following command: module load afni. The default version will be loaded. To select a particular AFNI version, use module load afni/version. For example, use module load afni/2021.6.10 to load AFNI 2021.6.10.

AFNI is installed in a Singularity container. The AFNI_IMG environment variable contains the container image file path. An example usage would be:

module load afni
singularity exec $AFNI_IMG suma

This command will open the SUMA GUI environment; we recommend using OnDemand VDI or Desktop for GUI applications.

For more information about Singularity usage, please read the OSC Singularity page.

Further Reading


AMBER

The Assisted Model Building with Energy Refinement (AMBER) package, which includes AmberTools, contains many molecular simulation programs targeted at biomolecular systems. A wide variety of modelling techniques are available. It generally scales well on modest numbers of processors, and the GPU enabled CUDA programs are very efficient.

Availability and Restrictions

Versions

AMBER is available on the Owens, Pitzer, and Ascend clusters. The following versions are currently available at OSC (S means serial executables, P means parallel, and C means CUDA, i.e., GPU enabled):

Version  Owens  Pitzer  Ascend  Notes
18       SPC    SPC
19       SPC*   SPC*
20       SPC    SPC     SPC
22       SPC    SPC     SPC
* Current default version
IMPORTANT NOTE: You need to load the correct compiler and MPI modules before you use Amber. In order to find out what modules you need, use  module spider amber/{version} .

You can use module spider amber to view available modules and use module spider amber/{version} to view installation details including applied Amber updates. Feel free to contact OSC Help if you need other versions or executables for your work.

Access for Academic Users

OSC's Amber is available to not-for-profit OSC users; simply contact OSC Help to request the appropriate form for access.

Access for Commercial Users

For-profit OSC users must obtain their own Amber license. 

Publisher/Vendor/Repository and License Type

University of California, San Francisco, Commercial

Usage

Usage on Owens

Set-up

To load the default version of the AMBER module, use  module load amber . To select a particular software version, use  module load amber/version . For example, use  module load amber/22  to load AMBER version 22.

Using AMBER

A serial Amber program in a short duration run can be executed interactively on the command line, e.g.:

tleap

Parallel Amber programs must be run in a batch environment with  srun, e.g.:

srun pmemd.MPI

 

Batch Usage

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your AMBER simulation to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session
For an interactive batch session, one can run the following command:
sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00
which gives you one node with 28 cores ( -N 1 -n 28 ), with 1 hour ( -t 1:00:00 ). You may adjust the numbers per your need.
Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and Amber input files are available here:

~srb/workshops/compchem/amber/

Below is the example batch script ( job.txt ) for a serial run:

#!/bin/bash
# AMBER Example Batch Script for the Basic Tutorial in the Amber manual
#SBATCH --job-name 6pti
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --time=0:20:00
#SBATCH --account=<project-account>

module load amber
# Use TMPDIR for best performance.
cd $TMPDIR
# SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
cp -p $SLURM_SUBMIT_DIR/6pti.prmtop .
cp -p $SLURM_SUBMIT_DIR/6pti.prmcrd .
# Running minimization for BPTI
cat << eof > min.in
# 200 steps of minimization, generalized Born solvent model
&cntrl
maxcyc=200, imin=1, cut=12.0, igb=1, ntb=0, ntpr=10,
/
eof
sander -i min.in -o 6pti.min1.out -p 6pti.prmtop -c 6pti.prmcrd -r 6pti.min1.xyz
cp -p min.in 6pti.min1.out 6pti.min1.xyz $SLURM_SUBMIT_DIR

In order to run it via the batch system, submit the  job.txt  file with the command:  sbatch job.txt
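
A parallel run follows the same structure but launches the MPI executable with srun. Below is a minimal sketch (md.in is a hypothetical molecular dynamics input file; the topology and coordinate files are those from the example above):

#!/bin/bash
#SBATCH --job-name 6pti_md
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --time=1:00:00
#SBATCH --account=<project-account>

module load amber
# Use TMPDIR for best performance.
cd $TMPDIR
cp -p $SLURM_SUBMIT_DIR/6pti.prmtop $SLURM_SUBMIT_DIR/6pti.prmcrd $SLURM_SUBMIT_DIR/md.in .
# Run the MPI build of pmemd on all allocated tasks
srun pmemd.MPI -O -i md.in -o 6pti.md.out -p 6pti.prmtop -c 6pti.prmcrd -r 6pti.md.rst
cp -p 6pti.md.out 6pti.md.rst $SLURM_SUBMIT_DIR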

Usage on Pitzer

Set-up

To load the default version of the AMBER module, use  module load amber .

Using AMBER

A serial Amber program in a short duration run can be executed interactively on the command line, e.g.:

tleap

Parallel Amber programs must be run in a batch environment with  srun , e.g.:

srun pmemd.MPI

 

Batch Usage

When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your AMBER simulation to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

Interactive Batch Session
For an interactive batch session, one can run the following command:
sinteractive -A <project-account> -N 1 -n 48 -t 1:00:00
which gives you one node with 48 cores ( -N 1 -n 48) with 1 hour ( -t 1:00:00). You may adjust the numbers per your need.
Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and Amber input files are available here:

~srb/workshops/compchem/amber/

Below is the example batch script ( job.txt ) for a serial run:

#!/bin/bash
# AMBER Example Batch Script for the Basic Tutorial in the Amber manual
#SBATCH --job-name 6pti
#SBATCH --nodes=1 --ntasks-per-node=48
#SBATCH --time=0:20:00
#SBATCH --account=<project-account>

module load amber
# Use TMPDIR for best performance.
cd $TMPDIR
# SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
cp -p $SLURM_SUBMIT_DIR/6pti.prmtop .
cp -p $SLURM_SUBMIT_DIR/6pti.prmcrd .
# Running minimization for BPTI
cat << eof > min.in
# 200 steps of minimization, generalized Born solvent model
&cntrl
maxcyc=200, imin=1, cut=12.0, igb=1, ntb=0, ntpr=10,
/
eof
sander -i min.in -o 6pti.min1.out -p 6pti.prmtop -c 6pti.prmcrd -r 6pti.min1.xyz
cp -p min.in 6pti.min1.out 6pti.min1.xyz $SLURM_SUBMIT_DIR

In order to run it via the batch system, submit the  job.txt  file with the command:  sbatch job.txt .

Troubleshooting

In general, the scientific method should be applied to usage problems. Users should check all inputs and examine all outputs for the first signs of trouble. When one cannot find issues with one's inputs, it is often helpful to ask fellow humans, especially labmates, to review the inputs and outputs. Reproducibility of molecular dynamics simulations is subject to many caveats. See page 24 of the Amber18 manual for a discussion.

Further Reading


ANSYS

ANSYS offers a comprehensive software suite that spans the entire range of physics, providing access to virtually any field of engineering simulation that a design process requires. Support is provided by ANSYS, Inc.

Availability and Restrictions

Versions

Version  Owens
17.2     X
18.1     X
19.1     X
19.2     X
2019R1   X
2019R2   X
2020R1   X*
2020R2   X
2021R1   X
2021R2   X
2022R1   X
2022R2   X
2023R2   X
* Current default version

OSC has an Academic Multiphysics Campus Solution license from Ansys. The license includes most of the features that Ansys provides. See "Academic Multiphysics Campus Solution Products" in this table for all available products at OSC.

Due to the license upgrade, ANSYS only works with versions 2021R1 or newer. We are working with the vendor to fix the issue.

Access for Academic Users

OSC has an "Academic Research " license for ANSYS. This allows for academic use of the software by Ohio faculty and students, with some restrictions. To view current ANSYS node restrictions, please see ANSYS's Terms of Use.

Use of ANSYS products at OSC for academic purposes requires validation. Please contact OSC Help for further instruction.

Access for Commercial Users

Contact OSC Help for getting access to ANSYS if you are a commercial user.

Publisher/Vendor/Repository and License Type

Ansys, Inc., Commercial

Usage

For more information on how to use each ANSYS product at OSC systems, refer to its documentation page provided at the end of this page.

Known Issues

Cryptic error when multiple jobs load the Fluent or ANSYS module simultaneously

Due to the way our Fluent and ANSYS modules are configured, loading either module in multiple jobs simultaneously will cause a cryptic error. The most common case is when several of a user's jobs start at the same time and all load the module at once. For this error to manifest, the modules have to be loaded at precisely the same time; this is rare, but probable over the long term.

If you encounter this error you are not at fault. Please resubmit the failed job(s).

If you frequently submit large amounts of Fluent or ANSYS jobs, we recommend you stagger your job submit times to lower the chances of two jobs starting at the same time, and hence loading the module at the same time. Another solution is to establish job dependencies between jobs, so jobs will only start one after another. To do this, you would add the SLURM directive:

#SBATCH --dependency=after:jobid

to jobs you want to start only after another job has started, replacing jobid with the job ID of the job to wait for (see the command-line sketch below). If you have additional questions, please contact OSC Help.
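
For example, a minimal sketch of chaining two submissions from the command line (the job script names are placeholders):

first=$(sbatch --parsable job1.txt)
sbatch --dependency=after:${first} job2.txt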

Ansys DesignModeler with hardware acceleration

Updated: April 2022
Versions Affected:  < 19.1
Ansys DesignModeler with hardware acceleration is not working. With Ansys versions greater than 19.1, DesignModeler works in software rendering mode, but it is very slow.

 

Further Reading

See Also


ANSYS Mechanical

ANSYS Mechanical is a finite element analysis (FEA) tool that enables you to analyze complex product architectures and solve difficult mechanical problems. You can use ANSYS Mechanical to simulate real world behavior of components and sub-systems, and customize it to test design variations quickly and accurately.

Availability and Restrictions

ANSYS Mechanical is available on the Owens Cluster. You can see the currently available versions in the table on the main Ansys page here.

You can use module spider ansys to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of ANSYS for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Access for Commercial Users

Contact OSC Help for getting access to ANSYS if you are a commercial user.

Usage

Usage on Owens

Set-up on Owens

To load the default version of the ANSYS module, use  module load ansys . To select a particular software version, use  module load ansys/version . For example, use  module load ansys/17.2  to load ANSYS version 17.2 on Owens.

Using ANSYS Mechanical

Following a successful loading of the ANSYS module, you can access the ANSYS Mechanical commands and utility programs located in your execution path:

ansys <switch options> <file>

The ANSYS Mechanical command takes a number of Unix-style switches and parameters.

The -j Switch

The command accepts a -j switch. It specifies the "job id," which determines the naming of output files. The default is the name of the input file.

The -d Switch

The command accepts a -d switch. It specifies the device type. The value can be X11, x11, X11C, x11c, or 3D.

The -m Switch

The command accepts a -m switch. It specifies the amount of working storage obtained from the system. The units are megawords.

The memory requirement for the entire execution will be approximately 5300000 words more than the -m specification. This is calculated for you if you use ansnqs to construct an NQS request.

The -b [nolist] Switch

The command accepts a -b switch. It specifies that no user input is expected (batch execution).

The -s [noread] Switch

The command accepts a -s switch. By default, the start-up file is read during an interactive session and not read during batch execution. These defaults may be changed with the -s command line argument. The noread option of the -s argument specifies that the start-up file is not to be read, even during an interactive session. Conversely, the -s argument with the -b batch argument forces the reading of the start-up file during batch execution.

The -g [off] Switch

The command accepts a -g switch. It specifies that the ANSYS graphical user interface is started automatically.
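
For example, a sketch combining the -j and -b switches for a batch-mode run ( ansys.in is the example input file used later on this page; the job id is a placeholder):

ansys -b -j mymodel < ansys.in > ansys.out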

ANSYS Mechanical parameters

ANSYS Mechanical parameters may be assigned values on the command line. The parameter must be at least two characters long and must be a legal parameter name. The ANSYS Mechanical parameter that is to be assigned a value should be given on the command line with a preceding dash (-), a space immediately after, and the value immediately after the space:

module load ansys
ansys -pval1 -10.2 -EEE .1e6
# sets pval1 to -10.2 and EEE to 100000

Batch Usage on Owens

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your ANSYS Mechanical analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. It is desirable for big problems since more resources can be used.

Interactive Batch Session

Interactive mode is similar to running ANSYS Mechanical on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster, by turning on X11 forwarding. The intention is that users can run ANSYS Mechanical interactively for the purpose of building their model and preparing their input file. Once developed, this input file can then be run in non-interactive batch mode.

To run interactive ANSYS Mechanical, a batch job needs to be submitted from the login node, to request the necessary compute resources, with X11 forwarding. For example, the following command requests one whole node with 28 cores ( -N 1 -n 28 ), for a walltime of 1 hour ( -t 1:00:00 ), with one ANSYS license:

sinteractive -N 1 -n 28 -t 1:00:00 -L ansys@osc:1,ansyspar@osc:24 -A <account>

You may adjust the numbers per your need. This job will queue until resources become available. Once the job is started, you're automatically logged in on the compute node, and you can launch ANSYS Mechanical and start the graphical interface with the following commands:

module load ansys
ansys -g
Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. For a given model, prepare the input file with ANSYS Mechanical commands (named  ansys.in  for example) for the batch run. Below is the example batch script (   job.txt ) for a serial run:

#!/bin/bash
#SBATCH --job-name=ansys_test
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -L ansys@osc:1
#SBATCH --account=<account>

cd $TMPDIR  
cp $SLURM_SUBMIT_DIR/ansys.in .    
module load ansys  
ansys < ansys.in   
cp <output files> $SLURM_SUBMIT_DIR

In order to run it via the batch system, submit the  job.txt  file with the command:  sbatch job.txt .

Non-interactive Batch Job (Parallel Run)

To take advantage of the powerful compute resources at OSC, you may choose to run distributed ANSYS Mechanical for large problems. Multiple nodes and cores can be requested to accelerate the solution time. Note that you'll need to change your batch script slightly for distributed runs.

Starting from September 15, 2015, a job using HPC tokens (with the "ansyspar" flag) should be submitted to the Owens cluster due to a scheduler issue.

For distributed ANSYS Mechanical jobs, the number of processors needs to be specified in the command line with options '-dis -np':

#!/bin/bash
#SBATCH --job-name=ansys_test
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --account=<account>
#SBATCH -L ansys@osc:1,ansyspar@osc:24

...
ansys -b -dis -mpi ibmmpi -np ${SLURM_NTASKS} -i ansys.in
 
...

Notice that in the script above, the ansys parallel license is requested as well as ansys license in the format of

#SBATCH -L ansys@osc:1,ansyspar@osc:n

where n=m-4, with m being the total number of CPUs requested for this job. This part is necessary when the total number of CPUs requested is greater than 4 (m>4), which applies to the parallel example below as well.
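
A quick sketch of computing the token count before writing the #SBATCH -L line (the node and core counts are examples):

nodes=2; cores_per_node=28
m=$(( nodes * cores_per_node ))
echo "#SBATCH -L ansys@osc:1,ansyspar@osc:$(( m - 4 ))"   # prints ansyspar@osc:52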

The following shows changes in the batch script if 2 nodes on Owens are requested for a parallel ANSYS Mechanical job:

#!/bin/bash
#SBATCH --job-name=ansys_test
#SBATCH --time=3:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH -L ansys@osc:1,ansyspar@osc:52

...
ansys -b -dis -mpi ibmmpi -np ${SLURM_NTASKS} -i ansys.in 
...
pbsdcp -g '<output files>' $SLURM_SUBMIT_DIR

The 'pbsdcp -g' command in the last line in the script above makes sure that all result files generated by different compute nodes are copied back to the work directory.

Further Reading

See Also


CFX

ANSYS CFX (called CFX hereafter) is a computational fluid dynamics (CFD) program for modeling fluid flow and heat transfer in a variety of applications.

Availability and Restrictions

CFX is available on the Owens Cluster. You can see the currently available versions in the table on the main Ansys page here.

You can use module spider ansys  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Currently, there are in total 50 ANSYS base license tokens and 900 HPC tokens for academic users. These base tokens and HPC tokens are shared with all ANSYS products we have at OSC. A base license token will allow CFX to use up to 4 cores without any additional tokens. If you want to use more than 4 cores, you will need an additional "HPC" token per core. For instance, a serial CFX job with 1 core will need 1 base license token while a parallel CFX job with 28 cores will need 1 base license token and 24 HPC tokens.

Access for Commercial Users

Contact OSC Help for getting access to CFX if you are a commercial user.

Usage

Usage on Owens

Set-up on Owens

To load the default version, use  module load ansys . To select a particular software version, use  module load ansys/version . For example, use  module load ansys/17.2  to load CFX version 17.2 on Owens.

Batch Usage on Owens

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. It is desirable for big problems since more resources can be used.

Interactive Batch Session

Interactive mode is similar to running CFX on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster, by turning on X11 forwarding. The intention is that users can run CFX interactively for the purpose of building their model and preparing their input file. Once developed, this input file can then be run in non-interactive batch mode.

To run the interactive CFX GUI, a batch job needs to be submitted from the login node, to request the necessary compute resources, with X11 forwarding. Please follow the steps below to use the CFX GUI interactively:

  1. Ensure that your SSH client software has X11 forwarding enabled
  2. Connect to Owens system
  3. Request an interactive job. The command below will request one whole node with 28 cores (  -N 1 -n 28  ), for a walltime of one hour ( -t 1:00:00 ), with one ANSYS CFD license (modify as per your own needs):
    sinteractive -N 1 -n 28 -t 1:00:00 -L ansys@osc:1,ansyspar@osc:24
  4. Once the interactive job has started, run the following commands to setup and start the CFX GUI:

    module load ansys
    cfx5 
    
Non-interactive Batch Job (Serial Run Using 1 Base Token)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.

Below is the example batch script ( job.txt ) for a serial run with an input file ( test.def ):

#!/bin/bash
#SBATCH --job-name=serialjob_cfx
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -L ansys@osc:1

#Set up CFX environment.
module load ansys
#Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR
#Run CFX in serial with test.def as input file
cfx5solve -batch -def test.def 
#Finally, copy files back to your home directory
cp  * $SLURM_SUBMIT_DIR

In order to run it via the batch system, submit the job.txt  file with the command: sbatch job.txt  

Non-interactive Batch Job (Parallel Execution using HPC token)

CFX can be run in parallel, but it is very important that you read the documentation in the CFX Manual on the details of how this works.

In addition to requesting the base license token ( -L ansys@osc:1 ), you need to request copies of the ansyspar license, i.e., HPC tokens ( -L ansys@osc:1,ansyspar@osc:[n] ), where [n] is equal to the number of cores you requested minus 4.

Parallel jobs have to be submitted on Owens via the batch system. An example of the batch script follows:

#!/bin/bash
#SBATCH --job-name=paralleljob_cfx
#SBATCH --time=10:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH -L ansys@osc:1,ansyspar@osc:52

#Set up CFX environment.
module load ansys
#Copy CFX files like .def to $TMPDIR and move there to execute the program
cp test.def $TMPDIR/
cd $TMPDIR
#Convert the node information into format for CFX host list
nodes=$(srun hostname | sort | \
uniq -c | \
awk '{print $2 "*" $1}' | \
paste -sd, -)
#Run CFX in parallel with new.def as input file
#if multiple nodes
cfx5solve -batch -def test.def  -par-dist $nodes -start-method "Platform MPI Distributed Parallel"
#if one node
#cfx5solve -batch -def test.def -par-dist $nodes -start-method "Platform MPI Local Parallel"
#Finally, copy files back to your home directory
cp  * $SLURM_SUBMIT_DIR

Further Reading


FLUENT

ANSYS FLUENT (called FLUENT hereafter) is a state-of-the-art computer program for modeling fluid flow and heat transfer in complex geometries.

Availability and Restrictions

FLUENT is available on the Owens Cluster. You can see the currently available versions in the table on the main Ansys page here.

You can use module spider ansys on Owens to view available modules. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Currently, there are in total 50 ANSYS base license tokens and 900 HPC tokens for academic users. These base tokens and HPC tokens are shared with all ANSYS products we have at OSC.  A base license token will allow FLUENT to use up to 4 cores without any additional tokens. If you want to use more than 4 cores, you will need an additional "HPC" token per core. For instance, a serial FLUENT job with 1 core will need 1 base license token while a parallel FLUENT job with 28 cores will need 1 base license token and 24 HPC tokens.

Access for Commercial Users

Contact OSC Help for getting access to FLUENT if you are a commercial user.

Usage

Usage on Owens

Set-up on Owens

To load the default version of FLUENT module, use  module load ansys. To select a particular software version, use module load ansys/version. For example, use  module load ansys/17.2  to load FLUENT version 17.2 on Owens. 

Batch Usage on Owens

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your FLUENT analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.  Batch jobs run on the compute nodes of the system and not on the login node. It is desirable for big problems since more resources can be used.

Interactive Batch Session

Interactive mode is similar to running FLUENT on a desktop machine in that the graphical user interface will be sent from OSC and displayed on the local machine. Interactive jobs are run on compute nodes of the cluster, by turning on X11 forwarding. The intention is that users can run FLUENT interactively for the purpose of building their model and preparing their input file. Once developed this input file can then be run in non-interactive batch mode.

To run the interactive FLUENT GUI, a batch job needs to be submitted from the login node, to request the necessary compute resources, with X11 forwarding. Please follow the steps below to use the FLUENT GUI interactively:

  1. Ensure that your SSH client software has X11 forwarding enabled
  2. Connect to Owens system
  3. Request an interactive job. The command below will request one whole node with 28 cores ( -N 1 -n 28), for a walltime of one hour (-t 1:00:00), with one FLUENT license (modify as per your own needs):
    sinteractive -N 1 -n 28 -t 1:00:00 -L ansys@osc:1,ansyspar@osc:24
  4. Once the interactive job has started, run the following commands to setup and start the FLUENT GUI:

    module load ansys
    fluent 
    
Non-interactive Batch Job (Serial Run Using 1 Base Token)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.

Below is the example batch script ( job.txt) for a serial run with an input file (run.input) on Owens:

#!/bin/bash
#SBATCH --job-name=serial_fluent
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -L ansys@osc:1
#
# The following lines set up the FLUENT environment
#
module load ansys
#
# Copy files to $TMPDIR and move there to execute the program
#
cp test_input_file.cas test_input_file.dat run.input $TMPDIR
cd $TMPDIR
#
# Run fluent
fluent 3d -g < run.input  
#
# Where the file 'run.input' contains the commands you would normally
# type in at the Fluent command prompt.
# Finally, copy files back to your home directory
cp *   $SLURM_SUBMIT_DIR 

As an example, your run.input file might contain:

file/read-case-data test_input_file.cas 
solve/iterate 100
file/write-case-data test_result.cas
file/confirm-overwrite yes    
exit  
yes  

In order to run it via the batch system, submit the job.txt file with the command: sbatch job.txt 

Non-interactive Batch Job (Parallel Execution using HPC token)

FLUENT can be run in parallel, but it is very important that you read the documentation in the FLUENT Manual on the details of how this works.

In addition to requesting the FLUENT base license token (-L ansys@osc:1), you need to request copies of the ansyspar license, i.e., HPC tokens (-L ansys@osc:1,ansyspar@osc:[n]), where [n] is equal to the number of cores you requested minus 4.

Parallel jobs have to be submitted to Owens via the batch system. An example of the batch script follows:

#!/bin/bash
#SBATCH --job-name=parallel_fluent
#SBATCH --time=3:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH -L ansys@osc:1,ansyspar@osc:52
set -x
hostname   
#   
# The following lines set up the FLUENT environment   
#   
module load ansys
#      
# Create the config file for socket communication library   
#   
# Create list of nodes to launch job on   
rm -f pnodes   
srun hostname | sort > pnodes
export ncpus=`cat pnodes | wc -l`   
#   
#   Run fluent   
fluent 3d -t$ncpus -pinfiniband.ofed -cnf=pnodes -g < run.input 

Known Issues

Parallel job hang and startup failed

Resolution: Resolved with workaround
Update: April 2024
Version: All

FLUENT parallel jobs with default MPI (Intel MPI) may experience startup failures, leading to job hang due to a recent Slurm upgrade. Intel MPI in FLUENT uses SSH as the default bootstrap mechanism to launch the Hydra process manager. Starting with Slurm version 23.11, the environment variable I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS=--external-launcher is added because Slurm is set as the default bootstrap system (I_MPI_HYDRA_BOOTSTRAP=slurm). However, this causes an issue when SSH is utilized as the bootstrap system.

Workaround

Add export -n I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS before executing the fluent command.
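
In a job script, the workaround looks like the following sketch (the fluent command line mirrors the parallel example above):

module load ansys
# Drop the Slurm-injected bootstrap arguments so Intel MPI's SSH bootstrap works
export -n I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
fluent 3d -t$ncpus -cnf=pnodes -g < run.input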

Reference

Further Reading

See Also


Workbench Platform

ANSYS Workbench platform is the backbone for delivering a comprehensive and integrated simulation system to users. See ANSYS Workbench platform for more information. 

Availability and Restrictions

ANSYS Workbench is available on Owens Cluster. You can see the currently available versions in the table on the main Ansys page here.

You can use module spider ansys  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

Use of ANSYS products for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

Access for Commercial Users

Contact OSC Help for getting access to ANSYS if you are a commercial user.

Usage

Usage on Owens

Set-up for Structural-Fluid dynamics related applications

To load the default version, use  module load ansys . To select a particular software version, use  module load ansys/version . For example, use  module load ansys/17.2  to load version 17.2 on Owens. After the module is loaded, use the following command to open the Workbench GUI:

runwb2

Set-up for CFD related applications

To load the default version, use  module load ansys . To select a particular software version, use  module load ansys/version . For example, use  module load ansys/17.2  to load version 17.2 on Owens. After the module is loaded, use the following command to open the Workbench GUI:

runwb2

Further Reading

See Also


ARM HPC tools

The ARM HPC tools analyze how HPC software runs. The suite consists of three applications, ARM DDT, ARM Performance Reports, and ARM MAP:

  • ARM DDT: graphical debugger for HPC applications.
  • ARM MAP: HPC application profiler with easy-to-use GUI environment.
  • ARM Performance Reports: simple tool to generate a single-page HTML or plain text report that presents overall performance characteristics of HPC applications.

 

NOTE: Because ARM has acquired Allinea, all Allinea module files have been renamed accordingly. Allinea modules are still available and have the same functionality as the new ARM modules.
NOTE [June 29, 2022]: As ARM reported security vulnerabilities on the old ARM Forge versions prior to 22.0.x, we have removed the old versions and installed 22.0.2 version.

Availability & Restrictions

Versions

The following versions of ARM HPC tools are available on OSC clusters:

Version Owens Pitzer
22.0.2 X* X*
* Current default version

You can use module spider arm to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

ARM DDT, MAP and Performance Reports are available to all OSC users.

Publisher/Vendor/Repository and License Type

ARM, Commercial

Usage

ARM DDT

ARM DDT is a debugger for HPC software that automatically alerts users of memory bugs and divergent behavior. For more features and benefits, visit ARM HPC tools and libraries - DDT.

For usage instructions and more information, read ARM DDT.

ARM MAP

ARM MAP produces a detailed profile of HPC software. Unlike ARM Performance Reports, you must have the source code to run ARM MAP because its analysis details the software line-by-line. For more features and benefits, visit ARM HPC tools and libraries - MAP

For usage instructions and more information, read ARM MAP.

ARM Performance Reports

ARM Performance Reports analyzes and documents information on CPU, MPI, I/O, and Memory performance characteristics of HPC software, even third party code, to aid understanding about the overall performance. Although it should not be used all the time, ARM Performance Reports is recommended to OSC users as a viable option to analyze how an HPC application runs. View an example report to navigate the format of a typical report. For more example reports, features and benefits, visit ARM HPC tools and libraries - Performance Reports.

For usage instructions and more information, read ARM Performance Reports.

Troubleshooting

Using ARM software with MVAPICH2

This note from ARM's Getting Started Guide applies to both perf-report and MAP:

Some MPIs, most notably MVAPICH, are not yet supported by ARM's Express Launch mode
(in which you can just put “perf-report” in front of an existing mpirun/mpiexec line). These can
still be measured using the Compatibility Launch mode.

Instead of this Express Launch command:

perf-report mpiexec <mpi args> <program> <program args> # BAD

Use the compatibility launch version instead:

perf-report -n <num procs> --mpiargs="<mpi args>" <program> <program args>

Further Reading

See Also


ARM Performance Reports

ARM Performance Reports is a simple tool used to generate a single-page HTML or plain text report that presents the overall performance characteristics of HPC applications. It supports pthreads, OpenMP, or MPI code on CPU, GPU, and MIC based architectures.

Availability and Restrictions

Versions

The versions currently available at OSC are:

Version Owens Pitzer
22.0.2 X* X*
* Current default version

You can use module spider arm-pr to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

ARM Performance Reports is available to all OSC users. We have 64 seats with 64 HPC tokens. Users can monitor the license status here.

Publisher/Vendor and License Type

ARM, Commercial

Usage

Set-up

To load the module for the ARM Performance Reports default version, use module load arm-pr. To select a particular software version, use module load arm-pr/version. For example, use module load arm-pr/6.0 to load ARM Performance Reports version 6.0, provided the version is available on the OSC cluster in use.

Using ARM Performance Reports

You can use your regular executables to generate performance reports. The program can be used to analyze third-party code as well as code you develop yourself. Performance reports are normally generated in a batch job.

To generate a performance report for an MPI program:

module load arm-pr
perf-report -np <num procs> --mpiargs="<mpi args>" <program> <program args>

where <num procs> is the number of MPI processes to use, <mpi args> represents arguments to be passed to mpiexec (other than -n or -np), <program> is the executable to be run and <program args> represents arguments passed to your program.

For example, if you normally run your program with mpiexec -n 12 wave_c, you would use

perf-report -np 12 wave_c
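
Since reports are normally generated in a batch job, a minimal job-script sketch wrapping the example above would be (the project account is a placeholder):

#!/bin/bash
#SBATCH --nodes=1 --ntasks-per-node=12
#SBATCH --time=0:30:00
#SBATCH --account=<project-account>

module load arm-pr
# Generates both HTML and plain-text reports in the working directory
perf-report -np 12 wave_c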

To generate a performance report for a non-MPI program:

module load arm-pr
perf-report --no-mpi <program> <program args>

The performance report is created in both html and plain text formats. The file names are based on the executable name, number of processes, date and time, for example,  wave_c_12p_2016-02-05_12-46.html. To open the report in html format use

firefox wave_c_12p_2016-02-05_12-46.html

For more details, download the ARM Performance Reports User Guide.

Performance Reports with GPU

ARM Performance Reports can be used for CUDA codes. If you have an executable compiled with the CUDA library, you can launch ARM Performance Reports with

perf-report {executable}

For more information, please read the section 6.10 of the ARM Performance Reports User Guide.

Further Reading

See Also


ARM MAP

ARM MAP is a full scale profiler for HPC programs. We recommend using ARM MAP after reviewing reports from ARM Performance Reports. MAP supports pthreads, OpenMP, and MPI software on CPU, GPU, and MIC based architectures.

Availability & Restrictions

Versions

The ARM MAP versions currently available at OSC are:

Version Owens Pitzer
22.0.2 X* X*
* Current default version

You can use module spider arm-map to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

ARM MAP is available to all OSC users. We have 64 seats with 80 HPC tokens. Users can monitor the ARM License Server Status.

Publisher/Vendor and License Type

ARM, Commercial

Usage

Set-up

To load the default version of the ARM MAP module, use module load arm-map. To select a particular software version, use module load arm-map/version. For example, use module load arm-map/6.0 to load ARM MAP version 6.0, provided the version is available on the cluster in use. 

Note: Before you run MAP from the command line for the first time, open MAP as a GUI from OnDemand to configure with appropriate settings for your environment.

Using ARM MAP

Profiling HPC software with ARM MAP typically involves three steps: 

1. Prepare the executable for profiling.

Regular executables can be profiled with ARM MAP, but source code line detail will not be available. You need executables with debugging information to view source code line detail: re-compile your code with a -g  option added among the other appropriate compiler options. For example:

mpicc wave.c -o wave -g -O3

This executable built with the debug flag can be used for ARM Performance Reports as well.

Note: The -g flag turns off all optimizations by default. For profiling your code you should use the same optimizations as your regular executable, so explicitly include the -On flag, where n is your normal level of optimization, typically -O2 or -O3, as well as any other compiler optimization options.

2. Run your code to produce the profile data file (.map file).

Profiles are normally generated in a batch job.  To generate a MAP profile for an MPI program:

module load arm-map
map --profile -np <num proc> --mpiargs="<mpi args>" <program> <program args>

where <num procs> is the number of MPI processes to use, <mpi args> represents arguments to be passed to srun (other than -n), <program> is the executable to be run and <program args> represents arguments passed to your program.

For example, if you normally run your program with mpiexec -n 12 wave_c, you would use

map --profile -np 12 wave_c

To profile a non-MPI program:

module load arm-map
map --profile --no-mpi <program> <program args>

The profile data is saved in a .map file in your current directory. The file name is based on the executable name, number of processes, date and time, for example, wave_c_12p_2016-02-05_12-46.map.

For more details on using ARM MAP, refer to the ARM Forge User Guide.

3. Analyze the profile data file using either the ARM local client or the MAP GUI.

You can open the profile data file using a client running on your local desktop computer. For client installation and usage instructions, please refer to the section: Client Download and Setup. This option typically offers the best performance.

Alternatively, you can run MAP in interactive mode, which launches the graphical user interface (GUI).  For example:

map wave_c_12p_2016-02-05_12-46.map

For the GUI application, one should use an OnDemand VDI (Virtual Desktop Interface) or have X11 forwarding enabled (see Setting up X Windows). Note that X11 forwarding can be distractingly slow for interactive applications.

MAP with GPU

ARM MAP can be used for CUDA codes. If you have an executable compiled with the CUDA library, you can launch ARM MAP with

map {executable}

For more information, please read the Chapter 15 of the ARM Forge User Guide.

Client Download and Setup

1. Download the client.

To download the client, go to the ARM website and choose the appropriate ARM Forge remote client download for Windows, Mac, or Linux. For Windows and Mac, just double click on the downloaded file and allow the installer to run. For Linux, extract the tar file using the command tar -xf file_name and run the installer in the extracted file directory with ./installer. Please contact OSC Help, if you have any issues on downloading the client.

2. Configure the client.

After installation, you can configure the client as follows:

  • Open the client program. For Windows or Mac, just click the desktop icon or navigate to the application through its file path. For Linux use the command {arm-forge-path}/bin/map.

  • Once the program is launched, select ARM MAP in the left column.
  • In the Remote Launch drop down menu, select "Configure...".
  • Click Add to create a new profile for your login.
  • In the Host Name section, type your ssh connection. For example: "username@ruby.osc.edu".
  • For Remote Installation Directory, type /usr/local/arm/forge-{version}, specifying the ARM Forge version number that created the data profile file you are attempting to view. For example, /usr/local/arm/forge-7.0 for ARM Forge version 7.0.
  • You can test your login information by clicking Test Remote Launch. It will ask your password. Use the same password for the cluster login.
  • Close the Configure window. You will see a new option under the Remote Launch drop down menu for the host name you entered. Select your profile and login with your password. 
  • If the login was successful, then you should see License Serial:XXX in the bottom left corner of the window.

This login configuration is needed only for the first time of use. In subsequent times, you can just select your profile.

3. Open the profile data file.

After login, click on LOAD PROFILE DATA FILE. This opens a file browser of your home directory on the OSC cluster you logged onto. Go to the directory that contains the .map file and select it. This will open the file and allow you to navigate the source code line-by-line and investigate the performance characteristics. 

A license is not required to simply open the client, so it is possible to skip 2. Configure the client, if you download the profile data file to your desktop. You can then open it by just selecting LOAD PROFILE DATA FILE and navigating through a file browser on your local system.

Note that the client is ARM Forge, a client that contains ARM MAP and ARM DDT. ARM DDT is a debugger, and OSC has license only for ARM MAP. If you need a debugger, you can use Totalview instead.

Further Reading

See Also


ARM DDT

Arm DDT is a graphical debugger for HPC applications. It supports pthreads, OpenMP, or MPI code on CPU, GPU, and MIC based architectures.

Availability & Restrictions

Versions

The Arm DDT versions currently available at OSC are:

Version Owens Pitzer
22.0.2 X* X*
* Current default version

You can use module spider arm-ddt to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

Arm DDT is available to all OSC users. We have 64 seats with 80 HPC tokens. Users can monitor the Arm License Server Status.

Publisher/Vendor and License Type

ARM, Commercial

Usage

Set-up

To load the module for the Arm DDT default version, use module load arm-ddt. To select a particular software version, use module load arm-ddt/version. For example, use module load arm-ddt/7.0 to load Arm DDT version 7.0, provided the version is available on the OSC cluster in use.

Note: Before you run DDT from the command line for the first time, open DDT as a GUI from OnDemand to configure with appropriate settings for your environment.

Using Arm DDT

DDT debugs executables to generate DDT reports. The program can be used to debug third-party code as well as code you develop yourself. DDT reports are normally generated in a batch job.

To generate a DDT report for an MPI program:

module load arm-ddt
ddt --offline -np <num procs> --mpiargs="<mpi args>" <program> <program args>

where <num procs> is the number of MPI processes to use, <mpi args> represents arguments to be passed to mpiexec (other than -n or -np), <program> is the executable to be run and <program args> represents arguments passed to your program.

For example, if you normally run your program with mpiexec -n 12 wave_c, you would use

ddt --offline -np 12 wave_c

To debug a non-MPI program:

module load arm-ddt
ddt --offline --no-mpi <program> <program args>

The DDT report is created in html format. The file names are based on the executable name, number of processes, date and time, for example, wave_c_12p_2016-02-05_12-46.html. To open the report use

firefox wave_c_12p_2016-02-05_12-46.html

Using the Arm DDT GUI

To debug with the DDT GUI remove the --offline option. For example, to debug the MPI program above, use

ddt -np 12 wave_c

For a non-MPI program:

ddt --no-mpi <program> <program args>

This will open the DDT GUI, enabling interactive debugging options.

For the GUI application, one should use an OnDemand VDI (Virtual Desktop Interface) or have X11 forwarding enabled (see Setting up X Windows). Note that X11 forwarding can be distractingly slow for interactive applications.

For more details, see the Arm DDT developer page.

DDT with GPU

DDT can be used for CUDA codes. If you have an executable compiled with the CUDA library, you can launch Arm DDT with

ddt {executable}

For more information, please read the chapter 14 of the Arm Forge User Guide.


AlphaFold

AlphaFold is a software package that provides an implementation of the inference pipeline of AlphaFold v2.0. This is a completely new model that was entered in CASP14 and published in Nature.

Availability and Restrictions

Versions

Version Pitzer Ascend Model Parameters
2.0.0 X   2021-07-14
2.1.0 X   2021-10-27
2.1.2 X*   2022-01-19
2.2.2 X X* 2022-03-02; Multimer model weights: v2
2.3.1 X X 2022-12-06; Multimer model weights: v3
2.3.2 X X 2022-12-06; Multimer model weights: v3
* Current default version

You can use module spider alphafold to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

AlphaFold is available for all OSC users

Publisher/Vendor/Repository and License Type

Copyright 2021 DeepMind Technologies Limited

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

The AlphaFold parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode.

Usage

Usage on Pitzer

Set-up

To load the default version of AlphaFold module, use module load alphafold.

Batch Usage

Below is the example batch script (job.txt) for an alphafold job:

#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --gpus-per-node=1
#SBATCH --gpu_cmode=shared


module reset
module load alphafold/2.1.2

run_alphafold.sh --use_gpu_relax=True --db_preset=reduced_dbs --fasta_paths=rcs_pdb_6VR4.fasta --max_template_date=2020-05-14 --output_dir=$(pwd -P)/output

The control options and presets for model and database:

Option Presets Note
--model_preset monomer, monomer_casp14, monomer_ptm, multimer Control which AlphaFold model to run
--db_preset full_dbs, reduced_dbs Control MSA speed/quality tradeoff

To get full-options list

run_alphafold.sh --helpshort

For very large simulations, use multiple GPUs. To make sure a job can access all of the GPU memory with alphafold/2.2.2, set the following before calling run_alphafold.sh:

export TF_FORCE_UNIFIED_MEMORY=1
run_alphafold.sh ...

Note also that not all models are parallelized over multiple GPUs; see https://github.com/deepmind/alphafold/issues/30

Use custom AlphaFold

From 2.1.2 to 2.2.2, you can use your own copy of the AlphaFold code with our pre-installed packages and databases. For example, suppose you download a copy of AlphaFold 2.2.2 to $HOME/alphafold and make some changes. Set the ALPHAFOLD_HOME variable before calling run_alphafold.sh, e.g.

module reset
module load alphafold/2.2.2

export ALPHAFOLD_HOME=$HOME/alphafold
run_alphafold.sh --db_preset=reduced_dbs --fasta_paths=rcs_pdb_6VR4.fasta --max_template_date=2020-05-14 --output_dir=$(pwd -P)/output

Batch Usage (2.0.0)

Below is the example batch script (job.txt) for an alphafold job:

#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --gpus-per-node=2

module reset
module load alphafold/2.0.0

run_alphafold.sh --preset=reduced_dbs --fasta_paths=rcs_pdb_6VR4.fasta --max_template_date=2020-05-14 --output_dir=$(pwd -P)/output

Other available job options are:

--preset=reduced_dbs, --preset=full_dbs, or --preset=casp14

 

Further Reading

Online documentation is available on the AlphaFold homepage.

Notes on AlphaFold output.

Notes on citing AlphaFold.

Tag: 
Supercomputer: 
Service: 
Fields of Science: 

Altair HyperWorks

HyperWorks is a high-performance, comprehensive toolbox of CAE software for engineering design and simulation.

Availability & Restrictions

Versions

The following versions of Altair HyperWorks are available in the following environments:

Version Owens
13 X
2017.1 X
2019.2 X*
2020.0 X
* Current Default Version

You can use module spider hyperworks to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

HyperWorks is available to all academic clients. Please contact OSC Help to request the appropriate form for access.

Publisher/Vendor/Repository and License Type

Altair Engineering, Commercial (state-wide)

Usage

Using HyperWorks through OSC installation

To use HyperWorks on the OSC clusters, first ensure that X11 forwarding is enabled as the HyperWorks workbench is a graphical application. Then, load the hyperworks module:

module load hyperworks

The HyperWorks GUI can be launched then with the following command:

hw

The Hypermesh GUI can be launched then with the following command:

hm

State-wide access for HyperWorks

For information on downloading and installing a local copy through the state-wide license, follow the steps below. The versions of HyperWorks available statewide differ from the versions available at OSC on the Owens cluster. To check for the available statewide versions, complete steps 1 through 5 below.

NOTE: To run Altair HyperWorks, your computer must have access to the internet. The software contacts the license server at OSC to check out a license when it starts and periodically during execution. The amount of data transferred is small, so network connections over modems are acceptable.

 

Usage of HyperWorks on a local machine using the statewide license will vary from installation to installation.

  1. Go to https://altairone.com/home

  2. If you have already registered with the Altair website, click on "Sign In" in the upper right hand corner of the page, enter the e-mail address that you registered with and your password and skip to step #4. Otherwise click the "Sign Up" button instead and continue with step #3.

  3. You will be prompted for some contact information and an e-mail address which will be your unique identifier.

    • IMPORTANT: The e-mail address you give must be from your academic institution. Under the statewide license agreement, registration from Ohio universities is allowed on the Altair web site. Trying to log in with a yahoo or hotmail e-mail account will not work. If you enter your university e-mail and the system will not register you, please contact OSC Help at oschelp@osc.edu.

  4. Once you have logged in, go back to the home page and click on the button labeled "Altair Marketplace", where you can then press the button "Browse the Marketplace" which takes you to the Marketplace page.

  5. From here, you can search for the app you would like to use, in this case you're looking for the one listed as "HyperWorks" which you can search for in the search bar at the upper left corner of the Marketplace page.

  6. To download, press the "Download" button that appears in the side window that pops up after selecting the HyperWorks application from the Marketplace page. From there, select the version you'd like and the target operating system it will run on, then press the button that looks like an arrow pointing down into a "U" (the download symbol). In addition to downloading the software, download the "Installation Guide and Release Notes" for instructions on how to install the software.

    • NOTE: If you are a student and you click on the HyperWorks application in the marketplace, after creating an account and logging in, but see a "Try Now" button instead of a "Download" button, then you may not have been added to the university account correctly (a known issue). To remedy this, please email support@altair.com with your name and email, and ask the support team to update the account permissions so you can download the software.

    • IMPORTANT: If you have any questions or problems, please contact OSC Help at oschelp@osc.edu, rather than HyperWorks support. The software agreement outlines that problems should first be sent to OSC. If the OSC support line cannot answer or resolve the question, they can raise the problem with Altair support. If you have any general questions, or are looking for answers to frequently asked questions, you can check the Community Forums page for possible answers or help. But if you have problems, make sure to direct them to OSC first, as stated above.

  7. Please contact OSC Help for further instruction and license server information. In order to be added to the allowed list for the state-wide software access, we will need your IP address/range of machine that will be running this software. 

  8. You need to set an environment variable (ALTAIR_LICENSE_PATH) on your local machine to point at our license server (7790@license6.osc.edu). See this link for instructions if necessary.
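
For example, on a Linux or macOS machine you could set the variable in your shell start-up file; this is only a sketch, and Windows users should set the equivalent system environment variable instead.

# Point the local HyperWorks installation at OSC's license server
export ALTAIR_LICENSE_PATH=7790@license6.osc.edu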

Further Reading

For more information about HyperWorks, see the following:

See Also

Supercomputer: 
Service: 
Fields of Science: 

Apptainer/Singularity

Apptainer/Singularity is a container system designed for use on High Performance Computing (HPC) systems. It allows users to run both Docker and Singularity containers.

From the Docker website: "A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings."

On June 21, 2022, Singularity was replaced with Apptainer, which is the same open-source project renamed to avoid conflicts with SingularityCE so that it could be accepted into the Linux Foundation. Apptainer 1.0 has the same code base as Singularity versions after 3.8.x, and still provides the singularity command (apptainer is the official command). Thus, users should be able to continue running containers on OSC systems without any issue: 

1. Containers built with Apptainer will continue to work with installations of Singularity.
2. Users will see warnings about SINGULARITY_ and SINGULARITYENV_ environment variables.
    A future version of Apptainer may stop supporting environment-variable compatibility, so we recommend
    that users add the respective APPTAINER_ and APPTAINERENV_ counterparts in their job environments.

For more detail, please visit the Singularity Compatibility page.

If you experience issues using Singularity after downtime, please contact OSC help.

Availability and Restrictions

Versions

Apptainer/Singularity is available on all OSC clusters. Only one version is available at any given time. To find out the current version:

singularity version

Check the release page for the changelog: https://github.com/apptainer/apptainer/releases

Access

Apptainer/Singularity is available to all OSC users.

Publisher/Vendor/Repository and License Type

Apptainer project, established as Apptainer a Series of LF Projects LLC; 3-clause BSD License

Usage

Set-up

No setup is required. You can use Apptainer/Singularity directly on all clusters.

Using Singularity

See HOWTO: Use Docker and Singularity Containers at OSC for information about using Apptainer/Singularity on all OSC clusters, including some site-specific caveats.  

Example:  Run a container from the Singularity hub

[owens-login01]$ singularity run shub://singularityhub/hello-world
INFO:    Downloading library image
Tacotacotaco
If you are unsure how much memory a Singularity process will require, request an entire node for the job. It is common for Singularity jobs to be killed by the OOM killer because they use too much RAM.


Known Issues

Reached your pull rate limit

Update: 06/16/2021 
Version: all

You might encounter an error while pulling a large Docker image:

ERROR: toomanyrequests: Too Many Requests.

or

You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits.

On November 20, 2020, Docker Hub put rate limits on anonymous and free authenticated pull requests. The rate limits for anonymous and authenticated pulls are 100 per 6 hours and 200 per 6 hours, respectively. Anonymous users have limits enforced via IP. Since all compute nodes at OSC share the same IP, the anonymous pull rate limit is shared by all OSC users who are not authenticated. 

If you encounter this error and want to get rid of it, please consider setting up authenticated access to  Docker Hub: https://apptainer.org/docs/user/main/endpoint.html?highlight=endpoint#ma....
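
As a sketch of one possible set-up (see the Apptainer documentation linked above for the authoritative instructions), Docker Hub credentials can be supplied through environment variables before pulling; the username and token below are placeholders.

# Placeholder credentials; use your own Docker Hub username and an access token
export APPTAINER_DOCKER_USERNAME=<your-dockerhub-username>
export APPTAINER_DOCKER_PASSWORD=<your-dockerhub-access-token>
singularity pull docker://ubuntu:22.04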

 

Failed to pull a large Docker image

Update: 05/21/2019 
Version: all

You might encounter an error while pulling a large Docker image:

[owens-login01]$ singularity pull docker://qiime2/core
FATAL: Unable to pull docker://qiime2/core While running mksquashfs: signal: killed

The process could be killed because the image is cached in the home directory, which is a slower file system, or because the image size exceeds the single-file size limit.

The solution is to use other file systems like /fs/ess/scratch and $TMPDIR for caches and temp files to build the squashfs filesystem:

[owens-login01]$ sinteractive -n 1 -A PAS1234 
bash-4.2$ export APPTAINER_CACHEDIR=$TMPDIR
bash-4.2$ export APPTAINER_TMPDIR=$TMPDIR 
bash-4.2$ singularity pull docker://qiime2/core:2019.1
INFO:    Converting OCI blobs to SIF format
INFO:    Starting build...
...
INFO:    Creating SIF file...
bash-4.2$ exit

Failed to run a container directly or pull an image from Singularity or Docker hub

Update: 03/08/2019 
Version: all

You might encounter an error while running a container directly from a hub:

[owens-login01]$ singularity run shub://vsoch/hello-world
Progress |===================================| 100.0%
FATAL: container creation failed: mount error: can't mount image /proc/self/fd/13: failed to find loop device: could not attach image file too loop device: No loop devices available

One solution is to remove the cached Singularity images from the local cache directory $HOME/.apptainer/cache:

singularity cache clean

Alternatively, you can change the Singularity cache directory to a different location by setting the variable APPTAINER_CACHEDIR. For example, in a batch job:

#!/bin/bash
#SBATCH --job-name="singularity_test"
#SBATCH --ntasks=1

export APPTAINER_CACHEDIR=$TMPDIR
singularity run shub://vsoch/hello-world

Workshop

Further Reading

Supercomputer: 

BLAS

The BLAS (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations.

Availability and Restrictions

Access

A highly optimized implementation of the BLAS is available on all OSC clusters as part of the Intel Math Kernel Library (MKL). We recommend that you use MKL rather than building the BLAS for yourself. MKL is available to all OSC users.

Usage

See OSC's MKL software page for usage information. Note that there is no library named libblas.a or libblas.so. The flag "-lblas" on your link line will not work. You should modify your makefile or build script to link to the MKL libraries instead.
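
As a minimal sketch, assuming the classic Intel compilers (which provide the -mkl convenience flag), a link line might look like the following rather than using -lblas; see the MKL page for the exact flags recommended on each cluster.

# Link against MKL's BLAS instead of -lblas (Intel compiler convenience flag)
icc -O2 -o myprog myprog.c -mkl=sequential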

Further Reading

Service: 
Technologies: 
Fields of Science: 

BLAST

The BLAST programs are widely used tools for searching DNA and protein databases for sequence similarity to identify homologs to a query sequence. While often referred to as just "BLAST", this can really be thought of as a set of programs: blastp, blastn, blastx, tblastn, and tblastx.

Availability & Restrictions

Versions

The following versions of BLAST are available on OSC systems: 

Version Owens Pitzer
2.4.0+ X  
2.8.0+   X
2.8.1+ X  
2.10.0+ X* X*
2.11.0+ X X
2.13.0+ X X
* Current Default Version

You can use module spider blast to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

If you need to use blastx, you will need to load one of the C++ implementation modules of BLAST (any version with a "+").

Access

BLAST is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

National Institutes of Health, Open source

Usage

Set-up

To load BLAST, type the following into the command line:

module load blast

Then create a resource file .ncbirc, and put it under your home directory.
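
A minimal .ncbirc might look like the following sketch; the BLASTDB path is a placeholder, since on OSC systems the database location is normally provided by the blast-database modules described below.

# Create a minimal ~/.ncbirc (the BLASTDB path below is a placeholder)
cat > $HOME/.ncbirc << 'EOF'
[BLAST]
BLASTDB=/path/to/your/blast/databases
EOF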

Using BLAST

The five flavors of BLAST mentioned above perform the following tasks:

  • blastp: compares an amino acid query sequence against a protein sequence database

  • blastn: compares a nucleotide query sequence against a nucleotide sequence database

  • blastx: compares the six-frame conceptual translation products of a nucleotide query sequence (both strands) against a protein sequence database

  • tblastn: compares a protein query sequence against a nucleotide sequence database dynamically translated in all six reading frames (both strands).

  • tblastx: compares the six-frame translations of a nucleotide query sequence against the six-frame translations of a nucleotide sequence database. (Due to the nature of tblastx, gapped alignments are not available with this option)

NCBI BLAST Database

Information on the NCBI BLAST database can be found at https://www.osc.edu/resources/available_software/scientific_database_list/blast_database.

We provide local access to the nt and refseq_protein databases. You can access the databases by loading the desired blast-database modules. If you need other databases, please send a request email to OSC Help.

Batch Usage

A sample batch script on Owens and Pitzer is below:

#!/bin/bash
## --ntasks-per-node can be increased up to 48 on Pitzer
#SBATCH --nodes=1 --ntasks-per-node=28 
#SBATCH --time=00:10:00
#SBATCH --job-name Blast
#SBATCH --account=<project-account>

module load blast
module load blast-database/2018-08

cp 100.fasta $TMPDIR
cd $TMPDIR

tblastn -db nt -query 100.fasta -num_threads 16 -out 100_tblastn.out

cp 100_tblastn.out $SLURM_SUBMIT_DIR

Further Reading

Supercomputer: 
Service: 
Fields of Science: 

BWA

BWA is a software package for mapping low-divergent sequences against a large reference genome, such as the human genome. It consists of three algorithms: BWA-backtrack, BWA-SW and BWA-MEM.

Availability and Restrictions

Versions

The following versions of BWA are available on OSC clusters:

Version Owens Pitzer
0.7.17-r1198 X* X*
* Current default version

You can use module spider bwa to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

BWA is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Li H. and Durbin R., Open source

Usage

Usage on Owens

Set-up

To configure your environment for use of BWA, run the following command: module load bwa. The default version will be loaded. To select a particular BWA version, use module load bwa/version. For example, use module load bwa/0.7.13 to load BWA 0.7.13.

Usage on Pitzer

Set-up

To configure your environment for use of BWA, run the following command: module load bwa. The default version will be loaded.
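
As a minimal sketch of a typical BWA-MEM workflow (the reference and read file names are placeholders), you might index a reference once and then align reads:

module load bwa
# Build the BWA index for a reference genome (placeholder file name)
bwa index reference.fa
# Align paired-end reads with BWA-MEM using 4 threads and write SAM output
bwa mem -t 4 reference.fa reads_1.fq reads_2.fq > aligned.sam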

Further Reading

Supercomputer: 
Service: 
Fields of Science: 

BamTools

BamTools provides both a programmer's API and an end-user's toolkit for handling BAM files.

Availability and Restrictions

Versions

The following versions of BamTools are available on OSC clusters:

Version Owens Pitzer
2.2.2  X*  
2.3.0   X*
* Current default version

You can use module spider bamtools to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

BamTools is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Derek Barnett, Erik Garrison, Gabor Marth, and Michael Stromberg/ Open Source

Usage

Usage on Owens

Set-up

To configure your environment for use of BamTools, run the following command: module load bamtools. The default version will be loaded. To select a particular BamTools version, use module load bamtools/version. For example, use module load bamtools/2.2.2 to load BamTools 2.2.2.

Usage on Pitzer

Set-up

To configure your environment for use of BamTools, run the following command: module load bamtools. The default version will be loaded.
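
As a brief sketch (file names are placeholders), typical BamTools toolkit commands look like:

module load bamtools
# Print summary statistics for a BAM file (placeholder name)
bamtools stats -in sample.bam
# Index the BAM file so region-based tools can use it
bamtools index -in sample.bam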

Further Reading

Supercomputer: 
Service: 
Fields of Science: 

Bismark

Bismark is a program to map bisulfite treated sequencing reads to a genome of interest and perform methylation calls in a single step.

Availability and Restrictions

Versions

The following versions of Bismark are available on OSC clusters:

Version Owens Pitzer
0.22.1 X* X*
0.22.3 X X
* Current default version

You can use module spider bismark to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

Bismark is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Babraham Bioinformatics, GNU GPL v3

Usage

Usage on Owens

Set-up

To configure your environment for use of Bismark, run the following command: module load bismark. The default version will be loaded. To select a particular Bismark version, use module load bismark/version. For example, use module load bismark/0.22.1 to load Bismark 0.22.1.

Usage on Pitzer

Set-up

To configure your environment for use of Bismark, run the following command: module load bismark. The default version will be loaded. To select a particular Bismark version, use module load bismark/version. For example, use module load bismark/0.22.1 to load Bismark 0.22.1.
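
As a minimal sketch of the usual Bismark workflow (directory and file names are placeholders), you first prepare a bisulfite-converted genome index and then align reads; note that Bismark also requires an aligner such as Bowtie2 in your environment.

module load bowtie2 bismark
# Build the bisulfite genome index (run once per genome; placeholder directory)
bismark_genome_preparation /path/to/genome_folder
# Map bisulfite-treated reads and call methylation in one step
bismark --genome /path/to/genome_folder sample_reads.fq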

Further Reading

Supercomputer: 
Service: 
Fields of Science: 

Blender

Blender is the free and open source 3D creation suite. It supports the entirety of the 3D pipeline—modeling, rigging, animation, simulation, rendering, compositing and motion tracking, even video editing and game creation.

Availability and Restrictions

Versions

The following versions of Blender are available on OSC systems: 

Version Owens Pitzer
2.79 X*  
2.91 X X*
3.6.3 X X
* Current default version

You can use module spider blender to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

Blender is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Blender Foundation, Open source

Usage

Set-up for Blender 3.6.3

module load blender/3.6.3

Using Blender 3.6.3

To run software-accelerated Blender, run either of the following equivalent commands:

apptainer exec $BLENDER_IMG blender
apptainer exec $BLENDER_IMG blender-softwaregl

Set-up for Blender 2.X

On Pitzer or Owens-Desktop 'vis' or 'any' node type, run the following command:

module load blender

Using Blender 2.X

To run the hardware-rendering version of Blender, connect to OSC OnDemand and launch a virtual desktop, either a Lightweight Desktop or an Interactive HPC 'vis' type Desktop. In the desktop, open a terminal and run Blender with VirtualGL:

module load virtualgl
vglrun blender

You can also run the software-rendering version of Blender on any Desktop type:

blender-softwaregl
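
For non-interactive rendering in a batch job, Blender can also be run headless; the following is a minimal sketch with a placeholder .blend file and output path.

module load blender
# Render frame 1 of a scene without opening the GUI (software rendering)
blender-softwaregl -b myscene.blend -o $SLURM_SUBMIT_DIR/render_ -F PNG -f 1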

Further Reading

 

Tag: 
Supercomputer: 
Service: 
Fields of Science: 

Boost

Boost is a set of C++ libraries that provide helpful data structures and numerous support functions for a wide range of programming tasks, such as image processing, GPU programming, and concurrent programming, along with many algorithms. Boost is portable and performs well on a wide variety of platforms.

Availability & Restrictions

Versions

The following version of Boost are available on OSC systems:

Version Owens Pitzer Ascend Notes
1.53.0 System Install     No Module Needed
1.56.0        
1.63.0 X(GI)      
1.64.0 X(GI)      
1.67.0 X(GI) X(GI)    
1.72.0 X(GI)* X(GI)*    
1.75.0 X(I) X(I)    
1.78.0     X(G)*  
* Current default version; G = available with gnu; I = available with intel

You can use module spider boost to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

Boost is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Beman Dawes, David Abrahams, Rene Rivera/ Open source

Usage

Usage on Owens

Set-up

Initializing the system for use of the Boost library is independent of the compiler you are using. To load the boost module, run the following command:

module load boost

Building With Boost

The following environment variables are setup when the Boost library is loaded:

VARIABLE USE
$BOOST_CFLAGS Use during your compilation step for C++ programs.
$BOOST_LIBS Use during your link step.

 

Below is a set of example commands used to build and run a file called example2.cpp. First copy example2.cpp and jayne.txt into your home directory with the following commands:

cp /usr/local/src/boost/boost-1_56_0/test.osc/example2.cpp ~
cp /usr/local/src/boost/boost-1_56_0/test.osc/jayne.txt ~

Then compile and test the program with the following commands:

g++ example2.cpp -o boostTest -lboost_regex
./boostTest < jayne.txt
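
The compile line above can also be written using the environment variables provided by the module, which helps when the Boost headers and libraries are not in the default search paths; this is a sketch of the same example2.cpp build.

module load boost
# Compile and link using the module-provided flags
g++ $BOOST_CFLAGS example2.cpp -o boostTest $BOOST_LIBS -lboost_regex
./boostTest < jayne.txt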

Usage on Pitzer

Set-up

Initializing the system for use of the Boost library is independent of the compiler you are using. To load the boost module, run the following command:

module load boost

Building With Boost

The following environment variables are setup when the Boost library is loaded:

VARIABLE USE
$BOOST_CFLAGS Use during your compilation step for C++ programs.
$BOOST_LIBS Use during your link step.

 

Below is a set of example commands used to build and run a file called example2.cpp. First copy example2.cpp and jayne.txt into your home directory with the following commands:

cp /usr/local/src/boost/boost-1_56_0/test.osc/example2.cpp ~
cp /usr/local/src/boost/boost-1_56_0/test.osc/jayne.txt ~

Then compile and test the program with the following commands:

g++ example2.cpp -o boostTest -lboost_regex
./boostTest < jayne.txt

Usage on Ascend

Set-up

Initializing the system for use of the Boost library is independent of the compiler you are using. To load the boost module, run the following command:

module load boost

Building With Boost

The following environment variables are setup when the Boost library is loaded:

VARIABLE USE
$BOOST_CFLAGS Use during your compilation step for C++ programs.
$BOOST_LIBS Use during your link step.

 

Below is a set of example commands used to build and run a file called example2.cpp. First copy example2.cpp and jayne.txt into your home directory with the following commands:

cp /usr/local/src/boost/boost-1_56_0/test.osc/example2.cpp ~
cp /usr/local/src/boost/boost-1_56_0/test.osc/jayne.txt ~

Then compile and test the program with the following commands:

g++ example2.cpp -o boostTest -lboost_regex
./boostTest < jayne.txt

Further Reading

 

Supercomputer: 
Service: 
Fields of Science: 

Bowtie1

Bowtie1 is an ultrafast, memory-efficient short read aligner. It aligns short DNA sequences (reads) to the human genome at a rate of over 25 million 35-bp reads per hour. Bowtie indexes the genome with a Burrows-Wheeler index to keep its memory footprint small: typically about 2.2 GB for the human genome (2.9 GB for paired-end).

Availability and Restrictions

Versions

The following versions of Bowtie1 are available on OSC clusters:

Version Owens Pitzer
1.1.2 X*  
1.2.2   X*
* Current default version

You can use module spider bowtie1 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

Bowtie1 is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Ben Langmead et al., Open source (Artistic 2.0)

Usage

Usage on Owens

Set-up

To configure your environment for use of Bowtie1, run the following command: module load bowtie1. The default version will be loaded. To select a particular Bowtie1 version, use module load bowtie1/version. For example, use module load bowtie1/1.1.2 to load Bowtie1 1.1.2.

Usage on Pitzer

Set-up

To configure your environment for use of Bowtie1, run the following command:  module load bowtie1. The default version will be loaded. 
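
As a minimal sketch (index and read file names are placeholders), a typical Bowtie1 run builds an index and then aligns reads, writing SAM output:

module load bowtie1
# Build a Bowtie1 index from a reference FASTA (placeholder names)
bowtie-build reference.fa ref_index
# Align single-end reads with 4 threads and write SAM output
bowtie -p 4 -S ref_index reads.fq aligned.sam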

Further Reading

Supercomputer: 
Service: 
Fields of Science: 

Bowtie2

Bowtie2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM Index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.

Please note that bowtie (and tophat) CANNOT run in parallel, that is, on multiple nodes.  Submitting multi-node jobs will only waste resources.  In addition you must explicitly include the '-p' option to use multiple threads on a single node.

Availability and Restrictions

Versions

The following versions of Bowtie2 are available on OSC clusters:

Version Owens Pitzer Note
2.2.9 X    
2.3.4.3   X  
2.4.1 X* X* Python 3 required for all python scripts
* Current default version

You can use module spider bowtie2 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

Bowtie2 is available to all OSC users. If you have any questions, please contact OSC Help.

Publisher/Vendor/Repository and License Type

Ben Langmead et al., Open source

Usage

Usage on Owens

Set-up

To configure your environment for use of Bowtie2, run the following command: module load bowtie2. The default version will be loaded. To select a particular Bowtie2 version, use module load bowtie2/version. For example, use  module load bowtie2/2.2.9 to load Bowtie2 2.2.9.

Usage on Pitzer

Set-up

To configure your environment for use of Bowtie2, run the following command: module load bowtie2. The default version will be loaded.
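
As a minimal sketch (file names are placeholders), a typical Bowtie2 run looks like the following; remember the -p option noted above for using multiple threads on a single node.

module load bowtie2
# Build a Bowtie2 index from a reference FASTA (placeholder names)
bowtie2-build reference.fa ref_index
# Align single-end reads with 8 threads and write SAM output
bowtie2 -p 8 -x ref_index -U reads.fq -S aligned.sam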

Further Reading

Supercomputer: 
Service: 
Fields of Science: 

CIAO

CIAO (Chandra Interactive Analysis of Observations) is an X-ray telescope analysis software package for astronomical observation, focused on the Chandra X-ray Observatory. It contains a toolset used to analyze FITS files, emphasizes data flexibility, and is commonly used in conjunction with DS9 and Sherpa.

Availability and Restrictions

Versions

CIAO is available on Pitzer and Owens Clusters. The versions currently available at OSC are:

Version Owens Pitzer Notes
4.14 X* X*  
* Current default version

You can use module spider ciao to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Publisher/Vendor/Repository and License Type

Harvard & Smithsonian (Public)

Usage

Usage on Owens

Set-up

To load the default version of CIAO module, use  module load ciao. For a list of all available CIAO versions and the format expected, type:  module spider ciao. To select a particular software version, use   module load ciao/version. For example, use  module load ciao/4.14 to load CIAO version 4.14. 

Running CIAO

The following command will start an interactive, command line version of CIAO:

ciaorun 

Type ciaorun help for a complete list of command line options.

The commands listed above will run CIAO on the login node you are connected to. As the login node is a shared resource, running scripts that require significant computational resources will impact the usability of the cluster for others. As such, you should not use interactive CIAO sessions on the login node for any significant computation. If your CIAO script requires significant time, CPU power, or memory, you should run your code via the batch system.

Batch Usage

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

 

Usage on Pitzer

Set-up

To load the default version of CIAO module, use  module load ciao. For a list of all available CIAO versions and the format expected, type:  module spider ciao. To select a particular software version, use   module load ciao/version. For example, use  module load ciao/4.14 to load CIAO version 4.14. 

Running CIAO

The following command will start an interactive, command line version of CIAO:

ciaorun 

Type ciaorun help for a complete list of command line options.

The commands listed above will run CIAO on the login node you are connected to. As the login node is a shared resource, running scripts that require significant computational resources will impact the usability of the cluster for others. As such, you should not use interactive CIAO sessions on the login node for any significant computation. If your CIAO script requires significant time, CPU power, or memory, you should run your code via the batch system.

Batch Usage

When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

Further Reading

Official documentation can be obtained from the CIAO's Website

References

  1. DS9
Supercomputer: 
Service: 
Fields of Science: 

CMake

CMake is a family of compilation tools that can be used to build, test and package software.

Availability and Restrictions

Versions

The current versions of CMake available at OSC are:

Version Owens Pitzer Ascend
2.8.12.2 X# X#  
3.1.1      
3.6.1 X    
3.7.2 X    
3.11.4 X X  
3.16.5 X X  
3.17.2 X* X*  
3.18.2     X#
3.20.5   X  
3.25.2 X X X*
* Current default version; # System version

You can use  module spider cmake  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access

CMake is available to all OSC users.

Publisher/Vendor/Repository and License Type

Aaron C. Meadows et al., Open source

Usage

Usage on Owens

Set-up

To configure your environment for use of CMake, run the following command: module load cmake. The default version will be loaded. To select a particular CMake version, use module load cmake/version. For example, use module load cmake/3.6.1 to load CMake 3.6.1. The system version of CMake can be used without loading a module.

Usage on Pitzer

Set-up

To configure your environment for use of CMake, run the following command:  module load cmake . The default version will be loaded.
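
As a brief sketch of a typical out-of-source build with a CMake-based project (directory names are placeholders):

module load cmake
# Configure and build in a separate build directory
mkdir build && cd build
cmake ..
make -j 4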

Further Reading

For more information, visit the CMake homepage.

Supercomputer: 

COMSOL

COMSOL Multiphysics (formerly FEMLAB) is a finite element analysis and solver software package for various physics and engineering applications, especially coupled phenomena, or multiphysics. It is owned and supported by COMSOL, Inc.

Availability and Restrictions

Versions

COMSOL is available on the Owens clusters. The versions currently available at OSC are:

Version Owens
52a X
53a X
5.4 X
5.5 X*
6.0 X
6.2 X
* Current default version

You can use module spider comsol  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

Access for Academic Users

COMSOL is for academic use, available only to the Ohio State University users. OSC does not provide COMSOL licenses for academic use to students and faculty outside of the Ohio State University due to licensing restrictions. If you or your institution have a network COMSOL license server, you may be able to use it on OSC. For connections to your license server from OSC, please read this document. If you need further help, please contact OSC Help.

To use COMSOL you will have to be added to the license server.  Please contact OSC Help to be added.

Access for Commercial Users

Contact OSC Help for getting access to COMSOL if you are a commercial user. 

Publisher/Vendor/Repository and License Type

Comsol Inc., Commercial

Usage

Usage on Owens

Set-up

To load the default version of COMSOL module, use  module load comsol . To select a particular software version, use   module load comsol/version . For example, use  module load comsol/52a  to load COMSOL version 5.2a. 

Batch Usage

When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the mutiple processors in the computing environment, you must submit your analysis to the batch system for execution. Batch jobs can request mutiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

Interactive Batch Session
For an interactive batch session, one can run the following command:
sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00 -L comsolscript@osc:1
which gives you 28 cores ( -N 1 -n 28 ) with 1 hour ( -t 1:00:00 ). You may adjust the numbers per your need.
Non-interactive Batch Job (Serial Run)

Assume that you have a COMSOL script file mycomsol.m in your working directory ($SLURM_SUBMIT_DIR). Below is the example batch script (job.txt) for a serial run: 

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -L comsolscript@osc:1
#SBATCH --account=<project-account>
#
# The following lines set up the COMSOL environment
#
module load comsol
#
# Use TMPDIR for best performance
cp -p mycomsol.m $TMPDIR
cd $TMPDIR
#
# Run COMSOL
#
comsol batch mycomsol
#
# Now, copy data (or move) back once the simulation has completed
#
cp -p * $SLURM_SUBMIT_DIR
Non-interactive Batch Job (Parallel Run for COMSOL 6.0 and Later)

Below is the example batch script for a parallel job using COMSOL 6.0 or later versions:

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=2 --ntasks-per-node=4 --cpus-per-task=7
#SBATCH -L comsolscript@osc:1
#SBATCH --account=<project-account>

module load comsol
echo "--- Copy Input Files to TMPDIR and Change Disk to TMPDIR"
cp input_cluster.mph $TMPDIR
cd $TMPDIR

echo "--- COMSOL run"
comsol batch -mpibootstrap slurm -inputfile input_cluster.mph -outputfile output_cluster.mph
echo "--- Copy files back"
cp output_cluster.mph output_cluster.mph.status ${SLURM_SUBMIT_DIR}
echo "---Job finished at: 'date'"
echo "---------------------------------------------"

Note:

  • Use the "-mpibootstrap slurm" option to take the resource specification from the SBATCH directives, thus eliminating the -nnhost, -nn, and -np options.  For more details see https://www.comsol.com/support/knowledgebase/1001
  • Copy files from your directory to $TMPDIR.
  • Provide the name of the input file and output file.
OLD Non-interactive Batch Job (Parallel Run for COMSOL 4.3 and Later)

As of version 4.3, it is not necessary to start up MPD before launching a COMSOL job. Below is the example batch script ( job.txt ) for a parallel run using COMSOL 4.3 or later versions:

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
#SBATCH -L comsolscript@osc:1
#SBATCH --account=<project-account>

module load comsol
echo "--- Copy Input Files to TMPDIR and Change Disk to TMPDIR"
cp input_cluster.mph $TMPDIR
cd $TMPDIR

echo "--- COMSOL run"
comsol -nn 2 batch -mpirsh ssh -inputfile input_cluster.mph -outputfile output_cluster.mph
echo "--- Copy files back"
cp output_cluster.mph output_cluster.mph.status ${SLURM_SUBMIT_DIR}
echo "---Job finished at: 'date'"
echo "---------------------------------------------"

Note:

  • Set nodes to 2 and ppn to 28 ( --nodes=2 --ntasks-per-node=28). You can change the values per your need.
  • Use "-mpirsh ssh" option for multi-node jobs
  • Copy files from your directory to $TMPDIR.
  • Provide the name of the input file and output file.

Available COMSOL modules with OSC's academic license

Note: Last updated 02/05/24

AC/DC Module
Battery Design Module
CAD Import Module
CFD Module
Chemical Reaction Engineering Module
Heat Transfer Module
LiveLink for MATLAB
MEMS Module
Microfluidics Module
Particle Tracing Module
RF Module
Semiconductor Module
Structural Mechanics Module
Subsurface Flow Module

    Further Reading

    Supercomputer: 
    Service: 

    Interactive Parallel COMSOL Job

    This documentation discusses how to set up an interactive parallel COMSOL job at OSC. The following example demonstrates the process of using COMSOL version 5.1 on Oakley. Depending on the version of COMSOL and the cluster you work on, there might be some differences from the example. Feel free to contact OSC Help if you have any questions. 

    • Launch COMSOL GUI application following the instructions on this page. Get the information on the node(s) allocated to your job and save it in the file named hostfile using the following command:

     

    cat $PBS_NODEFILE | uniq > hostfile
    

    Make sure the hostfile is located in the same directory where your COMSOL input file is located.

    • Open the COMSOL GUI application. To enable the cluster computing feature, click the show button and select Advanced Study Options, as shown in the picture below:

    Advanced study

    • In the Model Builder, right-click a Study node and select Cluster Computing, as shown in the picture below:

    cluster computing

    • In the Cluster Computing node's setting window, select General from the Cluster type list. Provide the name of Host file as hostfile. Browse to the directory where your COMSOL input file is located, as shown in the picture below:

    setting

    • Save all the settings. Then you should be able to run an interactive parallel COMSOL job at OSC
    Supercomputer: 

    CP2K

    CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modeling methods such as DFT using the mixed Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods and classical force fields. CP2K can do simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimization, and transition state optimization using NEB or dimer method.

    Availability and Restrictions

    Versions

    CP2K is available on the OSC clusters. These are the versions currently available:

    VERSION Owens Pitzer Notes
    6.1* X X   (owens) gnu/7.3.0 intelmpi/2018.3
      (pitzer) gnu/4.8.5 openmpi/3.1.6-hpcx
      (pitzer) gnu/7.3.0 intelmpi/2018.3
      (pitzer) gnu/7.3.0 openmpi/3.1.4-hpcx
    7.1 X X   gnu/8.4.0 intelmpi/2019.7
    2022.2   X   gnu/11.2.0 openmpi/4.1.4-hpcx

    You can use module spider cp2k to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    CP2K is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    CP2K, GNU General Public License

    Usage

    IMPORTANT NOTE: You need to load the prerequisite compiler and MPI modules before you can load CP2K. To determine those modules, use module spider cp2k/{version}.
    We have found that some types of CP2K jobs fail or crash nodes when using cp2k.popt and cp2k.psmp from the MVAPICH2 and OpenMPI builds, due to unstable numerical issues. We recommend using the Intel MPI builds, which are more stable, unless you experience other issues.

    Usage

    Set-up

    CP2K usage is controlled via modules. Load one of the CP2K modulefiles at the command line, in your shell initialization script, or in your batch scripts. You need to load the prerequisite compiler and MPI modules before you can load CP2K. To determine those modules, use module spider cp2k/7.1

    Batch Usage

    When you log into pitzer.osc.edu you are actually logged into the login node. To gain access to the vast resources in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request mutiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.  Batch jobs run on the compute nodes of the system and not on the login node. It is desirable for big problems since more resources can be used.

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -n 1 -t 00:20:00
    

    which requests one core (-n 1), for a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.

    Non-interactive Batch Job

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script for a parallel run:

    #!/bin/bash 
    #SBATCH --nodes=2
    #SBATCH --time=1:00:0
    #SBATCH --account=<project-account>
    #SBATCH --gres=pfsdir
    
    module load  gnu/8.4.0 intelmpi/2019.7
    module load cp2k/7.1
    module list
    
    
    cp job.inp $PFSDIR/job.inp
    cd $PFSDIR
    srun cp2k.popt -i job.inp -o job.out.$SLURM_JOB_ID 
    cp job.out.$SLURM_JOB_ID $SLURM_SUBMIT_DIR/job.out.$SLURM_JOB_ID
    

    This script uses the Scratch storage system, which is designed to synchronize storage across nodes temporarily, more information is available under the storage documentation in the "Further reading" section.

    Known Issues

    Update: 06/10/2021 
    Version: 6.1

    CP2K 6.1 Floating-point exception on Pitzer Cascade Lakes (48-core) node:

    Program received signal SIGFPE: Floating-point exception - erroneous arithmetic operation.
    
    Backtrace for this error:

    This could be a bug in libxsmm 1.9.0, which was released on Mar 15, 2018 (Cascade Lake launched in 2019). The bug has been fixed in cp2k/7.1.

    Further Reading

    General documentation is available from the CP2K website.
    Scratch Storage documentation is available from the Storage Guide

     
    Supercomputer: 
    Service: 

    CUDA

    CUDA™ (Compute Unified Device Architecture) is a parallel computing platform and programming model developed by Nvidia that enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).

    Availability and Restrictions

    Versions

    CUDA is available on the clusters supporting GPUs. The versions currently available at OSC are:

    Version Owens Pitzer Ascend cuDNN library
    8.0.44 X     5.1.5
    8.0.61 X     6.0.21
    9.0.176 X X   7.3.0
    9.1.85 X X   6.0.21 and 7.0.5
    9.2.88 X X   7.1.4
    10.0.130 X X   7.2.4
    10.1.168 X X   7.6.5
    10.2.89 X* X*   7.6.5
    11.0.3 X X X 8.0.5
    11.1.1 X X   8.0.5
    11.2.2 X X   8.1.1
    11.5.2 X X   8.3.2
    11.6.1 X X X 8.3.2
    11.6.2     X  
    11.7.1     X*  
    11.8.0 X X X 8.8.1
    * Current default version
    From CUDA 11 onwards, applications compiled with a CUDA major release can have minor version compatibility, meaning you may run a CUDA 11 application with any CUDA 11.x toolkit. See https://docs.nvidia.com/deploy/cuda-compatibility/#minor-version-compati... for more detail.

    You can use module spider cuda to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    CUDA is available for use by all OSC users.

    Publisher/Vendor/Repository and License Type

    Nvidia, Freeware 

    Usage

    Usage on Owens

    Set-up on Owens

    To load the default version of CUDA module, use module load cuda. To select a particular software version, use   module load cuda/version

    GPU Computing SDK

    The NVIDIA GPU Computing SDK provides hundreds of code samples and covers a wide range of applications/techniques to help you get started on the path of writing software with CUDA C/C++ or DirectCompute. 

    Programming in CUDA

    Please visit the following link to learn programming in CUDA, http://developer.nvidia.com/cuda-education-training. The link also contains tutorials on optimizing CUDA codes to obtain greater speedups.

    Compiling CUDA Code

    Many of the tools loaded with the CUDA module can be used regardless of the compiler modules loaded. However, CUDA codes are compiled with nvcc, which depends on the GNU compilers. In particular, if you are trying to compile CUDA codes and encounter a compiler error such as

    #error -- unsupported GNU version! gcc versions later than X are not supported!
    

    then you need to load an older GNU compiler with the module load gnu/version command (if compiling standard C code with GNU compilers) or the module load gcc-compatibility/version command (if compiling standard C code with Intel or PGI compilers).

    One can type module show cuda-version-number to view the list of environment variables.
    To compile CUDA code contained in a file, let's say mycudaApp.cu, the following could be done after loading the appropriate CUDA module: nvcc -o mycudaApp mycudaApp.cu. This will create an executable named mycudaApp.

    The environment variable OSC_CUDA_ARCH defined in the module can be used to specify the CUDA_ARCH, to compile with nvcc -o mycudaApp -arch=$OSC_CUDA_ARCH mycudaApp.cu.

    Important: The devices are configured in exclusive mode. This means that 'cudaSetDevice' should NOT be used if requesting one GPU resource. Once the first call to CUDA is executed, the system will figure out which device it is using. If both cards on a node are in use by a single application, please use 'cudaSetDevice'.

    Debugging CUDA code

    cuda-gdb can be used to debug CUDA codes. module load cuda will make it available to you. For more information on how to use the CUDA-GDB please visit http://developer.nvidia.com/cuda-gdb.

    Detecting memory access errors

    CUDA-MEMCHECK could be used for detecting the source and cause of memory access errors in your program. For more information on how to use CUDA-MEMCHECK please visit http://developer.nvidia.com/cuda-memcheck.

    Setting the GPU compute mode on Owens

    The GPUs on Owens can be set to different compute modes as listed here.

    The default compute mode is the default setting on our GPU nodes (--gpu_cmode=shared), so you don't need to specify it if you require this mode. With this mode, multiple CUDA processes across GPU nodes are allowed, e.g., CUDA processes via MPI. So, if you need to run an MPI-CUDA job, just keep the default compute mode. Should you need to use another compute mode, use --gpu_cmode to specify the mode setting. For example:

    --nodes=1 --ntasks-per-node=28 --gpus-per-node=1 --gpu_cmode=exclusive
    

    Batch Usage on Owens

    When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the mutiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request mutiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -N 1 -n 28 -g 1 -t 00:20:00 
    

    which requests one whole node with 28 cores (-N 1 -n 28), for a walltime of 20 minutes (-t 00:20:00), with one gpu (-g 1). You may adjust the numbers per your need.

    Non-interactive Batch Job (Serial Run)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script (job.txt) for a serial run:

    #!/bin/bash
    #SBATCH --time=01:00:00
    #SBATCH --nodes=1 --ntasks-per-node=1 --gpus-per-node=1
    #SBATCH --job-name compute
    #SBATCH --account=<project-account>
    
    module load cuda
    cd $HOME/cuda
    cp mycudaApp $TMPDIR
    cd $TMPDIR
    ./mycudaApp

    Usage on Pitzer

    Set-up on Pitzer

    To load the default version of CUDA module, use module load cuda.

    GPU Computing SDK

    The NVIDIA GPU Computing SDK provides hundreds of code samples and covers a wide range of applications/techniques to help you get started on the path of writing software with CUDA C/C++ or DirectCompute. 

    Programming in CUDA

    Please visit the following link to learn programming in CUDA, http://developer.nvidia.com/cuda-education-training. The link also contains tutorials on optimizing CUDA codes to obtain greater speedups.

    Compiling CUDA Code

    Many of the tools loaded with the CUDA module can be used regardless of the compiler modules loaded. However, CUDA codes are compiled with nvcc, which depends on the GNU compilers. In particular, if you are trying to compile CUDA codes and encounter a compiler error such as

    #error -- unsupported GNU version! gcc versions later than X are not supported!
    

    then you need to load an older GNU compiler with the module load gnu/version command (if compiling standard C code with GNU compilers) or the module load gcc-compatibility/version command (if compiling standard C code with Intel or PGI compilers).

    One can type module show cuda-version-number to view the list of environment variables.
    To compile CUDA code contained in a file, let's say mycudaApp.cu, the following could be done after loading the appropriate CUDA module: nvcc -o mycudaApp mycudaApp.cu. This will create an executable named mycudaApp.

    The environment variable OSC_CUDA_ARCH defined in the module can be used to specify the CUDA_ARCH, to compile with nvcc -o mycudaApp -arch=$OSC_CUDA_ARCH mycudaApp.cu.

    Important: The devices are configured in exclusive mode. This means that 'cudaSetDevice' should NOT be used if requesting one GPU resource. Once the first call to CUDA is executed, the system will figure out which device it is using. If both cards on a node are in use by a single application, please use 'cudaSetDevice'.

    Debugging CUDA code

    cuda-gdb can be used to debug CUDA codes. module load cuda will make it available to you. For more information on how to use the CUDA-GDB please visit http://developer.nvidia.com/cuda-gdb.

    Detecting memory access errors

    CUDA-MEMCHECK could be used for detecting the source and cause of memory access errors in your program. For more information on how to use CUDA-MEMCHECK please visit http://developer.nvidia.com/cuda-memcheck.

    Setting the GPU compute mode on Pitzer

    The GPUs on Pitzer can be set to different compute modes as listed here.

    The default compute mode is the default setting on our GPU nodes (--gpu_cmode=shared), so you don't need to specify it if you require this mode. With this mode, multiple CUDA processes across GPU nodes are allowed, e.g., CUDA processes via MPI. So, if you need to run an MPI-CUDA job, just keep the default compute mode. Should you need to use another compute mode, use --gpu_cmode to specify the mode setting. For example:

    --nodes=1 --ntasks-per-node=40 --gpus-per-node=1 --gpu_cmode=exclusive
    

    Batch Usage on Pitzer

    When you log into pitzer.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the mutiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request mutiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -N 1 -n 40 -g 2 -t 00:20:00
    

    which requests one whole node (-N 1), 40 cores (-n 40), 2 gpus (-g 2), and a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.

    Non-interactive Batch Job (Serial Run)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script (job.txt) for a serial run:

    #!/bin/bash
    #SBATCH --time=01:00:00
    #SBATCH --nodes=1 --ntasks-per-node=1 --gpus-per-node=1
    #SBATCH --job-name Compute
    #SBATCH --account=<project-account>
    
    module load cuda
    cd $HOME/cuda
    cp mycudaApp $TMPDIR
    cd $TMPDIR
    ./mycudaApp
    

    GNU Compiler Support for NVCC 

    CUDA Version Max supported GCC version
    9.2.88 - 10.0.130 7
    10.1.168 - 10.2.89 8
    11.0 9
    11.1 - 11.4.0 10
    11.4.1 - 11.8 11
    12.0 12.1

    Further Reading

    Online documentation is available on the CUDA homepage.

    Compiler support for the latest version of CUDA is available here.

    CUDA optimization techniques.

     

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Caffe

    Caffe is "a fast open framework for deep learning."

    From their README:

    Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Yangqing Jia created  the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.
    

    Caffe also includes interfaces for both Python and Matlab, which have been built but have not been tested.

    Availability and Restrictions

    Versions

    The following versions of Caffe are available on OSC clusters:

    Version Owens
    1.0.0-rc3 X*
    * Current Default Version

    You can use module spider caffe to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    The current version of Caffe on Owens requires cuda/8.0.44 for GPU calculations.

    Access 

    Caffe is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Berkeley AI Research, Open source

    Usage

    Usage on Owens

    Setup on Owens

    To configure the Owens cluster for the use of Caffe, use the following commands:

    module load caffe
    

    Batch Usage on Owens

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations for Owens, and Scheduling Policies and Limits for more info.  In particular, Caffe should be run on a GPU-enabled compute node.

    An Example of Using Caffe with MNIST Training Data on Owens

    Below is an example batch script (job.txt) for using Caffe; see http://caffe.berkeleyvision.org/gathered/examples/mnist.html for a detailed explanation.

    #!/bin/bash
    #SBATCH --job-name=Caffe
    #SBATCH --nodes=1 --ntasks-per-node=28 --gpus-per-node=1
    #SBATCH --time=30:00
    #SBATCH --account <project-account>
    
    . /etc/profile.d/lmod.sh
    # Load the modules for Caffe
    ml caffe
    # Migrate to job temp directory and copy folders over
    cd $TMPDIR
    cp -r $CAFFE_HOME/{examples,data} .
    # Download, create, train
    ./data/mnist/get_mnist.sh
    ./examples/mnist/create_mnist.sh
    ./examples/mnist/train_lenet.sh
    # Serialize log files
    echo; echo 'LOG 1'
    cat convert_mnist_data.bin.$(hostname)*
    echo; echo 'LOG 2'
    cat caffe.INFO
    echo; echo 'LOG 3'
    cat convert_mnist_data.bin.INFO
    echo; echo 'LOG 4'
    cat caffe.$(hostname).*
    cp examples/mnist/lenet_iter_10000.* $SLURM_SUBMIT_DIR
    

    In order to run it via the batch system, submit the job.txt file with the following command:

    sbatch job.txt
    

    Further Reading

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Cell Ranger

    Cell Ranger is a set of analysis pipelines for single-cell RNA sequencing data. Its pipelines process raw sequencing output, align reads, generate feature-barcode (gene-cell) matrices, and perform downstream analyses such as clustering and gene expression analysis.

    Availability and Restrictions

    Versions

    Cell Ranger is available on the Pitzer Cluster. The versions currently available at OSC are:

    Version Pitzer Notes
    7.0.0 X*  
    7.2.0 X  
    * Current Default Version

    You can use module spider cellranger to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Cell Ranger is available only to academic OSC users. Please review the license agreement and the 10x Privacy Policy before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The 10x Genomics group, Closed source (academic)

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of Cell Ranger, run the following command: module load cellranger. The default version will be loaded. To select a particular Cell Ranger version, use module load cellranger/version. For example, use module load cellranger/7.0.0 to load Cell Ranger 7.0.0.
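
    As an illustrative sketch only (the reference transcriptome, FASTQ directory, and sample name below are placeholders for your own data, not OSC-provided files), a batch job running the cellranger count pipeline might look like:

    #!/bin/bash
    #SBATCH --job-name=cellranger-count
    #SBATCH --nodes=1 --ntasks-per-node=40
    #SBATCH --time=12:00:00
    #SBATCH --account=<project-account>

    module load cellranger/7.0.0
    cd $SLURM_SUBMIT_DIR
    # the paths below are placeholders for your own reference and FASTQ data
    cellranger count --id=sample1_run \
        --transcriptome=/path/to/refdata-gex \
        --fastqs=/path/to/fastq_dir \
        --sample=sample1 \
        --localcores=40 --localmem=160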

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Clara Parabricks

    Clara Parabricks is a powerful toolkit designed for genomic analysis. It is primarily designed for GPU computation.

    Availability and Restrictions

    Versions

    Clara Parabricks is available on Pitzer and Owens Clusters. The versions currently available at OSC are the following:

    Version Owens Pitzer Notes
    4.0.0-1 X* X*  
    * Current default version

    You can use module spider clara-parabricks  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Clara-Parabricks is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Open source

    Usage 

    Usage on Owens

    Set-up

    To load the module for the default version of Parabricks, which initializes your environment for the Clara-Parabricks application, use module load clara-parabricks. To select a particular software version, use module load clara-parabricks/version. For example, use module load clara-parabricks/4.0.0-1 to load Parabricks version 4.0.0-1; and use module help clara-parabricks/4.0.0-1 to view details, such as compiler prerequisites, additional modules required for specific executables, the suffixes of executables, etc.; some versions require specific prerequisite modules, and such details may be obtained with the command   module spider clara-parabricks/version.

    Batch Usage

    When you log into Owens you are actually connected to a login node. To access the compute nodes, you must submit a job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00
    
    which gives you one node with 28 cores (-N 1 -n 28), with 1 hour (-t 1:00:00). You may adjust the numbers per your need.
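
    As a hedged sketch (the reference and FASTQ file names are placeholders for your own data), a GPU batch job running the pbrun fq2bam pipeline could look like:

    #!/bin/bash
    #SBATCH --job-name=parabricks-fq2bam
    #SBATCH --nodes=1 --ntasks-per-node=28 --gpus-per-node=1
    #SBATCH --time=2:00:00
    #SBATCH --account=<project-account>

    module load clara-parabricks/4.0.0-1
    # reference and read files below are placeholders for your own data
    pbrun fq2bam --ref ref.fasta \
        --in-fq sample_R1.fastq.gz sample_R2.fastq.gz \
        --out-bam sample.bam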

    Usage on Pitzer

    Set-up

    To load the module for the default version of Parabricks, which initializes your environment for the Clara-Parabricks application, use module load clara-parabricks. To select a particular software version, use module load clara-parabricks/version. For example, use module load clara-parabricks/4.0.0-1 to load Parabricks version 4.0.0-1; and use module help clara-parabricks/4.0.0-1 to view details, such as, compiler prerequisites, additional modules required for specific executables, the suffixes of executables, etc.; some versions require specific prerequisite modules, and such details may be obtained with the command   module spider clara-parabricks/version.

    Batch Usage

    When you log into Pitzer you are actually connected to a login node. To access the compute nodes, you must submit a job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 40 -t 1:00:00
    
    which gives you one node with 40 cores (-N 1 -n 40), with 1 hour (-t 1:00:00). You may adjust the numbers per your need.

    Further Reading

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Clustal W

    Clustal W is a multiple sequence alignment program written in C++.

    Availability and Restrictions

    Versions

    The following versions of Clustal W are available on OSC clusters:

    Version Owens Pitzer
    2.1 X* X*
    * Current default version

    You can use module spider clustalw to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Clustal W is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    GNU Lesser GPL.

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Clustal W, run the following command: module load clustalw. The default version will be loaded. To select a particular Clustal W version, use module load clustalw/version. For example, use module load clustalw/2.1 to load Clustal W 2.1.
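
    For example, assuming the executable provided by the module is clustalw2 and that seqs.fasta is your own input file (both are assumptions for illustration), an alignment could be run as:

    module load clustalw/2.1
    clustalw2 -INFILE=seqs.fasta -ALIGN -OUTFILE=seqs.aln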

    Usage on Pitzer

    Set-up

    To configure your environment for use of Clustal W, run the following command: module load clustalw. The default version will be loaded. To select a particular Clustal W version, use module load clustalw/version. For example, use module load clustalw/2.1 to load Clustal W 2.1.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Connectome

    Connectome is an open-source visualization and discovery tool used to explore data generated by the Human Connectome Project. The distribution includes wb_view, a GUI-based visualization platform, and wb_command, a command-line program for performing a variety of algorithmic tasks using volume, surface, and grayordinate data.

    Availability and Restrictions

    Versions

    The following versions are available on OSC clusters:

    Version Pitzer
    1.5.0 X*
    * Current default version

    You can use module spider connectome to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Connectome is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Connectome Workbench is made available via the GNU General Public License, version 2. The full license is available from here.

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of connectome, run the following command: module load connectome. The default version will be loaded. To select a particular connectome version, use module load connectome/version. For example, use module load connectome/1.5.0 to load Connectome 1.5.0.

    If you want the hardware-accelerated 3D graphics, use these on a compute node with a GPU:

    module load virtualgl/2.6.5
    module load connectome
    vglrun wb_view

    This command will open the GUI environment, so we recommend running it in an OnDemand VDI or Desktop session.
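
    The bundled wb_command program can also be used non-interactively; for example (the data file name is a placeholder):

    module load connectome
    wb_command -file-information mydata.dscalar.nii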

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Connectome Workbench

    Connectome Workbench is an open source, freely available visualization and analysis tool for neuroimaging data, especially data generated by the Human Connectome Project.

    Availability and Restrictions

    Versions

    Connectome Workbench is available on Owens and Pitzer clusters. These are the versions currently available:

    Version Owens Pitzer Notes
    1.3.2  X* X*  
    1.5.0 X X  
    * Current default version

    You can use module spider connectome-workbench to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Connectome Workbench is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    Washington University School of Medicine, GPL

    Usage

    Set-up

    To configure your environment for use of the workbench, run the following command:  module load connectome-workbench virtualgl. The default version will be loaded; the virtualgl module is required as well on some platforms. To select a particular version, use  module load connectome-workbench/version. For example, use  module load connectome-workbench/1.3.2 to load Connectome Workbench 1.3.2.
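
    For hardware-accelerated 3D graphics on a GPU compute node, a sketch similar to the Connectome section above should apply (assuming the VirtualGL wrapper vglrun is provided by the virtualgl module):

    module load connectome-workbench virtualgl
    vglrun wb_view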

    Further Reading

    General documentation is available from the Connectome Workbench homepage.

     
    Supercomputer: 
    Service: 
    Fields of Science: 

    Cufflinks

    Cufflinks is a program that analyzes RNA-Seq samples. It assembles aligned RNA-Seq reads into a set of transcripts, then inspects the transcripts to estimate abundances and test for differential expression and regulation in the RNA-Seq reads.

    Availability and Restrictions

    Versions

    Cufflinks is available on the Owens Cluster. The versions currently available at OSC are:

    Version Owens
    2.2.1 X*
    * Current Default Version

    You can use module spider cufflinks to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Cufflinks is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Cole Trapnell et al., Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Cufflinks, use the command module load cufflinks. This will load the default version.
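
    As a minimal sketch (the BAM file name and thread count are placeholders for your own data and job request), Cufflinks could then be run as:

    module load cufflinks
    # aligned_reads.bam is a placeholder for your own alignment file
    cufflinks -p 8 -o cufflinks_out aligned_reads.bam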

    Further Reading

    Supercomputer: 
    Fields of Science: 

    DS9

    SAOImageDS9 is an astronomical imaging and data visualization application. DS9 provides support for FITS images, binary tables, multiple frame buffers, region manipulation, and colormap display options.

    Availability and Restrictions

    The following versions of DS9 are available on OSC clusters:

    Version Owens Pitzer
    7.8.3 X* X*
    * Current default version

    You can use module spider ds9 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    DS9 is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Jessica Mink, Smithsonian Astrophysical Observatory/ Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of DS9, run the following command: module load ds9. The default version will be loaded. To select a particular DS9 version, use module load ds9/version. For example, use module load ds9/7.8.3 to load DS9 7.8.3.
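
    DS9 is a GUI application, so we recommend running it in an OnDemand VDI or Desktop session; for example (the FITS file name is a placeholder):

    module load ds9
    ds9 myimage.fits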

    Usage on Pitzer

    Set-up

    To configure your environment for use of DS9, run the following command: module load ds9. The default version will be loaded. To select a particular DS9 version, use module load ds9/version. For example, use module load ds9/7.8.3 to load DS9 7.8.3.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    DSI Studio

    DSI Studio is a tractography software tool that maps brain connections and correlates findings with neuropsychological disorders. It is a collective implementation of several diffusion MRI methods, including diffusion tensor imaging (DTI), generalized q-sampling imaging (GQI), q-space diffeomorphic reconstruction (QSDR), diffusion MRI connectometry, and generalized deterministic fiber tracking.

    Availability and Restrictions

    The following versions of DSI Studio are available on OSC clusters:

    Version Pitzer
    2.0 X*
    * Current default version

    You can use  module spider dsi-studio to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    DSI Studio is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    DSI Studio is free and licensing information for both academic and non-academic licenses is available at the DSI Studio homepage.

    Please refer to the citation page about how to acknowledge DSI Studio.

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of DSI Studio, run the following command: module load dsi-studio. The default version will be loaded. To select a particular version, use module load dsi-studio/version. For example, use module load dsi-studio/2021.May to load DSI Studio 2.0. It is also recommended that you use it in conjunction with module load singularity.
     

    DSI Studio is installed in a Singularity container. The DSI_IMG environment variable contains the container image file path. So, an example usage would be

    module load dsi-studio
    singularity exec $DSI_IMG dsi_studio
    

    This command will open the DSI Studio GUI, so we recommend an OnDemand VDI or Desktop session for GUI use. 

    For more information about Singularity usage, please read the OSC Singularity page.
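
    DSI Studio can also be driven non-interactively through its command-line interface; as a hedged sketch (the source file name is a placeholder and the exact options depend on your workflow):

    module load dsi-studio
    singularity exec $DSI_IMG dsi_studio --action=rec --source=subject.src.gz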

     

    Further Reading

     
    Supercomputer: 
    Service: 
    Fields of Science: 

    Darshan

    Darshan is a lightweight "scalable HPC I/O characterization tool".  It is intended to profile I/O by emitting log files to a consistent log location for systems administrators, and also provides scripts to create summary PDFs to characterize I/O in MPI-based programs.

    Availability and Restrictions

    Versions

    The following versions of Darshan are available on OSC clusters:

    Version Owens Pitzer
    3.1.2 X  
    3.1.4 X  
    3.1.5-pre1 X  
    3.1.5 X  
    3.1.6 X X
    3.1.8 X* X*
    3.2.1 X X
    * Current default version

    You can use module spider darshan to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access 

    Darshan is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    MCSD, Argonne National Laboratory, Open source

    Usage

    Usage on Owens & Pitzer

    Setup

    To configure the Owens/Pitzer cluster for Darshan, run module spider darshan/VERSION to find supported compiler and MPI implementations, e.g.

    $ module spider darshan/3.2.1
    
    ------------------------------------------------------------------------------------------------
      darshan: darshan/3.2.1
    ------------------------------------------------------------------------------------------------
    
        You will need to load all module(s) on any one of the lines below before the "darshan/3.2.1" module is available to load.
    
          intel/19.0.3  intelmpi/2019.7
          intel/19.0.3  mvapich2/2.3.1
          intel/19.0.3  mvapich2/2.3.2
          intel/19.0.3  mvapich2/2.3.3
          intel/19.0.3  mvapich2/2.3.4
          intel/19.0.3  mvapich2/2.3.5
          intel/19.0.5  intelmpi/2019.3
          intel/19.0.5  intelmpi/2019.7
          intel/19.0.5  mvapich2/2.3.1
          intel/19.0.5  mvapich2/2.3.2
          intel/19.0.5  mvapich2/2.3.3
          intel/19.0.5  mvapich2/2.3.4
          intel/19.0.5  mvapich2/2.3.5
    

    then switch to your preferred programming environment and load the Darshan module:

    $ module load intel/19.0.5 mvapich2/2.3.5
    $ module load darshan/3.2.1
    

    Batch Usage

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations (Owens, Pitzer) and Scheduling Policies and Limits for more info. 

    If you have an MPI-based program, the syntax is as simple as

    module load darshan
    
    # basic call to darshan
    export MV2_USE_SHARED_MEM=0
    export LD_PRELOAD=$OSC_DARSHAN_DIR/lib/libdarshan.so
    srun [args] ./my_mpi_program
    
    # to show evidence that Darshan is working and to see internal timing
    export DARSHAN_INTERNAL_TIMING=yes
    srun [args] ./my_mpi_program
    
    An Example of Using Darshan with MPI-IO

    Below is an example batch script (darshan_mpi_pfsdir_test.sh) for testing MPI-IO and POSIX-IO.  Because the files generated here are large scratch files there is no need to retain them.

    #!/bin/bash
    #SBATCH --job-name="darshan_mpi_pfsdir_test"
    #SBATCH --ntasks=4
    #SBATCH --ntasks-per-node=2
    #SBATCH --output=rfm_darshan_mpi_pfsdir_test.out
    #SBATCH --time=0:10:0
    #SBATCH -p parallel
    #SBATCH --gres=pfsdir:ess
    
    # Setup Darshan
    module load intel
    module load mvapich2
    module load darshan
    export DARSHAN_LOGFILE=${LMOD_SYSTEM_NAME}_${SLURM_JOB_ID/.*/}_${SLURM_JOB_NAME}.log
    export DARSHAN_INTERNAL_TIMING=yes
    export MV2_USE_SHARED_MEM=0
    export LD_PRELOAD=$OSC_DARSHAN_DIR/lib/libdarshan.so
    
    # Prepare the scratch files and run the cases
    cp ~support/share/reframe/source/darshan/io-sample.c .
    mpicc -o io-sample io-sample.c -lm
    for x in 0 1 2 3; do  dd if=/dev/zero of=$PFSDIR/read_only.$x bs=2097152000 count=1; done
    shopt -s expand_aliases
    srun ./io-sample -p $PFSDIR -b 524288000 -v
    
    # Generate report
    darshan-job-summary.pl --summary $DARSHAN_LOGFILE
    

    In order to run it via the batch system, submit the darshan_mpi_pfsdir_test.sh file with the following command:

    sbatch darshan_mpi_pfsdir_test.sh
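
    Besides the PDF summary, the darshan-parser utility shipped with Darshan can dump a log to plain text for inspection, for example:

    module load darshan
    darshan-parser $DARSHAN_LOGFILE > darshan_report.txt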
    

    Further Reading

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Desmond

    Desmond is a software package that performs high-speed molecular dynamics simulations of biological systems on conventional commodity clusters, general-purpose supercomputers, and GPUs. The code uses novel parallel algorithms and numerical techniques to achieve high performance and accuracy on platforms containing a large number of processors, but may also be executed on a single computer. Desmond includes code optimized for machines with an NVIDIA GPU.

    Availability and Restrictions

    Versions

    The Desmond package is available on Owens. The versions currently available at OSC are:

    Version Owens Note
    2018.2 X  
    2019.1 X*  
    2020.1 X GPU support only
    2022.4 X GPU support only
    * Current default version

    You can use  module spider desmond to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users 

    Desmond is available to academic OSC users. Please review the license agreement carefully before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    D. E. Shaw Research, Non-Commercial

    Usage

    Usage on Owens

    Set-up

    To set up your environment for desmond load one of its module files:

    ​​module load desmond/2018.2
    

    If you already have input and configuration files ready, here is an example batch script that uses Desmond non-interactively via the batch system:

    #!/bin/bash
    #SBATCH --job-name multisim-batch
    #SBATCH --time=0:20:00
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=24
    #SBATCH --account=<account>
    
    # Example Desmond single-node batch script. 
    
    sstat -j $SLURM_JOB_ID
    export
    module load desmond/2018.2
    module list
    
    sbcast -p desmondbutane.msj $TMPDIR/desmondbutane.msj
    sbcast -p desmondbutane.cfg $TMPDIR/desmondbutane.cfg
    sbcast -p desmondbutane.cms $TMPDIR/desmondbutane.cms
    
    cd $TMPDIR
    $SCHRODINGER/utilities/multisim -HOST localhost -maxjob 1 -cpu 24 -m desmondbutane.msj -c desmondbutane.cfg desmondbutane.cms -mode umbrella -ATTACHED -WAIT
    ls -l
    cd $SLURM_SUBMIT_DIR
    sgather -r $TMPDIR $SLURM_SUBMIT_DIR

    The WAIT option forces the multisim command to wait until all tasks of the command are completed. This is necessary for batch jobs to run effectively. The HOST option specifies how tasks are distributed over processors.

    Set-up via Maestro

    Desmond comes with the Schrodinger interactive builder, Maestro. To run Maestro, connect to OSC OnDemand and launch a desktop, either via Desktops in the Interactive Apps drop down menu (these were labelled Virtual Desktop Interface (VDI) previously) or via Shell Access in the Clusters drop down menu (these were labelled Interactive HPC Desktop previously).  Click "Setup process" below for more detailed instructions.  Note that one cannot launch Desmond jobs in Maestro via the Schrodinger GUI in the Interactive Apps drop down menu.

    Setup process


    Log in to OSC OnDemand and request a Desktop/VDI session (this first screen shot below does not  reflect the current, 2024, labelling in OnDemand).

    Picture1.png

    In a Desktop/VDI environment, open a terminal and run the following (this is a critical step; one cannot launch Desmond jobs in Maestro via the Schrodinger GUI in the Interactive Apps drop down menu):

    module load desmond
    maestro
    

    In the main window of Maestro, you can open File and import structures or create a new project.

    Screenshot 2022-06-29 120854.png

    Screenshot 2022-06-29 121005.png

    Once the structure is ready, navigate to the top right Tasks icon and find Desmond application; the details of this step depend on the software version; if you do not find desmond listed then use the search bar.

        Tasks >> Browse... > Applications tab >> Desmond

    Screenshot 2022-06-29 121057 (2).png

    In this example, a minimization job will be performed.

    Screenshot 2022-06-29 121120.png

    Make sure the Model system is ready:

      Model system >> Load from workspace >> Load

    You can change the Job name; and you can write out the script and configuration files by clicking Write as shown below:

    Screenshot 2022-06-29 121205.png

    The green text will indicate the job path with the prefix "Job written to...". The path is a new folder located in the working directory indicated earlier.

    Screenshot 2022-06-29 121315.png

    Navigate using the terminal to that directory. You can modify the script to either run the simulation with a GPU or a CPU.

    Run simulation with GPU

    Navigate using the terminal to that directory and add the required SLURM directives and module commands at the top of the script, e.g.: desmond_min_job_1.sh:

    #!/bin/bash
    #SBATCH --time=0:20:00
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8
    #SBATCH --gpus-per-node=1
    #SBATCH --account=<account>
    
    module reset
    module load desmond/2019.1
    
    # Desmond job script starts here

    The setup is complete.


    Run simulation with CPU only
    Navigate using the terminal to that directory and edit the script, e.g.: desmond_min_job_1.sh:

    "${SCHRODINGER}/utilities/multisim" -JOBNAME desmond_min_job_1 -HOST localhost -maxjob 1 -cpu 1 -m desmond_min_job_1.msj -c desmond_min_job_1.cfg -description Minimization desmond_min_job_1.cms -mode umbrella -set stage[1].set_family.md.jlaunch_opt=["-gpu"] -o desmond_min_job_1-out.cms -ATTACHED
    

    Delete the -set stage[1].set_family.md.jlaunch_opt=["-gpu"] argument and change the -cpu argument from 1 to the number of CPUs you want, e.g. 8, resulting in

    "${SCHRODINGER}/utilities/multisim" -JOBNAME desmond_min_job_1 -HOST localhost -maxjob 1 -cpu 8 -m desmond_min_job_1.msj -c desmond_min_job_1.cfg -description Minimization desmond_min_job_1.cms -mode umbrella  -o desmond_min_job_1-out.cms -ATTACHED
    

    Add the required SLURM directives and module commands at the top of the script:

    #!/bin/bash
    #SBATCH --time=0:20:00
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=24
    #SBATCH --account=<account>
    
    module reset
    module load desmond/2019.1
    
    # Desmond job script starts here

    The setup is complete.

     

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 

    FASTX-Toolkit

    The FASTX-Toolkit is a collection of command line tools for Short-Reads FASTA/FASTQ files preprocessing.

    Availability and Restrictions

    Versions

    The following versions of FASTX-Toolkit are available on OSC clusters:

    Version Owens
    0.0.14 X*
    *:Current default version

    You can use  module spider fastx to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    FASTX-Toolkit is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Assaf Gordon, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of FASTX-Toolkit, run the following command: module load fastx. The default version will be loaded. To select a particular FASTX-Toolkit version, use module load fastx/version. For example, use module load fastx/0.0.14 to load FASTX-Toolkit 0.0.14.
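
    For example (the FASTQ file names are placeholders; -Q 33 selects the Sanger/Illumina 1.8+ quality encoding), reads could be quality-filtered with:

    module load fastx/0.0.14
    fastq_quality_filter -Q 33 -q 20 -p 80 -i sample.fastq -o sample_filtered.fastq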

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    FFTW

    FFTW is a C subroutine library for computing the Discrete Fourier Transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data. It is portable and performs well on a wide variety of platforms.

    Availability and Restrictions

    Versions

    FFTW is available on the Owens, Pitzer, and Ascend Clusters. The versions currently available at OSC are:

    Version Owens Pitzer Ascend
    3.3.4 X    
    3.3.5 X    
    3.3.8  X* X*  
    3.3.10 X X X*
    NOTE: FFTW2 and FFTW3 are tracked separately in the module system

    You can use module spider fftw2  or module spider fftw3  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    FFTW is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    www.fftw.org, Open source

    Usage

    Usage on Owens

    Set-up

    Initializing the system for use of the FFTW library is dependent on the system and the compiler you are using. A successful build of your program will depend on an understanding of which module fits your circumstances. To load a particular version, use  module load name . For example, use  module load fftw3/3.3.4  to load FFTW3 version 3.3.4. You can use module spider fftw  to view available modules.

    Building with FFTW

    The following environment variables are setup when the FFTW library is loaded:

    Variable Use
    $FFTW3_CFLAGS Use during your compilation step for C programs.
    $FFTW3_FFLAGS Use during your compilation step for Fortran programs.
    $FFTW3_LIBS Use during your link step for the sequential version of the library.
    $FFTW3_LIBS_OMP Use during your link step for the OpenMP version of the library.
    $FFTW3_LIBS_MPI Use during your link step for the MPI version of the library.
    $FFTW3_LIBS_THREADS Use during your link step for the "threads" version of the library.

    Below is a set of example commands used to build a file called my-fftw.c:

    module load fftw3
    icc $FFTW3_CFLAGS my-fftw.c -o my-fftw $FFTW3_LIBS 
    ifort $FFTW3_FFLAGS more-fftw.f -o more-fftw $FFTW3_LIBS
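
    Similarly, the OpenMP and MPI variants of the library can be linked with the corresponding variables; a sketch with placeholder source file names (with the Intel compilers, the OpenMP flag is -qopenmp):

    icc -qopenmp $FFTW3_CFLAGS my-fftw-omp.c -o my-fftw-omp $FFTW3_LIBS_OMP
    mpicc $FFTW3_CFLAGS my-fftw-mpi.c -o my-fftw-mpi $FFTW3_LIBS_MPI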
    

    Usage on Pitzer

    Set-up

    Initializing the system for use of the FFTW library is dependent on the system and the compiler you are using. A successful build of your program will depend on an understanding of which module fits your circumstances. To load a particular version, use  module load fftw3/version.

    Building with FFTW

    The following environment variables are setup when the FFTW library is loaded:

    VARIABLE USE
    $FFTW3_CFLAGS Use during your compilation step for C programs.
    $FFTW3_FFLAGS Use during your compilation step for Fortran programs.
    $FFTW3_LIBS Use during your link step for the sequential version of the library.
    $FFTW3_LIBS_OMP Use during your link step for the OpenMP version of the library.
    $FFTW3_LIBS_MPI Use during your link step for the MPI version of the library.

    Below is a set of example commands used to build a file called my-fftw.c:

    module load fftw3
    icc $FFTW3_CFLAGS my-fftw.c -o my-fftw $FFTW3_LIBS 
    ifort $FFTW3_FFLAGS more-fftw.f -o more-fftw $FFTW3_LIBS

    Usage on Ascend

    Set-up

    Initializing the system for use of the FFTW library is dependent on the system and the compiler you are using. A successful build of your program will depend on an understanding of which module fits your circumstances. Use  module spider fftw3 to check what other modules need to be loaded first. Use  module load [module name and version] to load the necessary modules. Then use  module load fftw3 to load the default FFTW module version.

    Building with FFTW

    The following environment variables are setup when the FFTW library is loaded:

    VARIABLE USE
    $FFTW3_CFLAGS Use during your compilation step for C programs.
    $FFTW3_FFLAGS Use during your compilation step for Fortran programs.
    $FFTW3_LIBS Use during your link step for the sequential version of the library.
    $FFTW3_LIBS_OMP Use during your link step for the OpenMP version of the library.
    $FFTW3_LIBS_MPI Use during your link step for the MPI version of the library.

    Below is a set of example commands used to build a file called my-fftw.c:

    module load fftw3
    icc $FFTW3_CFLAGS my-fftw.c -o my-fftw $FFTW3_LIBS 
    ifort $FFTW3_FFLAGS more-fftw.f -o more-fftw $FFTW3_LIBS

    Further Reading

    See Also

    Supercomputer: 
    Service: 

    FSL

    FSL is a library of tools for analyzing FMRI, MRI and DTI brain imaging data.

    Availability and Restrictions

    Versions

    The following versions of FSL are available on OSC clusters:

    Version Owens Pitzer
    5.0.10 X*  
    6.0.4 X X
    * Current default version

    You can use module spider fsl to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    FSL is available to academic OSC users. Please review the license agreement carefully before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Analysis Group, University of Oxford/ freeware

    Usage

    Usage on Owens and Pitzer

    Set-up

    Configure your environment for use of FSL with module load fsl. This will load the default version.

    Using FSL GUI

    Access the FSL GUI with the following commands. For bash, use:

    source $FSLDIR/etc/fslconf/fsl.sh
    fsl

    For csh, one can use

    source $FSLDIR/etc/fslconf/fsl.csh
    fsl 

    This will bring up a menu of all FSL tools. For information on individual FSL tools see FSL Overview page.
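
    Individual FSL command-line tools can also be run non-interactively in a batch job; as a minimal sketch (the image file names are placeholders):

    module load fsl
    source $FSLDIR/etc/fslconf/fsl.sh
    bet structural.nii.gz structural_brain.nii.gz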

    Using BASIL GUI

    module load fsl/6.0.4
    source $FSLDIR/etc/fslconf/fsl.sh
    asl_gui --matplotlib

    For more information, please visit https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/BASIL.

    Further Reading 

    Supercomputer: 
    Fields of Science: 

    FastQC

    FastQC provides quality control checks of high throughput sequence data that identify areas of the data that may cause problems during further analysis.

    Availability and Restrictions

    Versions

    FastQC is available on the Owens cluster. The versions currently available at OSC are:

    Version Owens Pitzer
    0.11.5 X*  
    0.11.7 X  
    0.11.8   X*
    * Current Default Version

    You can use module spider fastqc to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    FastQC is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Babraham Bioinformatics, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of FastQC, use the command module load fastqc. This will load the default version.
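
    For example (the FASTQ file name is a placeholder for your own data), quality reports can be generated with:

    module load fastqc
    mkdir -p qc_reports
    fastqc -t 4 -o qc_reports sample_R1.fastq.gz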

    Usage on Pitzer

    Set-up

    To configure your environment for use of FastQC, use the command module load fastqc. This will load the default version.

    Further Reading

    Supercomputer: 
    Fields of Science: 

    FreeSurfer

    FreeSurfer is a software package used to analyze neuroimaging data.

    Availability & Restrictions

    Versions

    The following versions of FreeSurfer are available on OSC clusters:

    Version Owens Pitzer Note
    5.3.0 X    
    6.0.0 X* X  
    7.1.1 X X*  
    7.2.0 X X  
    7.3.0 X X  
    * Current default version

    You can use module spider freesurfer to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    FreeSurfer is available to academic OSC users. Please review the license agreement carefully before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Athinoula A. Martinos Center, Open source

    Usage

    Usage on Owens and Pitzer

    Set-up

    Load the FreeSurfer module with  module load freesurfer. This will load the default version. Then, to continue configuring your environment, you must source the setup script for Freesurfer. Do this with the following command that corresponds to the Linux shell you are using. If using bash, use:

    source $FREESURFER_HOME/SetUpFreeSurfer.sh
    

    If using tcsh, use:

    source $FREESURFER_HOME/SetUpFreeSurfer.csh
    

    To finish configuring FreeSurfer, set the FreeSurfer environment variable SUBJECTS_DIR to the directory of your subject data. The SUBJECTS_DIR variable defaults to the FREESURFER_HOME/subjects directory, so if this is the directory you intend to use, the environment set-up is complete.

    To alter the SUBJECTS_DIR variable, however, again use the following command that corresponds to the Linux shell you are using. For bash:

    export SUBJECTS_DIR=<path to subject data>
    

    For tcsh:

    setenv SUBJECTS_DIR <path to subject data>
    

    Note that you can set the SUBJECTS_DIR variable before or after sourcing the setup script.
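
    With the environment configured, a typical cortical reconstruction could be launched as in the following sketch (the subject name and T1 image are placeholders; a full recon-all run can take many hours, so it should be submitted as a batch job):

    module load freesurfer
    source $FREESURFER_HOME/SetUpFreeSurfer.sh
    export SUBJECTS_DIR=$HOME/subjects
    recon-all -s subject01 -i T1.nii.gz -all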

    The CUDA applications from FreeSurfer require the CUDA 5 library (which is not available through the module system). To set up the CUDA environment, run the following command after loading the FreeSurfer module. If you are using bash, run:

    source $FREESURFER_HOME/bin/cuda5_setup.sh
    

    If using tcsh, use:

    source $FREESURFER_HOME/bin/cuda5_setup.csh
    

    Further Reading 

    Supercomputer: 
    Service: 
    Fields of Science: 

    GAMESS

    The General Atomic and Molecular Electronic Structure System (GAMESS) is a flexible ab initio electronic structure program. Its latest version can perform general valence bond, multiconfiguration self-consistent field, Möller-Plesset, coupled-cluster, and configuration interaction calculations. Geometry optimizations, vibrational frequencies, thermodynamic properties, and solution modeling are available. It performs well on open shell and excited state systems and can model relativistic effects. The GAMESS Home Page has additional information.

    Availability and Restrictions

    Versions

    The current versions of GAMESS available on the Owens and Pitzer Clusters are:

    Version Owens Pitzer
    18 AUG 2016 (R1) X  
    30 Sep 2019 (R2) X* X*
    * Current default version

    You can use module spider gamess to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    GAMESS is available to all OSC users. Please review the license agreement carefully before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Gordon research group, Iowa State Univ./ Proprietary freeware

    Usage

    Set-up

    GAMESS usage is controlled via modules. Load one of the GAMESS modulefiles at the command line, in your shell initialization script, or in your batch scripts, for example:

    module load gamess  

    Examples
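
    As a hedged sketch, a batch job typically runs an input file through the rungms wrapper supplied with GAMESS; the input file name below is a placeholder, and the exact rungms arguments (version tag, core count) depend on the local installation, so check module help gamess for details:

    #!/bin/bash
    #SBATCH --time=1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --account=<project-account>

    module load gamess
    cd $SLURM_SUBMIT_DIR
    # myjob.inp is a placeholder for your GAMESS input file
    rungms myjob > myjob.log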

    Further Reading

    General documentation is available from the GAMESS Home page and in the local machine directories.

    Supercomputer: 
    Service: 

    GATK

    GATK is a software package for analysis of high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance.

    Availability and Restrictions

    Versions

    The following versions of GATK are available on OSC clusters:

    Version Owens Pitzer Notes
    3.5 X    
    4.0.11.0   X  
    4.1.2.0 X* X*  
    4.4.0.0 X X  
    * Current default version

    You can use module spider gatk to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users

    GATK4 is available to all OSC users under BSD 3-clause License.

    GATK3 is available to academic OSC users. Please review the license agreement carefully before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Broad Institute, Inc., BSD 3-clause License (GATK4 only)

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of GATK, run the following command: module load gatk. The default version will be loaded. To select a particular GATK version, use module load gatk/version. For example, use module load gatk/4.1.2.0 to load GATK 4.1.2.0.

    Usage

    This software is a Java executable .jar file; thus, it is not possible to add it to the PATH environment variable. When you run module load gatk, a new environment variable, GATK, will be set. You can then use the software by running gatk {other options}, e.g., run gatk -h to see all options.
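
    For example (a sketch; the reference, BAM, and output file names are placeholders for your own data), a variant-calling step could be run as:

    module load gatk/4.1.2.0
    gatk HaplotypeCaller -R reference.fasta -I sample.bam -O sample.vcf.gz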

    Usage on Pitzer

    Set-up

    To configure your environment for use of GATK, run the following command: module load gatk. The default version will be loaded.

    Usage

    This software is a Java executable .jar file; thus, it is not possible to add it to the PATH environment variable. When you run module load gatk, a new environment variable, GATK, will be set. You can then use the software by running gatk {other options}, e.g., run gatk -h to see all options.

    Known Issues

    CBLAS undefined symbol error

    Update: 05/22/2019 
    Version: all

    If you use GATK tools that need CBLAS (e.g. CreateReadCountPanelOfNormals), you might encounter an error as

    INFO: successfully loaded /tmp/jniloader1239007313705592313netlib-native_system-linux-x86_64.so
    java: symbol lookup error: /tmp/jniloader1239007313705592313netlib-native_system-linux-x86_64.so: undefined symbol: cblas_dspr
    java: symbol lookup error: /tmp/jniloader1239007313705592313netlib-native_system-linux-x86_64.so: undefined symbol: cblas_dspr

    The error arises because the system-default LAPACK does not support CBLAS.  The remedy is to run GATK in conjunction with lapack/3.8.0:

    $ module load lapack/3.8.0
    $ module load gatk/4.1.2.0
    $ LD_LIBRARY_PATH=$OSC_LAPACK_DIR/lib64 gatk AnyTool toolArgs
    

    Alternatively, we recommend using the GATK container. First, download the GATK container to your home or project directory

    $ sinteractive -A <project-account> -N 1 -n 1
    $ cd $TMPDIR
    $ export SINGULARITY_CACHEDIR=$TMPDIR
    $ export SINGULARITY_TMPDIR=$TMPDIR
    $ singularity pull docker://broadinstitute/gatk:4.1.2.0
    $ cp gatk_4.1.2.0.sif ~/

    Then run any GATK tool via

    $ singularity exec ~/gatk_4.1.2.0.sif gatk AnyTool ToolArgs
    

    You can read more about containers in general here. If you have any further questions, please contact OSC Help.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    GLPK

    GLPK (GNU Linear Programming Kit) is a set of open source LP (linear programming) and MIP (mixed integer problem) routines written in ANSI C, which can be called from within C programs. 

    Availability and Restrictions

    Versions

    The following versions are available on OSC systems:

    Version Owens
    4.60 X*
    * Current default version

    You can use module spider glpk to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    GLPK is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    GNU, Open source

    Usage

    Set-up

    To set up your environment for using GLPK on Owens, run the following command:

    module load glpk

    Compiling and Linking

    To compile your C code using GLPK API routines, use the environment variable $GLPK_CFLAGS provided by the module:

    gcc $GLPK_CFLAGS -c my_prog.c

    To link your code, use the variable $GLPK_LIBS:

    gcc my_prog.o $GLPK_LIBS -o my_prog

    glpsol

    Additionally, the GLPK module contains a stand-alone LP/MIP solver, which can be used to process files written in the GNU MathProg modeling language.  The solver can be invoked using the following command syntax:

    glpsol [options] [filename]

    For a complete list of options, use the following command:

    glpsol --help
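
    For example (the model file name is a placeholder for your own MathProg model), a problem can be solved and the solution written to a file with:

    module load glpk
    glpsol --model mymodel.mod --output mymodel.sol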

    Further Reading

    Supercomputer: 
    Service: 

    GMAP

    GMAP is a genomic mapping and alignment program for mRNA and EST sequences.

    Availability and Restrictions

    Versions

    The following versions of GMAP are available on OSC clusters:

    Version Owens
    2016-06-09 X*
    * Current default version

    You can use module spider gmap to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    GMAP is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Genentech, Inc., Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of GMAP, run the following command: module load gmap. The default version will be loaded. To select a particular GMAP version, use module load gmap/version. For example, use module load gmap/2016-06-09 to load GMAP 2016-06-09.
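
    As a hedged sketch (the genome, query, and database names are placeholders), a genome database is first built with gmap_build and then queried with gmap:

    module load gmap
    # build an index for the genome (placeholder file and database names)
    gmap_build -D $HOME/gmap_db -d mygenome genome.fasta
    # align mRNA/EST query sequences against it, writing GFF3 output
    gmap -D $HOME/gmap_db -d mygenome -f gff3_gene queries.fasta > queries.gff3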

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    GNU Compilers

    Fortran, C and C++ compilers produced by the GNU Project. 

    Availability and Restrictions

    Versions

    GNU compilers are available on all our clusters. These are the versions currently available:

    Version Owens Pitzer Ascend Notes
    4.8.5 X# X#   **See note below.
    4.9.1        
    5.2.0        
    6.1.0 X      
    6.3.0 X      
    7.3.0 X X    
    8.1.0   X    
    8.4.0 X X   The variant supporting OpenMP and OpenACC offload is available.
    See the GPU offloading section below
    9.1.0 X* X* X  
    10.3.0 X X X  
    11.2.0 X X X*  
    * Current Default Version; # System version
    ** There is always some version of the GNU compilers in the environment. If you want a specific version you should load the appropriate module. If you don't have a gnu module loaded you will get either the system version or some other version, depending on what modules you do have loaded.

    You can use module spider gnu to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    To find out what version of gcc you are using, type gcc --version.

    Access

    The GNU compilers are available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    https://www.gnu.org/software/gcc/, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of the GNU compilers, run the following command (you may have to unload your selected compiler - if an error message appears, it will provide instructions): module load gnu.  The default version will be loaded. To select a particular GNU version, use module load gnu/version. For example, use module load gnu/4.8.5 to load GNU 4.8.5.

    How to Compile

    Once the module is loaded, follow the guides below for compile commands:

    Language non-mpi mpi
    Fortran 90 or 95 gfortran mpif90
    Fortran 77 gfortran mpif77
    c gcc mpicc
    c++ g++ mpicxx

    Building Options

    The GNU compilers recognize the following command line options :

    Compiler Option Purpose
    -fopenmp Enables compiler recognition of OpenMP directives (except mpif77)
    -o FILENAME

    Specifies the name of the object file

    -O0 or no -O  option Disable optimization
    -O1 or -O Light optimization
    -O2 Heavy optimization
    -O3 Most expensive optimization (Recommended)

    There are numerous flags that can be used. For more information run man <compiler binary name>.
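
    For example, an OpenMP program could be compiled and run as follows (a sketch; the source file name is a placeholder):

    module load gnu
    gcc -fopenmp -O3 hello_omp.c -o hello_omp
    export OMP_NUM_THREADS=28
    ./hello_omp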

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the mutiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request mutiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00
    
    which gives you 1 node and 28 cores (-N 1 -n 28),  with 1 hour (-t 1:00:00).  You may adjust the numbers per your need.
    Non-interactive Batch Job (Serial Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will use the input file named hello.c and the output file named hello_results.
    Below is the example batch script (job.txt) for a serial run:
    #!/bin/bash
    #SBATCH --time=1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --job-name jobname
    #SBATCH --account=<project-account>
    
    module load gnu
    cp hello.c $TMPDIR
    cd $TMPDIR
    gcc -O3 hello.c -o hello
    ./hello > hello_results
    cp hello_results $SLURM_SUBMIT_DIR
    

    In order to run it via the batch system, submit the job.txt file with the following command:

    sbatch job.txt
    
    Non-interactive Batch Job (Parallel Run)
    Below is the example batch script (job.txt) for a parallel run:
    #!/bin/bash
    #SBATCH --time=1:00:00
    #SBATCH --nodes=2 --ntasks-per-node=28 
    #SBATCH --job-name jobname
    #SBATCH --account=<project-account>
    
    module load gnu
    mpicc -O3 hello.c -o hello
    cp hello $TMPDIR
    cd $TMPDIR
    mpiexec ./hello > hello_results
    cp hello_results $SLURM_SUBMIT_DIR
    

    Usage on Pitzer

    Set-up

    To configure your environment for use of the GNU compilers, run the following command (you may have to unload your selected compiler - if an error message appears, it will provide instructions): module load gnu.  The default version will be loaded. To select a particular GNU version, use module load gnu/version. For example, use module load gnu/8.1.0 to load GNU 8.1.0.

    How to Compile

    Once the module is loaded, follow the guides below for compile commands:

    LANGUAGE NON-MPI MPI
    Fortran 90 or 95 gfortran mpif90
    Fortran 77 gfortran mpif77
    c gcc mpicc
    c++ g++ mpicxx

    Building Options

    The GNU compilers recognize the following command line options :

    COMPILER OPTION PURPOSE
    -fopenmp Enables compiler recognition of OpenMP directives (except mpif77)
    -o FILENAME

    Specifies the name of the object file

    -O0 or no -O  option Disable optimization
    -O1 or -O Light optimization
    -O2 Heavy optimization
    -O3 Most expensive optimization (Recommended)

    There are numerous flags that can be used. For more information run man <compiler binary name>.

     

    Known Issues

    Type mismatch error

    GNU compiler versions 10+ may have Fortran compiler errors like

    Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(4)/REAL(8))
    

    that result in an error response during configuration

    configure: error: The Fortran compiler gfortran will not compile files that call
    the same routine with arguments of different types.

    This can be caused when code calls a routine with argument types that do not match the routine's declared argument types. These mismatches are now rejected with an error to warn about problems that may occur at run time. The check can be bypassed by appending the -fallow-argument-mismatch argument when calling gfortran.
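
    For example, the flag can be added directly to a compile line or passed through FFLAGS to a configure script (a sketch with placeholder file names):

    gfortran -fallow-argument-mismatch -c legacy_code.f90
    # or, for an autotools-based build:
    FFLAGS="-fallow-argument-mismatch" ./configure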

    Multiple definition error

    GNU compiler versions 10+ may have C compiler errors like

    /.libs/libmca_mtl_psm.a(mtl_psm_component.o): multiple definition of `mca_mtl_psm_component'
    

    A common mistake in C is omitting extern when declaring a global variable in a header file. In previous GCC versions this error was ignored. GCC 10 defaults to -fno-common, which means a linker error will now be reported.  It can be bypassed by appending -fcommon to the compilation flags.
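
    For example (a sketch with a placeholder file name):

    gcc -fcommon -c my_code.c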

    Further Reading

    See Also

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    GROMACS

    GROMACS is a versatile package of molecular dynamics simulation programs. It is primarily designed for biochemical molecules, but it has also been used on non-biological systems.  GROMACS generally scales well on OSC platforms. Starting with version 4.6 GROMACS includes GPU acceleration.

    Availability and Restrictions

    Versions

    GROMACS is available on Pitzer and Owens Clusters. Both single and double precision executables are installed. The versions currently available at OSC are the following:

    Version Owens Pitzer Ascend Notes
    5.1.2 SPC     Default version on Owens prior to 09/04/2018
    2016.4 SPC      
    2018.2 SPC SPC    
    2020.2 SPC* SPC*    
    2020.5 SPC SPC    
    2022.1 SPC SPC SPC*  
    2023.2 SPC SPC SPC  
    * Current default version; S = serial single node executables; P = parallel multinode; C = CUDA (GPU)

    You can use module spider gromacs  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    GROMACS is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    http://www.gromacs.org/ Open source

    Usage 

    Usage on Owens

    Set-up

    To load the module for the default version of GROMACS, which initializes your environment for the GROMACS application, use module load gromacs. To select a particular software version, use module load gromacs/version. For example, use module load gromacs/5.1.2 to load GROMACS version 5.1.2; and use module help gromacs/5.1.2 to view details, such as, compiler prerequisites, additional modules required for specific executables, the suffixes of executables, etc.; some versions require specific prerequisite modules, and such details may be obtained with the command   module spider gromacs/version.

    Using GROMACS

    To execute a serial GROMACS versions 5 program interactively, simply run it on the command line, e.g.:

    gmx pdb2gmx
    

    Parallel multinode GROMACS versions 5 programs should be run in a batch environment with srun, e.g.:

    srun gmx_mpi_d mdrun
    

    Note that '_mpi' indicates a parallel executable and '_d' indicates a program built with double precision ('_gpu' denotes a GPU executable built with CUDA).  See the module help output for specific versions for more details on executable naming conventions.

    Batch Usage

    When you log into Owens you are actually connected to a login node. To  access the compute nodes, you must submit a job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session on Owens, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00
    
    which gives you one node with 28 cores (-N 1 -n 28), with 1 hour (-t 1:00:00). You may adjust the numbers per your need.
    Non-interactive Batch Job (Parallel Run)

    A batch script can be created and submitted for a serial, cuda (GPU), or parallel run. You can create the batch script using any text editor in a working directory on the system of your choice. Sample batch scripts and input files for all types of hardware resources are available here:

    ~srb/workshops/compchem/gromacs/
    

    This simple batch script demonstrates some important points:

    #!/bin/bash
    # GROMACS Tutorial for Solvation Study of Spider Toxin Peptide
    # see fwspider_tutor.pdf
    #SBATCH --job-name fwsinvacuo.owens
    #SBATCH --nodes=2 --ntasks-per-node=28
    #SBATCH --account=PZS0711
    # turn off verbosity for noisy module commands
    set +vx
    module purge
    module load intel/18.0.3
    module load mvapich2/2.3
    module load gromacs/2018.2
    module list
    set -vx
    
    cd $SLURM_SUBMIT_DIR
    echo $SLURM_SUBMIT_DIR
    sbcast -p 1OMB.pdb $TMPDIR/1OMB.pdb
    sbcast -p em.mdp $TMPDIR/em.mdp
    
    cd $TMPDIR
    mpiexec -ppn 1 gmx pdb2gmx -ignh -ff gromos43a1 -f 1OMB.pdb -o fws.gro -p fws.top -water none
    mpiexec -ppn 1 gmx editconf -f fws.gro -d 0.7
    
    mpiexec -ppn 1 gmx editconf -f out.gro -o fws_ctr.gro -center 2.0715 1.6745 1.914
    mpiexec -ppn 1 gmx grompp -f em.mdp -c fws_ctr.gro -p fws.top -o fws_em.tpr -maxwarn 1
    mpiexec -ppn 1 ls -l 
    mpiexec gmx_mpi mdrun -s fws_em.tpr -o fws_em.trr -c fws_ctr.gro -g em.log -e em.edr
    
    cp -p * $SLURM_SUBMIT_DIR/
    

* Note that sbcast does not recursively copy folders; a loop in the job script is needed (see the sketch below). Please visit our Job Preparations page to learn more.
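A minimal sketch of such a loop, assuming the files to broadcast sit in an inputs/ subdirectory of the submission directory (the directory name is a placeholder):

for f in "$SLURM_SUBMIT_DIR"/inputs/*; do
    # sbcast copies one file at a time to node-local storage on every allocated node
    sbcast -p "$f" "$TMPDIR/$(basename "$f")"
done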

    Usage on Pitzer

    Set-up

    To load the module for the default version of GROMACS, which initializes your environment for the GROMACS application, use module load gromacs.

    Using GROMACS

To execute a serial GROMACS (version 5 and later) program interactively, simply run it on the command line, e.g.:

    gmx pdb2gmx
    

Parallel multinode GROMACS (version 5 and later) programs should be run in a batch environment with srun, e.g.:

    srun gmx_mpi_d mdrun
    

    Note that '_mpi' indicates a parallel executable and '_d' indicates a program built with double precision ('_gpu' denotes a GPU executable built with CUDA).  See the module help output for specific versions for more details on executable naming conventions.

    Batch Usage

    When you log into Pitzer you are actually connected to a login node. To  access the compute nodes, you must submit a job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session on Pitzer, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 40 -t 1:00:00
    
    which gives you one node and 40 cores (-N 1 -n 40) with 1 hour (-t 1:00:00). You may adjust the numbers per your need.
    Non-interactive Batch Job (Parallel Run)

A batch script can be created and submitted for a serial, CUDA (GPU), or parallel run. You can create the batch script using any text editor in a working directory on the system of your choice. Sample batch scripts and input files for all types of hardware resources are available here:

    ~srb/workshops/compchem/gromacs/
    

    This simple batch script demonstrates some important points:

    #!/bin/bash
    # GROMACS Tutorial for Solvation Study of Spider Toxin Peptide
    # see fwspider_tutor.pdf
#SBATCH --job-name fwsinvacuo.pitzer
#SBATCH --nodes=2 --ntasks-per-node=40
    #SBATCH --account=PZS0711
    # turn off verbosity for noisy module commands
    set +vx
    module purge
    module load intel/18.0.3
    module load mvapich2/2.3
    module load gromacs/2018.2
    module list
    set -vx
    
    cd $SLURM_SUBMIT_DIR
    echo $SLURM_SUBMIT_DIR
    sbcast -p 1OMB.pdb $TMPDIR/1OMB.pdb
    sbcast -p em.mdp $TMPDIR/em.mdp
    
    cd $TMPDIR
    mpiexec -ppn 1 gmx pdb2gmx -ignh -ff gromos43a1 -f 1OMB.pdb -o fws.gro -p fws.top -water none
    mpiexec -ppn 1 gmx editconf -f fws.gro -d 0.7
    
    mpiexec -ppn 1 gmx editconf -f out.gro -o fws_ctr.gro -center 2.0715 1.6745 1.914
    mpiexec -ppn 1 gmx grompp -f em.mdp -c fws_ctr.gro -p fws.top -o fws_em.tpr -maxwarn 1
    mpiexec -ppn 1 ls -l 
    mpiexec gmx_mpi mdrun -s fws_em.tpr -o fws_em.trr -c fws_ctr.gro -g em.log -e em.edr
    
    cp -p * $SLURM_SUBMIT_DIR/
    

* Note that sbcast does not recursively copy folders; a loop in the job script is needed. Please visit our Job Preparations page to learn more.

     

    Further Reading

    Supercomputer: 
    Service: 

    GSL

GSL (the GNU Scientific Library) is a library of mathematical routines for the C and C++ languages.

    Availability and Restrictions

    Versions

    GSL is available on all clusters. The versions currently available at OSC are:

Version Owens Pitzer Ascend
2.6 X* X*  
2.7.1     X*
* Current default version

You can use module spider gsl to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    GSL is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    GNU opensource

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of GSL, use the command module load gsl. This will load the default version.
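As a minimal, non-OSC-specific sketch, a C program that calls GSL can usually be built by linking against the standard GSL libraries once the module is loaded (myprog.c is a placeholder; any include/library path flags provided by the module environment may also be needed):

module load gsl
# Link against GSL, its bundled CBLAS, and the math library
gcc -o myprog myprog.c -lgsl -lgslcblas -lm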

    Usage on Pitzer

    Set-up

    To configure your environment for use of GSL, use the command module load gsl. This will load the default version.

    Usage on Ascend

    Set-up

    To configure your environment for use of GSL, use the command module load gsl. This will load the default version.

    Further Reading

    Supercomputer: 

    Gaussian

Gaussian is a very popular general purpose electronic structure program. Recent versions can perform density functional theory, Hartree-Fock, Møller-Plesset, coupled-cluster, and configuration interaction calculations among others. Geometry optimizations, vibrational frequencies, magnetic properties, and solution modeling are available. It performs well as black-box software on closed-shell ground state systems. 

    Availability and Restrictions

    Versions

Gaussian is available on the Owens, Pitzer, and Ascend clusters. These versions are currently available at OSC (S means single-node serial/parallel and C means CUDA, i.e., GPU enabled):

Version Owens Pitzer Ascend
g09e01 S    
g16a03 S S  
g16b01 SC S  
g16c01 SC* SC*  
g16c02     SC*
* Current default version

    You can use module spider gaussian to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users

    Use of Gaussian for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction.

    Publisher/Vendor/Repository and License Type

    Gaussian, commercial

    Usage

    Usage on Owens

    Set-up on Owens

To load the default version of the Gaussian module, which initializes your environment for Gaussian, use module load gaussian. To select a particular software version, use module load gaussian/version. For example, use module load gaussian/g09e01 to load Gaussian version g09e01 on Owens. 

    Using Gaussian

    To execute Gaussian, simply run the Gaussian binary (g16 or g09) with the input file on the command line:

    g16 < input.com
    

    When the input file is redirected as above ( < ), the output will be standard output; in this form the output can be seen with viewers or editors when the job is running in a batch queue because the batch output file, which captures standard output, is available in the directory from which the job was submitted.  Alternatively, Gaussian can be invoked without file redirection:

    g16 input.com
    

in which case the output file will be named 'input.log' and will be written to the directory that was the working directory when the command started; in this form, outputs may not be accessible while the job is running in a batch queue, for example if the working directory is local to a compute node (such as $TMPDIR).

    Batch Usage on Owens

When you log into owens.osc.edu you are logged into a login node. To gain access to the multiple processors in the computing environment, you must submit your computations to the batch system for execution. Batch jobs can request multiple processors and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session on Owens, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00
    
    which gives you 28 cores (-N 1 -n 28) with 1 hour (-t 1:00:00). You may adjust the numbers per your need.
    Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and Gaussian input files are available here:

    /users/appl/srb/workshops/compchem/gaussian/
    

    This simple batch script demonstrates the important points:

    #!/bin/bash
    #SBATCH --job-name=GaussianJob
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --time=1:00:00
    #SBATCH --account=<project-account>
    
    cp input.com $TMPDIR
    # Use TMPDIR for best performance.
    cd $TMPDIR
    module load gaussian
    g16 input.com
    cp -p input.log *.chk $SLURM_SUBMIT_DIR
    
    Note: OSC does not have a functional distributed parallel version (LINDA) of Gaussian. Parallelism of Gaussian at OSC is only via shared memory. Consequently, do not request more than one node for Gaussian jobs on OSC's clusters.

    Usage on Pitzer

    Set-up on Pitzer

To load the default version of the Gaussian module, which initializes your environment for Gaussian, use module load gaussian.

    Using Gaussian

    To execute Gaussian, simply run the Gaussian binary (g16 or g09) with the input file on the command line:

    g16 < input.com
    

    When the input file is redirected as above ( < ), the output will be standard output; in this form the output can be seen with viewers or editors when the job is running in a batch queue because the batch output file, which captures standard output, is available in the directory from which the job was submitted.  Alternatively, Gaussian can be invoked without file redirection:

    g16 input.com
    

in which case the output file will be named 'input.log' and will be written to the directory that was the working directory when the command started; in this form, outputs may not be accessible while the job is running in a batch queue, for example if the working directory is local to a compute node (such as $TMPDIR).

    Batch Usage on Pitzer

When you log into pitzer.osc.edu you are logged into a login node. To gain access to the multiple processors in the computing environment, you must submit your computations to the batch system for execution. Batch jobs can request multiple processors and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session on Pitzer, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 40 -t 1:00:00
    
    which gives you 40 cores (-n 40) with 1 hour (-t 1:00:00). You may adjust the numbers per your need.
    Non-interactive Batch Job (Serial Run)

A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and Gaussian input files are available here:

    /users/appl/srb/workshops/compchem/gaussian/
    

    This simple batch script demonstrates the important points:

    #!/bin/bash
    #SBATCH --job-name=GaussianJob
    #SBATCH --nodes=1 --ntasks-per-node=40
    #SBATCH --time=1:00:00
    #SBATCH --account=<project-account>
    
    cp input.com $TMPDIR
    # Use TMPDIR for best performance.
    cd $TMPDIR
    module load gaussian
    g16 input.com
    cp -p input.log *.chk $SLURM_SUBMIT_DIR

    Running Gaussian jobs with GPU

Gaussian jobs can utilize the P100 GPUs of Owens and the V100 GPUs of Pitzer. GPUs are not helpful for small jobs but are effective for larger molecules when doing DFT energies, gradients, and frequencies (for both ground and excited states). They are not used effectively by post-SCF calculations such as MP2 or CCSD.

In the sample input files below, the %CPU directive selects the CPU cores used by the calculation, and %GPUCPU=0=0 assigns GPU 0 to be controlled by CPU core 0.

    A sample batch script for GPU on Owens is as follows:

    #!/bin/bash 
    #SBATCH --job-name=GaussianJob 
#SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --gpus-per-node=1
    #SBATCH --time=1:00:00
    #SBATCH --account=<project-account>
    
set -x
cd $TMPDIR
INPUT=methane.com
    # SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
    cp $SLURM_SUBMIT_DIR/$INPUT .
    module load gaussian/g16b01
    g16 < ./$INPUT
    ls -al
    cp -p *.chk $SLURM_SUBMIT_DIR
    

     

    A sample input file for GPU on Owens is as follows:

    %nproc=28
    %mem=8gb
    %CPU=0-27
    %GPUCPU=0=0
    %chk=methane.chk
    #b3lyp/6-31G(d) opt
    methane B3LYP/6-31G(d) opt freq
    0,1
    C        0.000000        0.000000        0.000000
    H        0.000000        0.000000        1.089000
    H        1.026719        0.000000       -0.363000
    H       -0.513360       -0.889165       -0.363000
    H       -0.513360        0.889165       -0.363000

    A sample batch script for GPU on Pitzer is as follows:

    #!/bin/tcsh
    #SBATCH --job-name=methane
    #SBATCH --output=methane.log
    #SBATCH --nodes=1 --ntasks-per-node=48
    #SBATCH --gpus-per-node=1
    #SBATCH --time=1:00:00
    #SBATCH --account=<project-account>
    
    set echo
    cd $TMPDIR
    set INPUT=methane.com
    # SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
    cp $SLURM_SUBMIT_DIR/$INPUT .
    module load gaussian/g16b01
    g16 < ./$INPUT
    ls -al
    cp -p *.chk $SLURM_SUBMIT_DIR
    

     

    A sample input file for GPU on Pitzer is as follows:

    %nproc=48
    %mem=8gb
    %CPU=0-47
    %GPUCPU=0=0
    %chk=methane.chk
    #b3lyp/6-31G(d) opt
    methane B3LYP/6-31G(d) opt freq
    0,1
    C        0.000000        0.000000        0.000000
    H        0.000000        0.000000        1.089000
    H        1.026719        0.000000       -0.363000
    H       -0.513360       -0.889165       -0.363000
    H       -0.513360        0.889165       -0.363000

    Known Issues

    Out of Memory Problems for Large TMPDIR Jobs

    For some Gaussian jobs, the operating system will start swapping and may trigger the out of memory (OOM) killer because of memory consumption by the local filesystem (TMPDIR) cache.  For these jobs %mem may not be critical, i.e., these jobs may not be big memory jobs per se; it is the disk usage that causes the OOM; known examples of this case are large ONIOM calculations.

    While an investigation is ongoing, a simple workaround is to avoid putting the Gaussian internal files on TMPDIR.  The most obvious alternative to TMPDIR is PFSDIR, in which case the commands are

    ...
    #SBATCH --gres=pfsdir
    ...
    module load gaussian
    export GAUSS_SCRDIR=$PFSDIR
    ...
    

     

    Other workarounds exist; contact oschelp@osc.edu for details.

    g16b01 G4 Problem

    See the known issue and note that g16c01 is the current default module version.

    Further Reading

    Supercomputer: 
    Service: 

    Git

    Git is a version control system used for tracking file changes and facilitating collaborative work.

    Availability and Restrictions

    Versions

    The following versions of Git are available on OSC clusters:

    Version Owens Pitzer
    2.18.0 X* X*
    2.27.1 X X
    2.39.0   X
    * Current default version

    You can use module spider git to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Git is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Git, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Git, run the following command: module load git. The default version will be loaded. To select a particular Git version, use module load git/version

    Usage on Pitzer

    Set-up

    To configure your environment for use of Git, run the following command: module load git. The default version will be loaded.

    Further Reading

    Supercomputer: 

    Gnuplot

    Gnuplot is a portable command-line driven data and function plotting utility.  It was originally intended to allow scientists and students to visualize mathematical functions and data.  

Gnuplot supports many types of plots in two or three dimensions.  It can draw using points, lines, boxes, contours, vector fields, surfaces, and various associated text.  It also supports various specialized plot types.  

Gnuplot supports many different types of output: interactive screen display (with mouse and hotkey functionality), pen plotters (like HPGL), printers (including PostScript and many color devices), and file output via vector pseudo-devices (LaTeX, Metafont, PDF, SVG) or bitmap formats such as PNG.  

    Availability and Restrictions

    Versions

    The current versions of Gnuplot available at OSC are:

    Version Owens Pitzer Notes
    4.6 patchlevel 2 System Install   No module needed.
    5.2.2 X*    
    5.2.4   X*  
    * Current default version

    You can use module spider gnuplot to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Gnuplot is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    Thomas Williams, Colin Kelley/ Open source

    Usage

    Usage on Owens

    To start a Gnuplot session, load the module and launch using the following commands:

    module load gnuplot
    
    gnuplot
    

    To access the Gnuplot help menu, type ? into the Gnuplot command line.  
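As a minimal sketch, Gnuplot can also be driven non-interactively from the shell, for example to render a function to a PNG file (the output file name is arbitrary):

module load gnuplot
gnuplot <<'EOF'
set terminal png
set output "sine.png"
plot sin(x) with lines title "sin(x)"
EOF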

    Usage on Pitzer

    To start a Gnuplot session, load the module and launch using the following commands:

    module load gnuplot
    
    gnuplot
    

    To access the Gnuplot help menu, type ? into the Gnuplot command line.  

    Further Reading

    For more information, visit the Gnuplot Homepage.  

    Supercomputer: 
    Service: 

    Gurobi

    Gurobi is a mathematical optimization solver that supports a variety of programming and modeling languages.

    Availability and Restrictions

    Versions

The following versions of Gurobi are available on OSC clusters:

    Version Owens
    8.1.1 X*
    9.1.2 X
    10.0.1 X
    * Current default version

    You can use module spider gurobi to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Gurobi is available to academic OSC users with proper validation. In order to obtain validation, please contact OSC Help for further instruction.

    Publisher/Vendor/Repository and License Type

    Gurobi Optimization, LLC/ Free academic floating license

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of gurobi, run the following command: module load gurobi. The default version will be loaded. To select a particular Gurobi version, use module load gurobi/version. For example, use module load gurobi/8.1.1 to load Gurobi 8.1.1.

    You may use Gurobi in Python or Matlab. In either case, you also need to load our gurobi module first in order to use the central license. So, before you use it in Python or Matlab, use module load gurobi.

In addition, if you are using Gurobi from Matlab, you will need to set up Gurobi inside Matlab: launch matlab; change to the Gurobi directory using the command cd /usr/local/gurobi/VERSION/matlab (where VERSION is the version of Gurobi you are using); and execute the command gurobi_setup.
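Beyond the Python and Matlab interfaces, the stand-alone Gurobi command-line solver can be used as a quick check that the module and license are working; a minimal sketch, where model.lp is a placeholder model file:

module load gurobi
# Solve the model and write the solution to solution.sol
gurobi_cl ResultFile=solution.sol model.lp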

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    HDF5

    HDF5 is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids.

    Availability and Restrictions

    Versions

HDF5 is available on the Owens, Pitzer, and Ascend clusters. The versions currently available at OSC are:

    Version Owens Pitzer Ascend
    1.8.17  X    
    1.8.19 X    
    1.10.2 X X  
    1.10.4 X X  
    1.10.8     X
    1.12.0 X* X*  
    1.12.2 X X X
    * Current Default Version

    You can use module spider hdf5 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    HDF5 is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The HDF Group, Open source (academic)

    API Compatibility issue on hdf5/1.12

hdf5/1.12 may not be compatible with applications created with earlier HDF5 versions. To work around this, users may use a compatibility macro mapping:

• To compile an application built with a version of HDF5 that includes deprecated symbols (the default), specify: -DH5_USE_110_API (autotools) or -DH5_USE_110_API:BOOL=ON (CMake)

However, users will not be able to take advantage of some of the new features in 1.12 if using these compatibility mappings; a compile-line sketch is shown below. For more detail, please see the HDF5 release notes.
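As a minimal sketch (not an OSC-specific recipe), the macro can simply be added to the compile line alongside the build variables described under Building With HDF5 below; myprog.c is a placeholder:

module load hdf5
icc -DH5_USE_110_API -c $HDF5_C_INCLUDE myprog.c
icc -o myprog myprog.o $HDF5_C_LIBS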

    Usage

    Usage on Owens

    Set-up

Initializing the system for use of the HDF5 library depends on the system and the compiler you are using. To load the default HDF5 library, run the following command: module load hdf5. To load a particular version, use module load hdf5/version. For example, use module load hdf5/1.8.17 to load HDF5 version 1.8.17. You can use module spider hdf5 to view available modules.

    Building With HDF5

    The HDF5 library provides the following variables for use at build time:

    Variable Use
    $HDF5_C_INCLUDE Use during your compilation step for C programs
    $HDF5_CPP_INCLUDE Use during your compilation step for C++ programs (serial version only)
    $HDF5_F90_INCLUDE Use during your compilation step for FORTRAN programs
$HDF5_C_LIBS Use during your linking step for C programs
    $HDF5_F90_LIBS

    Use during your linking step for FORTRAN programs

    For example, to build the code myprog.c or myprog.f90 with the hdf5 library you would use:

    icc -c $HDF5_C_INCLUDE myprog.c
    icc -o myprog myprog.o $HDF5_C_LIBS
    ifort -c $HDF5_F90_INCLUDE myprog.f90
    ifort -o myprog myprog.o $HDF5_F90_LIBS
    

    Batch Usage

When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Non-interactive Batch Job (Serial Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script that executes a program built with the HDF5 library:
    #!/bin/bash
    #SBATCH --job-name=AppNameJob
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --account <project-account>
    
    module load hdf5
    cp foo.dat $TMPDIR
    cd $TMPDIR
    appname
    cp foo_out.h5 $SLURM_SUBMIT_DIR
    

    Usage on Pitzer

    Set-up

Initializing the system for use of the HDF5 library depends on the system and the compiler you are using. To load the default HDF5 library, run the following command: module load hdf5.

    Building With HDF5

    The HDF5 library provides the following variables for use at build time:

    VARIABLE USE
    $HDF5_C_INCLUDE Use during your compilation step for C programs
    $HDF5_CPP_INCLUDE Use during your compilation step for C++ programs (serial version only)
    $HDF5_F90_INCLUDE Use during your compilation step for FORTRAN programs
$HDF5_C_LIBS Use during your linking step for C programs
    $HDF5_F90_LIBS

    Use during your linking step for FORTRAN programs

    For example, to build the code myprog.c or myprog.f90 with the hdf5 library you would use:

    icc -c $HDF5_C_INCLUDE myprog.c
    icc -o myprog myprog.o $HDF5_C_LIBS
    ifort -c $HDF5_F90_INCLUDE myprog.f90
    ifort -o myprog myprog.o $HDF5_F90_LIBS
    

    Batch Usage

When you log into pitzer.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Non-interactive Batch Job (Serial Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script that executes a program built with the HDF5 library:
    #!/bin/bash
    #SBATCH --job-name=AppNameJob 
    #SBATCH --nodes=1 --ntasks-per-node=48
    #SBATCH --account <project-account>
    
    module load hdf5
    cp foo.dat $TMPDIR
    cd $TMPDIR
    appname
    cp foo_out.h5 $SLURM_SUBMIT_DIR

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 

    HDF5-Serial

    HDF5 is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids.

    For mpi-dependent codes, use the non-serial HDF5 module.

    Availability and Restrictions

    Versions

    HDF5 is available for serial code on Pitzer and Owens Clusters. The versions currently available at OSC are:

    Version Owens Pitzer Notes
    1.8.17 X    
    1.8.19      
    1.10.2 X X  
    1.10.4 X X  
    1.10.5 X X  
    1.12.0 X* X*  
    1.12.2 X X  
    * Current Default Version

    You can use module spider hdf5-serial to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    HDF5 is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The HDF Group, Open source (academic)

    Usage

    Usage on Owens

    Set-up

Initializing the system for use of the HDF5 library depends on the system and the compiler you are using. To load the default serial HDF5 library, run the following command: module load hdf5-serial. To load a particular version, use module load hdf5-serial/version. For example, use module load hdf5-serial/1.10.5 to load HDF5 version 1.10.5. You can use module spider hdf5-serial to view available modules.

    Building With HDF5

    The HDF5 library provides the following variables for use at build time:

    Variable Use
    $HDF5_C_INCLUDE Use during your compilation step for C programs
    $HDF5_CPP_INCLUDE Use during your compilation step for C++ programs (serial version only)
    $HDF5_F90_INCLUDE Use during your compilation step for FORTRAN programs
$HDF5_C_LIBS Use during your linking step for C programs
    $HDF5_F90_LIBS

    Use during your linking step for FORTRAN programs

    For example, to build the code myprog.c or myprog.f90 with the hdf5 library you would use:

    icc -c $HDF5_C_INCLUDE myprog.c
    icc -o myprog myprog.o $HDF5_C_LIBS
    ifort -c $HDF5_F90_INCLUDE myprog.f90
    ifort -o myprog myprog.o $HDF5_F90_LIBS
    

    Batch Usage

When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Non-interactive Batch Job (Serial Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script that executes a program built with the HDF5 library:
#!/bin/bash
#SBATCH --job-name=AppNameJob
#SBATCH --nodes=1 --ntasks-per-node=28
#SBATCH --account=<project-account>

module load hdf5-serial
cp foo.dat $TMPDIR
cd $TMPDIR
appname
cp foo_out.h5 $SLURM_SUBMIT_DIR
    

    Usage on Pitzer

    Set-up

Initializing the system for use of the HDF5 library depends on the system and the compiler you are using. To load the default serial HDF5 library, run the following command: module load hdf5-serial. To load a particular version, use module load hdf5-serial/version. For example, use module load hdf5-serial/1.10.5 to load HDF5 version 1.10.5. You can use module spider hdf5-serial to view available modules.

    Building With HDF5

    The HDF5 library provides the following variables for use at build time:

    VARIABLE USE
    $HDF5_C_INCLUDE Use during your compilation step for C programs
    $HDF5_CPP_INCLUDE Use during your compilation step for C++ programs (serial version only)
    $HDF5_F90_INCLUDE Use during your compilation step for FORTRAN programs
$HDF5_C_LIBS Use during your linking step for C programs
    $HDF5_F90_LIBS

    Use during your linking step for FORTRAN programs

    For example, to build the code myprog.c or myprog.f90 with the hdf5 library you would use:

    icc -c $HDF5_C_INCLUDE myprog.c
    icc -o myprog myprog.o $HDF5_C_LIBS
    ifort -c $HDF5_F90_INCLUDE myprog.f90
    ifort -o myprog myprog.o $HDF5_F90_LIBS
    

    Batch Usage

When you log into pitzer.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Non-interactive Batch Job (Serial Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script that executes a program built with the HDF5 library:
#!/bin/bash
#SBATCH --job-name=AppNameJob
#SBATCH --nodes=1 --ntasks-per-node=48
#SBATCH --account=<project-account>

module load hdf5-serial
cp foo.dat $TMPDIR
cd $TMPDIR
appname
cp foo_out.h5 $SLURM_SUBMIT_DIR

    Further Reading

    Supercomputer: 
    Service: 

    HISAT2

    HISAT2 is a graph-based alignment program that maps DNA and RNA sequencing reads to a population of human genomes.

    Availability and Restrictions

    Versions

HISAT2 is available on the Owens and Pitzer clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    2.1.0 X* X*
    * Current Default Version

    You can use module spider hisat2 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    HISAT2 is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    https://ccb.jhu.edu/software/hisat2, Open source

    Usage

    Usage on Owens and Pitzer

    Set-up

To configure your environment for use of HISAT2, use the command module load hisat2. This will load the default version.
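A minimal usage sketch with placeholder file names, following standard HISAT2 command-line usage rather than an OSC-specific workflow:

module load hisat2
# Build an index from a reference FASTA, then align single-end reads to it
hisat2-build reference.fa genome_index
hisat2 -p 4 -x genome_index -U reads.fq -S aligned.sam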

    Further Reading

     
    Supercomputer: 
    Fields of Science: 

    HMMER

HMMER is used for searching sequence databases for sequence homologs, and for making sequence alignments. It implements methods using probabilistic models called profile hidden Markov models (profile HMMs). HMMER is designed to detect remote homologs as sensitively as possible, relying on the strength of its underlying probability models.

    Availability and Restrictions

    Versions

    HMMER is available on the OSC clusters. These are the versions currently available:

    Version Owens Pitzer Notes
    3.3.2 X X  

     

    You can use module spider hmmer to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    HMMER is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    Copyright (C) 2020 Howard Hughes Medical Institute.

HMMER and its documentation are freely distributed under the 3-Clause BSD open source license. For a copy of the license, see opensource.org/licenses/BSD-3-Clause.

    Usage

    Usage on Owens

    Set-up on Owens

    HMMER usage is controlled via modules. To load the default version of HMMER module, use module load hmmer. To select a particular software version, use module load hmmer/version. For example, use module load hmmer/3.3.2 to load HMMER version 3.3.2 on Owens.
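A minimal usage sketch with placeholder file names, following standard HMMER command-line usage rather than an OSC-specific workflow:

module load hmmer
# Build a profile HMM from a multiple sequence alignment, then search a sequence database with it
hmmbuild globins.hmm globins.sto
hmmsearch globins.hmm sequences.fasta > globins_hits.txt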

    Usage on Pitzer

    Set-up on Pitzer

    HMMER usage is controlled via modules. Load one of the HMMER module files at the command line, in your shell initialization script, or in your batch scripts. To load the default version of HMMER module, use module load hmmer.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    HOMER

HOMER (Hypergeometric Optimization of Motif EnRichment) is a suite of tools for motif discovery and ChIP-Seq analysis. It is a collection of command-line programs for UNIX-style operating systems, written mostly in Perl and C++. HOMER was primarily written as a de novo motif discovery algorithm that is well suited for finding 8-12 bp motifs in large-scale genomics data.

    Availability and Restrictions

    Versions

    The following versions of HOMER are available on OSC clusters:

    Version Owens
    4.8 X
    4.10 X*
    * Current default version

    You can use  module spider homer to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    HOMER is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Christopher Benner, Open source

    Usage

    HOMER data

We maintain the HOMER data in a central location, which can be accessed and shared by all versions of HOMER. Currently available data are listed below:

    Data Packages
    Organisms human-o v6.0, rat-o v6.0, mouse-o v6.3
Genomes hg19 v6.0, rn5 v6.0, hg38 v6.0, mm10 v6.0
    Promoters mouse-p v5.5

    You can access the data via the environment variable $HOMER_DATA after loading the homer module. If you need other data, please contact OSC Help.

    Usage on Owens

    Set-up

    To configure your environment for use of HOMER, run the following command: module load homer. The default version will be loaded. To select a particular HOMER version, use module load homer/version. For example, use module load homer/4.10 to load HOMER 4.10.

    Access HOMER Genome Data

Up-to-date HOMER genome data can be found in $HOMER_DATA/genomes. To use the proper genome database with the annotatePeaks.pl tool, you need to specify the path to the genomes directory, e.g.
     
    annotatePeaks.pl input.bed $HOMER_DATA/genomes/mm10 > output.txt
    
     
To use the appropriate genome for analyzing genomic motifs, you can specify the path to a file or directory containing the genomic sequence in FASTA format and specify a path for preparsed data:
     
    #!/bin/bash
    #SBATCH --job-name homer_data_test 
    #SBATCH --time=1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=1
    #SBATCH --account=<project-account>
    
    cp output_test.fastq $TMPDIR
    
    module load homer/4.10
    
    cd $TMPDIR
    homerTools trim -3 GTCTTT -mis 1 -minMatchLength 4 -min 15 output_test.fastq
    
    sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}/sgather
     

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    HPC Toolkit

HPC Toolkit is a collection of tools that measure a program's work, resource consumption, and inefficiency to analyze performance.

    Availability and Restrictions

    Versions

The following versions of HPC Toolkit are available on OSC clusters:

    Version Owens Pitzer
    5.3.2 X*  
    2018.09   X*
    * Current default version

    You can use module spider hpctoolkit to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    HPC Toolkit is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

Rice University, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of HPC Toolkit, run the following command: module load hpctoolkit. The default version will be loaded. To select a particular HPC Toolkit version, use module load hpctoolkit/version
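A minimal profiling sketch following HPCToolkit's standard measure/structure/analyze workflow (./myapp is a placeholder executable, and the exact measurement directory name will vary):

module load hpctoolkit
# Collect measurements, recover program structure, then correlate them into a performance database
hpcrun ./myapp
hpcstruct ./myapp
hpcprof -S myapp.hpcstruct hpctoolkit-myapp-measurements*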

    Usage on Pitzer

    Set-up

    To configure your environment for use of HPC Toolkit, run the following command: module load hpctoolkit. The default version will be loaded. To select a particular HPC Toolkit version, use module load hpctoolkit/version

    Further Reading

    Supercomputer: 

    HTSlib

HTSlib is a C library used for reading and writing high-throughput sequencing data. HTSlib is the core library used by SAMtools. HTSlib also provides the bgzip, htsfile, and tabix utilities.

    Availability and Restrictions

    Versions

    The versions of HTSlib currently available at OSC are:

    Version Owens Pitzer
    1.6 X*  
    1.11   X*
    1.16 X X
    * Current Default Version

    You can use module spider htslib to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    HTSlib is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    Genome Research Ltd., Open source

    Usage

    Usage on Owens and Pitzer

    Set-up

To configure your environment for use of HTSlib, use the command module load htslib. This will load the default version.
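A minimal sketch of the bundled utilities, where variants.vcf is a placeholder file:

module load htslib
# Compress the VCF with bgzip, build a tabix index, and report the file format
bgzip variants.vcf
tabix -p vcf variants.vcf.gz
htsfile variants.vcf.gz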

    Further Reading

    Supercomputer: 

    Hadoop

A Hadoop cluster can be launched within the HPC environment, managed by the PBS/Slurm job scheduler, using the MyHadoop framework developed by the San Diego Supercomputer Center. (Please see https://www.grid.tuc.gr/fileadmin/users_data/grid/documents/hadoop/Krish...)

    Availability and Restrictions

    Versions

    The following versions of Hadoop are available on OSC systems: 

    Version Owens
    3.0.0-alpha1 X*
    * Current default version

    You can use module spider hadoop to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Hadoop is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Apache software foundation, Open source

    Usage

    Set-up

    In order to configure your environment for the usage of Hadoop, run the following command:

    module load hadoop

    In order to access a particular version of Hadoop, run the following command

    module load hadoop/3.0.0-alpha1
    

    Using Hadoop

In order to run Hadoop in batch, reference the example batch script below. This script requests 6 nodes on the Owens cluster for 1 hour of walltime. 

    #!/bin/bash
    #SBATCH --job-name hadoop-example
    #SBATCH --nodes=6 --ntasks-per-node=28
    #SBATCH --time=01:00:00
    #SBATCH --account <account>
    
    export WORK=$SLURM_SUBMIT_DIR
    module load hadoop/3.0.0-alpha1
    module load myhadoop/v0.40
    export HADOOP_CONF_DIR=$TMPDIR/mycluster-conf-$SLURM_JOBID
    
    cd $TMPDIR
    
    myhadoop-configure.sh -c $HADOOP_CONF_DIR -s $TMPDIR
    $HADOOP_HOME/sbin/start-dfs.sh
    hadoop dfsadmin -report
    hadoop  dfs -mkdir data
    hadoop  dfs -put $HADOOP_HOME/README.txt  data/
    hadoop  dfs -ls data
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha1.jar wordcount data/README.txt wordcount-out
    hadoop  dfs -ls wordcount-out
    hadoop  dfs  -copyToLocal -f  wordcount-out  $WORK
    $HADOOP_HOME/sbin/stop-dfs.sh
    myhadoop-cleanup.sh
    

    Example Jobs

Please check the /usr/local/src/hadoop/3.0.0-alpha1/test.osc folder for more examples of Hadoop jobs.

    Further Reading

    See Also

    Supercomputer: 
    Service: 
    Fields of Science: 

    Horovod

    "Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. The goal of Horovod is to make distributed Deep Learning fast and easy to use. The primary motivation for this project is to make it easy to take a single-GPU TensorFlow program and successfully train it on many GPUs faster."

    Quote from Horovod Github documentation

    Installation

    Please follow the link for general instructions on installing Horovod for use with GPUs. The commands below assume a Bourne type shell; if you are using a C type shell then the "source activate" command may not work; in general, you can load all the modules, define any environment variables, and then type "bash" and execute the other commands.

    Step 1: Install NCCL 2

Please download NCCL 2 from https://developer.nvidia.com/nccl (select the OS-agnostic local installer; NCCL 2.7.8 for CUDA 10.2, released July 24, 2020, was used in the latest test of this recipe).

Add the NCCL library path to the LD_LIBRARY_PATH environment variable:

    $ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:Path_to_nccl/nccl-<version>/lib
    Step 2: Install horovod python package
    module load python/3.6-conda5.2

    Create a local python environment for a horovod installation with nccl and activate it

    conda create -n horovod-withnccl python=3.6 anaconda
    source activate horovod-withnccl

    Install a GPU version of tensorflow or pytorch

    pip install https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.0-cp36-cp36m-linux_x86_64.whl

    Load mvapich2 and cuda modules

    module load gnu/7.3.0  mvapich2-gdr/2.3.4 
    
    module load cuda/10.2.89

    Install the horovod python package

    HOROVOD_NCCL_HOME=/path_to_nccl_home/ HOROVOD_GPU_ALLREDUCE=NCCL pip install --no-cache-dir horovod

    Testing

    Please get the benchmark script from here.

    #!/bin/bash 
    #SBATCH --job-name R_ExampleJob 
    #SBATCH --nodes=2 --ntasks-per-node=48 
    #SBATCH --time=01:00:00 
    #SBATCH --account <account>
    
    module load python/3.6-conda5.2 
    module load cuda/10.2.89 
    module load gnu/7.3.0 
    module load mvapich2-gdr/2.3.4 
source activate horovod-withnccl
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/path_to_nccl_home/lib
mpiexec -ppn 1 -binding none -env NCCL_DEBUG=INFO python tf_cnn_benchmarks.py
    

     

    Feel free to contact OSC Help if you have any issues with installation.

    Publisher/Vendor/Repository and License Type

    https://eng.uber.com/horovod/, Open source

    Further Reading

    TensorFlow homepage

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Intel Compilers

Intel provides optimizing compilers for both C/C++ and Fortran.

    Availability and Restrictions

    Versions

    The versions currently available at OSC are:

    Version Owens Pitzer Ascend Notes
    16.0.3 X      
    16.0.8 X     Security update
    17.0.2 X      
    17.0.5 X      
    17.0.7 X X   Security update
    18.0.0 X      
    18.0.2 X      
    18.0.3 X X    
    18.0.4   X    
    19.0.3 X X    
    19.0.5 X* X*    
    19.1.3 X X    
    2021.3.0 X X   oneAPI compiler/library
    2021.4.0     X* oneAPI compiler/library
    2021.5.0   X X oneAPI compiler/library
    * Current Default Version

    You can use module spider intel  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    The Intel Compilers are available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Intel, Commercial (state-wide)

    If you need the Intel compilers, tools, and libraries on your desktop or on your local clusters, Intel oneAPI is available without extra cost for most academic purposes: please read about Intel oneAPI.

    Usage

    Usage on Owens

    Set-up on Owens

    After you ssh to Owens, the default version of Intel compilers will be loaded for you automatically. 

    Using the Intel Compilers

    Once the intel compiler module has been loaded, the compilers are available for your use. See our compilation guide for suggestions on how to compile your software on our systems. The following table lists common compiler options available in all languages.

    COMPILER OPTION PURPOSE
    -c Compile only; do not link  
    -DMACRO[=value] Defines preprocessor macro MACRO with optional value (default value is 1)  
    -g  Enables debugging; disables optimization  
    -I/directory/name Add /directory/name to the list of directories to be searched for #include files  
    -L/directory/name Adds /directory/name to the list of directories to be searched for library files  
    -lname Adds the library libname.a or libname.so to the list of libraries to be linked  
    -o outfile Names the resulting executable outfile instead of a.out  
    -UMACRO Removes definition of MACRO from preprocessor  
    -v Emit version including gcc compatibility; see below  
      Optimization Options
    -O0 Disable optimization  
    -O1 Light optimization  
    -O2 Heavy optimization (default)  
    -O3 Aggressive optimization; may change numerical results  
    -ipo Inline function expansion for calls to procedures defined in separate files  
    -funroll-loops Loop unrolling  
    -parallel Automatic parallelization  
    -openmp Enables translation of OpenMP directives  

    The following table lists some options specific to C/C++

    -strict-ansi Enforces strict ANSI C/C++ compliance
    -ansi Enforces loose ANSI C/C++ compliance
    -std=val Conform to a specific language standard

    The following table lists some options specific to Fortran

    -convert big_endian Use unformatted I/O compatible with Sun and SGI systems
    -convert cray Use unformatted I/O compatible with Cray systems
    -i8 Makes 8-byte INTEGERs the default
    -module /dir/name Adds /dir/name to the list of directories searched for Fortran 90 modules
    -r8 Makes 8-byte REALs the default
    -fp-model strict Disables optimizations that can change the results of floating point calculations

    Intel compilers use the GNU tools on the clusters:  header files, libraries, and linker.  This is called the Intel and GNU compatibility and interoperability.  Use the Intel compiler option -v to see the gcc version that is currently specified.  Most users will not have to change this.  However, the gcc version can be controlled by users in several ways. 

    On OSC clusters the default mechanism of control is based on modules.  The most noticeable aspect of interoperability is that some parts of some C++ standards are available by default in various versions of the Intel compilers; other parts require you to load an extra module.  The C++ standard can be specified with the Intel compiler option -std=val; see the compiler man page for valid values of val.  If you specify a particular standard then load the corresponding module; the most common Intel compiler version and C++ standard combinations, that are applicable to this cluster, are described below:

    For the C++14 standard with an Intel 16 compiler:

    module load cxx14
    

    With an Intel 17 or 18 compiler, module cxx17 will be automatically loaded by the intel module load command to enable the GNU tools necessary for the C++17 standard.   With an Intel 19 compiler, module gcc-compatibility will be automatically loaded by the intel module load command to enable the GNU tools necessary for the C++17 standard.  (In early 2020 OSC changed the name of these GNU tool controlling modules to clarify their purpose and because our underlying implementation changed.)

    A symptom of broken gcc-compatibility is unusual or non sequitur compiler errors typically involving the C++ standard library especially with respect to template instantiation, for example:

        error: more than one instance of overloaded function "std::to_string" matches the argument list:
                  detected during:
                    instantiation of "..."
    
        error: class "std::vector<std::pair<short, short>, std::allocator<std::pair <short, short>>>" has no member "..."
                  detected during:
                    instantiation of "..."
    

    An alternative way to control compatibility and interoperability is with Intel compiler options; see the "GNU gcc Interoperability" sections of the various Intel compiler man pages for details.

    Batch Usage on Owens

When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session on Owens, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00
    
    which gives you 1 node with 28 cores ( -N 1 -n 28 ) with 1 hour ( -t 1:00:00 ). You may adjust the numbers per your need.
    Non-interactive Batch Job (Serial Run)
A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will use the input file named hello.c and the output file named hello_results. Below is the example batch script ( job.txt ) for a serial run:
    #!/bin/bash
    #SBATCH --time=1:00:00 
    #SBATCH --nodes=1 --ntasks-per-node=28 
    #SBATCH --job-name jobname 
    #SBATCH --account=<project-account>
    
    module load intel 
    cp hello.c $TMPDIR 
    cd $TMPDIR 
    icc -O2 hello.c -o hello 
    ./hello > hello_results 
    cp hello_results $SLURM_SUBMIT_DIR
    

    In order to run it via the batch system, submit the  job.txt  file with the following command:

    sbatch job.txt
    
    Non-interactive Batch Job (Parallel Run)
    Below is the example batch script ( job.txt ) for a parallel run:
    #!/bin/bash
    #SBATCH --time=1:00:00
#SBATCH --nodes=2 --ntasks-per-node=28
    #SBATCH --job-name name
    #SBATCH --account=<project-account>
    
    module load intel
    mpicc -O2 hello.c -o hello
    cp hello $TMPDIR
    cd $TMPDIR
    mpiexec ./hello > hello_results
    cp hello_results $SLURM_SUBMIT_DIR
    

    Usage on Pitzer

    Set-up on Pitzer

    After you ssh to Pitzer, the default version of Intel compilers will be loaded for you automatically. 

    Using the Intel Compilers

    Once the intel compiler module has been loaded, the compilers are available for your use. See our compilation guide for suggestions on how to compile your software on our systems. The following table lists common compiler options available in all languages.

    COMPILER OPTION PURPOSE
    -c Compile only; do not link  
    -DMACRO[=value] Defines preprocessor macro MACRO with optional value (default value is 1)  
    -g  Enables debugging; disables optimization  
    -I/directory/name Add /directory/name to the list of directories to be searched for #include files  
    -L/directory/name Adds /directory/name to the list of directories to be searched for library files  
    -lname Adds the library libname.a or libname.so to the list of libraries to be linked  
    -o outfile Names the resulting executable outfile instead of a.out  
    -UMACRO Removes definition of MACRO from preprocessor  
    -v Emit version including gcc compatibility; see below
      Optimization Options
    -O0 Disable optimization  
    -O1 Light optimization  
    -O2 Heavy optimization (default)  
    -O3 Aggressive optimization; may change numerical results  
    -ipo Inline function expansion for calls to procedures defined in separate files  
    -funroll-loops Loop unrolling  
    -parallel Automatic parallelization  
    -openmp Enables translation of OpenMP directives  

    The following table lists some options specific to C/C++

    -strict-ansi Enforces strict ANSI C/C++ compliance
    -ansi Enforces loose ANSI C/C++ compliance
    -std=val Conform to a specific language standard

    The following table lists some options specific to Fortran

    -convert big_endian Use unformatted I/O compatible with Sun and SGI systems
    -convert cray Use unformatted I/O compatible with Cray systems
    -i8 Makes 8-byte INTEGERs the default
    -module /dir/name Adds /dir/name to the list of directories searched for Fortran 90 modules
    -r8 Makes 8-byte REALs the default
    -fp-model strict Disables optimizations that can change the results of floating point calculations

    Intel compilers use the GNU tools on the clusters:  header files, libraries, and linker.  This is called the Intel and GNU compatibility and interoperability.  Use the Intel compiler option -v to see the gcc version that is currently specified.  Most users will not have to change this.  However, the gcc version can be controlled by users in several ways. 

    On OSC clusters the default mechanism of control is based on modules.  The most noticeable aspect of interoperability is that some parts of some C++ standards are available by default in various versions of the Intel compilers; other parts require an extra module.  The C++ standard can be specified with the Intel compiler option -std=val; see the compiler man page for valid values of val.

    With an Intel 17 or 18 compiler, module cxx17 will be automatically loaded by the intel module load command to enable the GNU tools necessary for the C++17 standard.   With an Intel 19 compiler, module gcc-compatibility will be automatically loaded by the intel module load command to enable the GNU tools necessary for the C++17 standard.  (In early 2020 OSC changed the name of these GNU tool controlling modules to clarify their purpose and because our underlying implementation changed.)

    A symptom of broken gcc-compatibility is unusual or non sequitur compiler errors typically involving the C++ standard library especially with respect to template instantiation, for example:

        error: more than one instance of overloaded function "std::to_string" matches the argument list:
                  detected during:
                    instantiation of "..."
    
        error: class "std::vector<std::pair<short, short>, std::allocator<std::pair <short, short>>>" has no member "..."
                  detected during:
                    instantiation of "..."
    

    An alternative way to control compatibility and interoperability is with Intel compiler options; see the "GNU gcc Interoperability" sections of the various Intel compiler man pages for details.

     

    C++ Standard GNU Intel
    C++11 > 4.8.1 > 14.0
    C++14 > 6.1 > 17.0
    C++17 > 7 > 19.0
    C++2a features available since 8  

     

    Batch Usage on Pitzer

When you log into pitzer.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session

    For an interactive batch session on Pitzer, one can run the following command:

    sinteractive -A <project-account> -N 1 -n 40 -t 1:00:00
    

    which gives you 1 node (-N 1), 40 cores ( -n 40), and 1 hour ( -t 1:00:00). You may adjust the numbers per your need.

    Non-interactive Batch Job (Serial Run)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script uses the input file named  hello.c  and writes its output to a file named  hello_results . Below is the example batch script ( job.txt ) for a serial run:

    #!/bin/bash
    #SBATCH --time=1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=40
    #SBATCH --job-name hello
    #SBATCH --account=<project-account>
    
    module load intel
    cp hello.c $TMPDIR
    cd $TMPDIR
    icc -O2 hello.c -o hello
    ./hello > hello_results
    cp hello_results $SLURM_SUBMIT_DIR
    

    In order to run it via the batch system, submit the   job.txt  file with the following command:

    sbatch job.txt
    
    Non-interactive Batch Job (Parallel Run)

    Below is the example batch script ( job.txt ) for a parallel run:

    #!/bin/bash
    #SBATCH --time=1:00:00
    #SBATCH --nodes=2 --ntasks-per-node=40
    #SBATCH --job-name name
    #SBATCH --account=<project-account>
    
    module load intel
    module load intelmpi
    mpicc -O2 hello.c -o hello
    cp hello $TMPDIR
    cd $TMPDIR
    srun ./hello > hello_results
    cp hello_results $SLURM_SUBMIT_DIR
    

    Further Reading

    See Also

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Intel MPI

    Intel's implementation of the Message Passing Interface (MPI) library. See Intel Compilers for available compiler versions at OSC.

    Availability and Restrictions

    Versions

    Intel MPI may be used as an alternative to - but not in conjunction with - the MVAPICH2 MPI libraries. The versions currently available at OSC are:

    Version Owens Pitzer Ascend
    5.1.3 X    
    2017.2 X    
    2017.4 X X  
    2018.0 X    
    2018.3 X X  
    2018.4   X  
    2019.3 X X  
    2019.7 X* X*  
    2021.3 X X  
    2021.4.0     X*
    2021.5   X  
    2021.10 X X X
    2021.11     X
    * Current Default Version

    You can use module spider intelmpi to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Intel MPI is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Intel, Commercial

    Usage

    Usage on Owens

    Set-up on Owens

    To configure your environment for the default version of Intel MPI, use module load intelmpi. To configure your environment for a specific version of Intel MPI, use module load intelmpi/<version>. For example, use module load intelmpi/2019.7 to load Intel MPI version 2019.7 on Owens.

    You can use module spider intelmpi to view available modules on Owens.

    Note: This module conflicts with the default loaded MVAPICH installations, and Lmod will automatically replace with the correct one when you use module load intelmpi.

    Using Intel MPI

    Software compiled against this module will use the libraries at runtime.

    Building With Intel MPI

    On Owens, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.

    VARIABLE USE
    $MPI_CFLAGS Use during your compilation step for C programs.
    $MPI_CXXFLAGS Use during your compilation step for C++ programs.
    $MPI_FFLAGS Use during your compilation step for Fortran programs.
    $MPI_F90FLAGS Use during your compilation step for Fortran 90 programs.
    $MPI_LIBS Use when linking your program to Intel MPI.

    In general, for any application already set up to use mpicc (or similar), compilation should be fairly straightforward.
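
    For example, a minimal compile sketch for a C MPI program (the source file name is hypothetical), either through the wrapper or through the serial Intel compiler plus the variables above:

    mpicc -O2 my_mpi_app.c -o my_mpi_app                         # using the compiler wrapper
    icc -O2 $MPI_CFLAGS my_mpi_app.c -o my_mpi_app $MPI_LIBS     # using the serial compiler with the variables above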

    Batch Usage on Owens

    When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

    Non-interactive Batch Job (Parallel Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Owens:
    #!/bin/bash
    #SBATCH --job-name MyIntelMPIJob
    #SBATCH --nodes=4 --ntasks-per-node=28
    #SBATCH --time=5:00:00
    #SBATCH --account=<project-account>
    
    module load intelmpi
    srun my-impi-application
    

    Usage on Pitzer

    Set-up on Pitzer

    To configure your environment for the default version of Intel MPI, use module load intelmpi.
    Note: This module conflicts with the default loaded MVAPICH installations, and Lmod will automatically replace with the correct one when you use module load intelmpi.

    Using Intel MPI

    Software compiled against this module will use the libraries at runtime.

    Building With Intel MPI

    On Pitzer, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.

    VARIABLE USE
    $MPI_CFLAGS Use during your compilation step for C programs.
    $MPI_CXXFLAGS Use during your compilation step for C++ programs.
    $MPI_FFLAGS Use during your compilation step for Fortran programs.
    $MPI_F90FLAGS Use during your compilation step for Fortran 90 programs.
    $MPI_LIBS Use when linking your program to Intel MPI.

    In general, for any application already set up to use mpicc, compilation should be fairly straightforward.

    Batch Usage on Pitzer

    When you log into pitzer.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

    Non-interactive Batch Job (Parallel Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Pitzer:
    #!/bin/bash
    #SBATCH --job-name MyIntelMPIJob
    #SBATCH --nodes=2 --ntasks-per-node=48
    #SBATCH --time=5:00:00
    #SBATCH --account=<project-account>
    
    module load intelmpi
    srun my-impi-application

    Usage on Ascend

    Set-up on Ascend

    To configure your environment for the default version of Intel MPI, use module spider intelmpi to check what module(s) to load first. Use module load [module name and version] to load what modules you need, then use module load intelmpi to load the default intelmpi.
    Note: This module conflicts with the default loaded MVAPICH installations, and Lmod will automatically replace with the correct one when you use module load intelmpi.
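
    A minimal sketch of that workflow (the prerequisite module and version below are illustrative; load whatever module spider reports for your chosen Intel MPI version):

    module spider intelmpi          # shows the prerequisite module(s) for each version
    module load intel/2021.4.0      # illustrative prerequisite; substitute what module spider reports
    module load intelmpi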

    Using Intel MPI

    Software compiled against this module will use the libraries at runtime.

    Building With Intel MPI

    On Ascend, we have defined several environment variables to make it easier to build and link with the Intel MPI libraries.

    VARIABLE USE
    $MPI_CFLAGS Use during your compilation step for C programs.
    $MPI_CXXFLAGS Use during your compilation step for C++ programs.
    $MPI_FFLAGS Use during your compilation step for Fortran programs.
    $MPI_F90FLAGS Use during your compilation step for Fortran 90 programs.
    $MPI_LIBS Use when linking your program to Intel MPI.

    In general, for any application already set up to use mpicc, compilation should be fairly straightforward.

    Batch Usage on Ascend

    When you log into ascend.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

    Non-interactive Batch Job (Parallel Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will run a program compiled against Intel MPI (called my-impi-application) for five hours on Ascend:
    #!/bin/bash
    #SBATCH --job-name MyIntelMPIJob
    #SBATCH --nodes=2 --ntasks-per-node=48
    #SBATCH --time=5:00:00
    #SBATCH --account=<project-account>
    
    module load intelmpi
    srun my-impi-application

    Known Issues

    A partial-node MPI job failed to start using mpiexec

    Update: October 2020
    Version: 2019.3 2019.7

    A partial-node MPI job may fail to start using mpiexec from intelmpi/2019.3 and intelmpi/2019.7 with error messages like

    [mpiexec@o0439.ten.osc.edu] wait_proxies_to_terminate (../../../../../src/pm/i_hydra/mpiexec/intel/i_mpiexec.c:532): downstream from host o0439 was killed by signal 11 (Segmentation fault)
    [mpiexec@o0439.ten.osc.edu] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:2114): assert (exitcodes != NULL) failed
    
    /var/spool/torque/mom_priv/jobs/11510761.owens-batch.ten.osc.edu.SC: line 30: 11728 Segmentation fault  
    
    /var/spool/slurmd/job00884/slurm_script: line 24:  3180 Segmentation fault      (core dumped)
    

    If you are using Slurm, make sure the job has CPU resource allocation using #SBATCH --ntasks=N instead of

    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=N
    

    If you are using PBS, please use Intel MPI 2018 or intelmpi/2019.3 with the module libfabric/1.8.1.
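
    For example, under PBS the recommended combination can be loaded as follows:

    module load intelmpi/2019.3
    module load libfabric/1.8.1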

    Using mpiexec/mpirun with Slurm

    Update: October 2020
    Version: 2017.x 2018.x 2019.x

    Intel MPI on the Slurm batch system is configured to support the PMI process manager. We recommend using srun as the MPI program launcher. If you prefer using mpiexec/mpirun with the Hydra process manager under Slurm, please add the following code to the batch script before running any MPI executable:

    unset I_MPI_PMI_LIBRARY I_MPI_HYDRA_BOOTSTRAP
    export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=0   # the option -ppn only works if you set this before
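
    For example, a minimal job-script sketch that launches with mpiexec and an explicit per-node process count (the executable name is hypothetical):

    #!/bin/bash
    #SBATCH --nodes=2 --ntasks-per-node=4
    #SBATCH --time=0:30:00
    #SBATCH --account=<project-account>

    module load intelmpi
    unset I_MPI_PMI_LIBRARY I_MPI_HYDRA_BOOTSTRAP
    export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=0
    mpiexec -ppn 4 ./my-impi-application    # hypothetical executable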

     

    MPI-IO issues on home directories

    Update: May 2020
    Version: 2019.3
    Certain MPI-IO operations with intelmpi/2019.3 may crash, fail or proceed with errors on the home directory. We do not expect the same issue on our GPFS file system, such as the project space and the scratch space. The problem might be related to the known issue reported by HDF5 group. Please read the section "Problem Reading A Collectively Written Dataset in Parallel" from HDF5 Known Issues for more detail.

    Further Reading

    See Also

    Java

    Java is a concurrent, class-based, object-oriented programming language.

    Availability and Restrictions

    Versions

    The following versions of Java are available on OSC clusters:

    Version Owens Pitzer Note
    1.7.0 X    
    1.8.0_131 X* X* The same version as system Java
    11.0.8 X X  
    12.0.2 X X  
    * Current default version

    You can use module spider java to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Java is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Oracle, Freeware

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Java, run the following command: module load java. The default version will be loaded. To select a particular Java version, use module load java/version
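
    For example (the version shown is for illustration only):

    module load java/11.0.8    # or simply "module load java" for the default version
    java -version              # confirm which Java runtime is active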

    Usage on Pitzer

    Set-up

    To configure your environment for use of Java, run the following command: module load java. The default version will be loaded. To select a particular Java version, use module load java/version

    Further Reading

    Supercomputer: 

    Julia

    From julialang.org:

    "Julia is a high-level, high-performance dynamic programming language for numerical computing. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. Julia’s Base library, largely written in Julia itself, also integrates mature, best-of-breed open source C and Fortran libraries for linear algebra, random number generation, signal processing, and string processing. In addition, the Julia developer community is contributing a number of external packages through Julia’s built-in package manager at a rapid pace. IJulia, a collaboration between the Jupyter and Julia communities, provides a powerful browser-based graphical notebook interface to Julia."

    Availability and Restrictions

    Versions

    Julia is available on all the clusters. The versions currently available at OSC are:

    Version Owens Pitzer Notes
    0.5.1  X    
    0.6.4 X    
    1.0.0 X X  
    1.0.5 X* X*  
    1.1.1 X X  
    1.3.1 X X  
    1.5.3 X X  
    1.6.5 X X  
    1.6.7 X X  
    1.8.5 X X  
    *:Current default version

    You can use module spider julia to view available modules for a given cluster. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Julia is available for use by all OSC users.

    Publisher/Vendor/Repository and License Type

    Jeff Bezanson et al., Open source

    Usage 

    Interactive Julia Notebooks

    If you are using OnDemand, you can simply launch a Jupyter notebook and select the Julia kernel to work interactively on an Owens or Pitzer compute node.

    Navigate to ondemand.osc.edu and select a Jupyter notebook:

    Jupyter Notebook


    Install Julia kernel for Jupyter

    Since version 1.0, OSC users must manage their own IJulia kernels in Jupyter. The following is an example of adding the latest version of IJulia and creating the corresponding version of the Julia kernel:

    $ module load julia/1.0.5
    $ create_julia_kernel
    Installing IJulia
     Resolving package versions...
      Updating `~/.julia/environments/v1.0/Project.toml`
      [7073ff75] + IJulia v1.23.2
      Updating `~/.julia/environments/v1.0/Manifest.toml`
    ...
    ...
    IJulia installed: 1.23.2
    [ Info: Installing Julia kernelspec in /users/PAS1234/username/.local/share/jupyter/kernels/julia-1.0
    

    In Jupyter Notebook, you can find the item Julia 1.0.5 in the kernel list:


    For more detail about package management, please refer to the Julia documentation.

    Access Gurobi from a Jupyter Notebook

    To access Gurobi from a Jupyter notebook, users need to request access to the Gurobi software. More information can be found on the Gurobi webpage. Users then need to set the path to the Gurobi license file located on Owens in the notebook as follows:

    ENV["GRB_LICENSE_FILE"] = "/usr/local/gurobi/10.0.1/gurobi.lic"

     

     

     
    Supercomputer: 
    Service: 
    Fields of Science: 

    Kallisto

    Kallisto is an RNA-seq quantification program. It quantifies abundances of transcripts from RNA-seq data and uses pseudoalignment to determine the compatibility of reads with targets, without needing alignment.

    Availability and Restrictions

    Versions

    Kallisto is available on the Owens Cluster. The versions currently available at OSC are:

    Version Owens
    0.43.1 X*
    * Current Default Version

    You can use module spider kallisto to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Kallisto is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Nicolas Bray et al., Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Kallisto, use the command module load kallisto. This will load the default version.
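
    A brief usage sketch after loading the module (all file names below are hypothetical):

    module load kallisto
    kallisto version                                   # confirm the loaded version
    kallisto index -i transcripts.idx transcripts.fa   # build an index from a transcriptome FASTA
    kallisto quant -i transcripts.idx -o out_dir reads_1.fastq reads_2.fastq   # quantify paired-end reads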

    Further Reading

    Supercomputer: 
    Fields of Science: 

    LAMMPS

    The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code designed for high-performance simulation of large atomistic systems.  LAMMPS generally scales well on OSC platforms, provides a variety of modeling techniques, and offers GPU accelerated computation.

    Availability and Restrictions

    Versions

    LAMMPS is available on all clusters. The following versions are currently installed at OSC:

    Version Owens Pitzer Ascend
    14May16 P    
    31Mar17 PC    
    16Mar18 PC    
    22Aug18 PC PC  
    5Jun19 PC PC  
    3Mar20 PC* PC*  
    29Oct20 PC PC  
    29Sep2021.3 PC PC PC*
    20220623.1     PC
    * Current default version; S = serial executables; P = parallel; C = CUDA
    *  IMPORTANT NOTE: You must load the correct compiler and MPI modules before you can load LAMMPS. To determine which modules you need, use module spider lammps/{version}. Some LAMMPS versions are available with multiple compiler and MPI versions; in general, we recommend using the latest versions. (In particular, mvapich2/2.3.2 is recommended over 2.3.1 and 2.3; see the known issue.)

    You can use module spider lammps  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.
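
    For example, a hedged sketch of loading a LAMMPS version together with its prerequisites (the compiler and MPI versions below are illustrative; load the combination that module spider reports):

    module spider lammps/3Mar20                 # lists the compiler/MPI modules this version requires
    module load intel/19.0.5 mvapich2/2.3.2     # illustrative prerequisites
    module load lammps/3Mar20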

    Access

    LAMMPS is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Sandia National Lab., Open source

    Usage

    Usage on Owens

    Set-up

    To load the default version of LAMMPS module and set up your environment, use  module load lammps . To select a particular software version, use module load lammps/version . For example, use  module load lammps/14May16  to load LAMMPS version 14May16. 

    Using LAMMPS

    Once a module is loaded, LAMMPS can be run with the following command:
    lammps < input.file
    

    To see information on the packages and executables for a particular installation, run the module help command, for example:

    module help lammps
    

    Batch Usage

    By connecting to owens.osc.edu you are logged into one of the login nodes, which have computing resource limits. To gain access to the full resources of the cluster, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -N 1 -n 28 -g 1 -t 00:20:00 
    

    which requests one whole node with 28 cores ( -N 1 -n 28), for a walltime of 20 minutes ( -t 00:20:00 ), with one gpu (-g 1). You may adjust the numbers per your need.

    Non-interactive Batch Job (Parallel Run)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

    ~srb/workshops/compchem/lammps/
    

    Below is a sample batch script. It asks for 56 processors and 10 hours of walltime. If the job goes beyond 10 hours, the job would be terminated.

    #!/bin/bash
    #SBATCH --job-name=chain  
    #SBATCH --nodes=2 --ntasks-per-node=28  
    #SBATCH --time=10:00:00  
    #SBATCH --account=<project-account>
    
    module load lammps  
    sbcast -p chain.in $TMPDIR/chain.in
    cd $TMPDIR  
    lammps < chain.in  
    sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output
    

    Usage on Pitzer

    Set-up

    To load the default version of LAMMPS module and set up your environment, use  module load lammps

    Using LAMMPS

    Once a module is loaded, LAMMPS can be run with the following command:
    lammps < input.file
    

    To see information on the packages and executables for a particular installation, run the module help command, for example:

    module help lammps
    

    Batch Usage

    To access a cluster's main computational resources, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -N 1 -n 48 -g 1 -t 00:20:00 
    

    which requests one whole node with 48 cores (-N 1 -n 48), for a walltime of 20 minutes (-t 00:20:00), with one gpu (-g 1). You may adjust the numbers per your need.

    Non-interactive Batch Job (Parallel Run)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Sample batch scripts and LAMMPS input files are available here:

    ~srb/workshops/compchem/lammps/
    

    Below is a sample batch script. It asks for 96 processors and 10 hours of walltime. If the job goes beyond 10 hours, the job would be terminated.

    #!/bin/bash
    #SBATCH --job-name=chain 
    #SBATCH --nodes=2 --ntasks-per-node=48 
    #SBATCH --time=10:00:00 
    #SBATCH --account=<project-account>
    
    module load lammps 
    sbcast -p chain.in $TMPDIR/chain.in
    cd $TMPDIR 
    lammps < chain.in 
    sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output

    Known Issues

    LAMMPS 14May16 velocity command problem on Owens

    Updated: December 2016
    Versions Affected: LAMMPS 14May16
    LAMMPS 14May16 on Owens can hang when using the velocity command. Inputs that hang on Owens work on Oakley and Ruby. LAMMPS 31Mar17 on Owens also works. Here is an example failing input snippet:
    velocity mobile create 298.0 111250 mom yes dist gaussian run 1000

    Further Reading

    Supercomputer: 
    Service: 

    LAPACK

    LAPACK (Linear Algebra PACKage) provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems.

    Availability and Restrictions

    A highly optimized implementation of LAPACK is available on all OSC clusters as part of the Intel Math Kernel Library (MKL). We recommend that you use MKL rather than building LAPACK for yourself. MKL is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    http://www.netlib.org/lapack/, Open source

    Usage

    See OSC's MKL software page for usage information. Note that there are lapack shared libraries on the clusters; however, these are old versions from the operating system and should generally not be used.  You should modify your makefile or build script to link to the MKL libraries instead; a quick start for a crude approach is to merely load an mkl module and substitute the consequently defined environment variable $(MKL_LIBS) for -llapack.
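
    A minimal sketch of that substitution (the source file name is hypothetical; see the MKL page for the exact module names):

    module load intel mkl
    ifort -O2 my_solver.f90 -o my_solver $MKL_LIBS    # link against MKL instead of -llapack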

    Further Reading

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    LS-DYNA

    LS-DYNA is a general purpose finite element code for simulating complex structural problems, specializing in nonlinear, transient dynamic problems using explicit integration. LS-DYNA is one of the codes developed at Livermore Software Technology Corporation (LSTC).

    Availability and Restrictions

    Versions

    LS-DYNA is available on the Owens Cluster for both serial (smp solver for single node jobs) and parallel (mpp solver for multiple node jobs) versions. The versions currently available at OSC are:

    Version Solver Owens
    9.0.1 smp X
    9.0.1 mpp X
    10.1.0 smp X
    10.1.0 mpp X
    11.0.0 smp X*
    11.0.0 mpp X*
    12.1.0 smp X
    12.1.0 mpp X
    * Current default version

    You can use module spider ls-dyna to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users

    ls-dyna is available to academic OSC users with proper validation. In order to obtain validation, please contact OSC Help for further instruction.

    Access for Commercial Users

    Contact OSC Help for getting access to LS-DYNA if you are a commercial user.

    Publisher/Vendor/Repository and License Type

    LSTC, Commercial

    Usage

    Usage on Owens

    Set-up on Owens

    To view available modules installed on Owens, use  module spider ls-dyna for smp solvers, and use  module spider mpp for mpp solvers. In the module name, '_s' indicates single precision and '_d' indicates double precision. For example, mpp-dyna/971_d_9.0.1 is the mpp solver with double precision on Owens. Use  module load name to load LS-DYNA with a particular software version. For example, use  module load mpp-dyna/971_d_9.0.1 to load LS-DYNA mpp solver version 9.0.1 with double precision on Owens.

    Batch Usage on Owens

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. It is desirable for big problems since more resources can be used.

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -N 1 -n 28 -t 00:20:00 -L lsdyna@osc:28
    
    which requests one whole node with 28 cores (-N 1 -n 28), for a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.
    Non-interactive Batch Job (Serial Run)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Please follow the steps below to use LS-DYNA via the batch system:

    1) copy your input files (explorer.k in the example below) to your work directory at OSC

    2) create a batch script, similar to the following file, saved as job.txt. It uses the smp solver for a serial job (nodes=1) on Owens:

    #!/bin/bash
    #SBATCH --job-name=plate_test
    #SBATCH --time=5:00:00
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --account <project-account>
    #SBATCH -L lsdyna@osc:28
    
    # The following lines set up the LSDYNA environment
    module load ls-dyna/971_d_9.0.1
    #
    # Run LSDYNA (number of cpus > 1)
    #
    
    lsdyna I=explorer.k NCPU=28 
    
    

     3) submit the script to the batch queue with the command: sbatch job.txt.

     When the job is finished, all the result files will be found in the directory where you submitted your job ($SLURM_SUBMIT_DIR). Alternatively, you can submit your job from the temporary directory ($TMPDIR), which is faster to access for the system and might be beneficial for bigger jobs. Note that $TMPDIR is uniquely associated with the job submitted and will be cleared when the job ends. So you need to copy your results back to your work directory at the end of your script. 

    Non-interactive Batch Job (Parallel Run)
    Please follow the steps below to use LS-DYNA via the batch system:

    1) copy your input files (explorer.k in the example below) to your work directory at OSC

    2) create a batch script, similar to the following file, saved as job.txt. It uses the mpp solver for a parallel job (nodes>1) on Owens:

    #!/bin/bash
    #SBATCH --job-name=plate_test 
    #SBATCH --time=5:00:00 
    #SBATCH --nodes=2 --ntasks-per-node=28 
    #SBATCH --account <project-account>
    #SBATCH -L lsdyna@osc:56
    
    # The following lines set up the LSDYNA environment
    module load intel/18.0.3
    module load intelmpi/2018.3
    module load mpp-dyna/971_d_9.0.1
    
    #
    # Run LSDYNA (number of cpus > 1)
    #
    srun mpp971 I=explorer.k NCPU=56 
    
    

     3) submit the script to the batch queue with the command: sbatch job.txt.

    When the job is finished, all the result files will be found in the directory where you submitted your job ($SLURM_SUBMIT_DIR). Alternatively, you can submit your job from the temporary directory ($TMPDIR), which is faster to access for the system and might be beneficial for bigger jobs. Note that $TMPDIR is uniquely associated with the job submitted and will be cleared when the job ends. So you need to copy your results back to your work directory at the end of your script. An example script should include the following lines:

    ...
    cd $TMPDIR
    sbcast $SLURM_SUBMIT_DIR/explorer.k explorer
    ... #launch the solver and execute
    sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}
    #or you may specify a directory for your output files, such as
    #sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}/output
    

    Further Reading

    See Also

    Supercomputer: 
    Service: 

    LS-OPT

    LS-OPT is a package for design optimization, system identification, and probabilistic analysis with an interface to LS-DYNA.

    Availability and Restrictions

    Versions

    The following versions of ls-opt are available on OSC clusters:

    Version Owens
    6.0.0 X*
    * Current default version

    You can use module spider ls-opt to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    In order to use LS-OPT, you need LS-DYNA. ls-dyna is available to academic OSC users with proper validation. In order to obtain validation, please contact OSC Help for further instruction.

    Publisher/Vendor/Repository and License Type

    LSTC, Commercial

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of LS-OPT, run the following command: module load ls-opt. The default version will be loaded. To select a particular LS-OPT version, use module load ls-opt/version. For example, use module load ls-opt/6.0.0 to load LS-OPT 6.0.0.

    Further Reading

    Supercomputer: 
    Service: 

    LS-PrePost

    LS-PrePost is an advanced pre- and post-processor that is delivered free with LS-DYNA.

    Availability and Restrictions

    Versions

    The following versions of ls-prepost are available on OSC clusters:

    Version Owens
    4.6 X*
    * Current default version

    You can use module spider ls-prepost to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    In order to use LS-PrePost you need LS-DYNA. ls-dyna is available to academic OSC users with proper validation. In order to obtain validation, please contact OSC Help for further instruction.

    Publisher/Vendor/Repository and License Type

    LSTC, Commercial

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of LS-PrePost, run the following command: module load ls-prepost. The default version will be loaded. To select a particular LS-PrePost version, use module load ls-prepost/<version>. For example, use module load ls-prepost/4.6 to load LS-PrePost 4.6.

    Further Reading

    Supercomputer: 
    Service: 

    User-Defined Material for LS-DYNA

    This page describes how to specify user defined material to use within LS-DYNA.  The user-defined subroutines in LS-DYNA allow the program to be customized for particular applications.  In order to define user material, LS-DYNA must be recompiled.

    Usage

    The first step to running a simulation with user defined material is to build a new executable. The following is an example done with solver version mpp971_s_R7.1.1.

    When you log into the Oakley system, load mpp971_s_R7.1.1 with the command:

    module load mpp-dyna/R7.1.1

    Next, copy the mpp971_s_R7.1.1 object files and Makefile to your current directory:

    cp /usr/local/lstc/mpp-dyna/R7.1.1/usermat/* $PWD

    Next, update the dyn21.f file with your user defined material model subroutine. Please see the LS-DYNA User's Manual (Keyword version) for details regarding the format and structure of this file.

    Once your user defined model is setup correctly in dyn21.f, build the new mpp971 executable with the command:

    make

    To execute a multi processor (ppn > 1) run with your new executable, execute the following steps:

    1) move your input file to a directory on an OSC system (pipe.k in the example below)

    2) copy your newly created mpp971 executable to this directory as well

    3) create a batch script (lstc_umat.job) like the following:

    #PBS -N LSDYNA_umat
    #PBS -l walltime=1:00:00
    #PBS -l nodes=2:ppn=8
    #PBS -j oe
    #PBS -S /bin/csh
    
    # This is the template batch script for running a pre-compiled
    # MPP 971 v7600 LS-DYNA.  
    # Total number of processors is ( nodes x ppn )
    #
    # The following lines set up the LSDYNA environment
    module load mpp-dyna/R7.1.1
    #
    # Move to the directory where the job was submitted from
    # (i.e. PBS_O_WORKDIR = directory where you typed qsub)
    #
    cd $PBS_O_WORKDIR
    #
    # Run LSDYNA 
    # NOTE: you have to put in your input file name
    #
    mpiexec mpp971 I=pipe.k NCPU=16

    4) Next, submit this job to the batch queue with the command:

    qsub lstc_umat.job

    The output result files will be saved to the directory you ran the qsub command from (known as $PBS_O_WORKDIR).

    Documentation

    On-line documentation is available on LSTC website.

    See Also

     

     

    Service: 

    MAGMA

    MAGMA is a collection of next generation linear algebra (LA) GPU accelerated libraries designed and implemented by the team that developed LAPACK and ScaLAPACK. MAGMA is designed for heterogeneous GPU-based architectures; it supports interfaces to current LA packages and standards, e.g., LAPACK and BLAS, to allow computational scientists to effortlessly port any LA-relying software components. The main benefits of using MAGMA are that it can enable applications to fully exploit the power of current heterogeneous systems of multi/manycore CPUs and multi-GPUs, and deliver the fastest possible time to an accurate solution within given energy constraints.

    Availability and Restrictions

    Versions

    MAGMA is available on Owens, and the following versions are currently available at OSC:

    Version Owens
    2.2.0 X(I)*
    * Current default version; I = available with Intel compilers only

    You can use module spider magma to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users

    MAGMA is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Innovative Computing Laboratory, University of Tennessee, Open source

    Usage

    Usage on Owens

    Set-up

    To load the default version of the MAGMA module, cuda must first be loaded. Use module load cuda to load the default version of cuda, or module load cuda/version to load a specific version. Then use module load magma to load MAGMA. To select a particular software version, use module load magma/version. For example, use module load magma/2.2.0 to load MAGMA version 2.2.0.

    Using MAGMA

    To run MAGMA in the command line, use the Intel compilers (icc, ifort). 

    icc $MAGMA_CFLAGS example.c
    

    or

    ifort $MAGMA_F90FLAGS example.F90
    

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your MAGMA simulation to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session, one can run the following command:
    sinteractive -A <account> -N 1 -n 28 -t 1:00:00
    
    which gives you 1 node and 28 cores (-N 1 -n 28) with 1 hour (-t 1:00:00). You may adjust the numbers per your need.
    Non-interactive Batch Job (Serial Run)

    batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.

    Below is the example batch script (job.txt) for a serial run:

    #!/bin/bash
    ## MAGMA Example Batch Script for the Basic Tutorial in the MAGMA manual
    #SBATCH --job-name=6pti
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --time=0:20:00
    #SBATCH --account <account>
    
    module load cuda
    module load magma
    # Use TMPDIR for best performance.
    cd $TMPDIR
    # SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
    cp $SLURM_SUBMIT_DIR/example.c .
    icc $MAGMA_CFLAGS example.c
    

    In order to run it via the batch system, submit the job.txt file with the command: sbatch job.txt

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    MATLAB

    MATLAB is a technical computing environment for high-performance numeric computation and visualization. MATLAB integrates numerical analysis, matrix computation, signal processing, and graphics in an easy-to-use environment where problems and solutions are expressed just as they are written mathematically--without traditional programming.

    Availability and Restrictions

    Versions

    MATLAB is available on Pitzer and Owens Clusters. The versions currently available at OSC are:

    Version Owens Pitzer Notes
    r2015b X    
    r2016b X    
    r2017a X    
    r2018a X X  
    r2018b X X  
    r2019a   X  
    r2019b X X  
    r2020a X* X*  
    r2021b X X  
    r2022a X X  
    r2023a X X  
    r2023b X X  
    r2024a X X  
    * Current default version

    You can use module spider matlab to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access: Academic Users Only (non-commercial, non-government)

    Academic users can use Matlab at OSC. All users must be added to the license server before using MATLAB. Please contact OSC Help to be granted access or for any license related questions.

    Publisher/Vendor/Repository and License Type

    MathWorks, Commercial (University site license)

    Toolboxes and Features

    OSC's current licenses support the following MATLAB toolboxes and features (please contact OSC Help for license-specific questions):

    MATLAB
    Simulink
    5G Toolbox
    AUTOSAR Blockset
    Aerospace Blockset
    Aerospace Toolbox
    Antenna Toolbox
    Audio Toolbox
    Automated Driving Toolbox
    Bioinformatics Toolbox
    Communications Toolbox
    Computer Vision Toolbox
    Control System Toolbox
    Curve Fitting Toolbox
    DDS Blockset
    DSP System Toolbox
    Data Acquisition Toolbox
    Database Toolbox
    Datafeed Toolbox
    Deep Learning HDL Toolbox
    Deep Learning Toolbox
    Econometrics Toolbox
    Embedded Coder
    Filter Design HDL Coder
    Financial Instruments Toolbox
    Financial Toolbox
    Fixed-Point Designer
    Fuzzy Logic Toolbox
    GPU Coder
    Global Optimization Toolbox
    HDL Coder
    HDL Verifier
    Image Acquisition Toolbox
    Image Processing Toolbox
    Instrument Control Toolbox
    LTE Toolbox
    Lidar Toolbox
    MATLAB Coder
    MATLAB Compiler SDK
    MATLAB Compiler
    MATLAB Report Generator
    Mapping Toolbox
    Mixed-Signal Blockset
    Model Predictive Control Toolbox
    Model-Based Calibration Toolbox
    Motor Control Blockset
    Navigation Toolbox
    OPC Toolbox
    Optimization Toolbox
    Parallel Computing Toolbox
    Partial Differential Equation Toolbox
    Phased Array System Toolbox
    Powertrain Blockset
    Predictive Maintenance Toolbox
    RF Blockset
    RF PCB Toolbox
    RF Toolbox
    ROS Toolbox
    Radar Toolbox
    Reinforcement Learning Toolbox
    Risk Management Toolbox
    Robotics System Toolbox
    Robust Control Toolbox
    Satellite Communications Toolbox
    Sensor Fusion and Tracking Toolbox
    SerDes Toolbox
    Signal Integrity Toolbox
    Signal Processing Toolbox
    SimBiology
    SimEvents
    Simscape Driveline
    Simscape Electrical
    Simscape Fluids
    Simscape Multibody
    Simscape
    Simulink 3D Animation
    Simulink Check
    Simulink Code Inspector
    Simulink Coder
    Simulink Compiler
    Simulink Control Design
    Simulink Coverage
    Simulink Design Optimization
    Simulink Design Verifier
    Simulink Desktop Real-Time
    Simulink PLC Coder
    Simulink Real-Time
    Simulink Report Generator
    Simulink Requirements
    Simulink Test
    SoC Blockset
    Spreadsheet Link
    Stateflow
    Statistics and Machine Learning Toolbox
    Symbolic Math Toolbox
    System Composer
    System Identification Toolbox
    Text Analytics Toolbox
    UAV Toolbox
    Vehicle Dynamics Blockset
    Vehicle Network Toolbox
    Vision HDL Toolbox
    WLAN Toolbox
    Wavelet Toolbox
    Wireless HDL Toolbox

    See this page if you need to install additional toolbox by yourself. 

    Usage

    Usage on Owens

    Set-up

    To load the default version of MATLAB module, use  module load matlab . For a list of all available MATLAB versions and the format expected, type:  module spider matlab . To select a particular software version, use   module load matlab/version . For example, use  module load matlab/r2015b  to load MATLAB version r2015b. 

    Running MATLAB

    The following command will start an interactive, command line version of MATLAB:

    matlab -nodisplay 
    
    If you are able to use X-11 forwarding and have enabled it in your SSH client software preferences, you can run MATLAB using the GUI by typing the command  matlab . For more information about the matlab command usage, type  matlab -h  for a complete list of command line options.

    The commands listed above will run MATLAB on the login node you are connected to. As the login node is a shared resource, running scripts that require significant computational resources will impact the usability of the cluster for others. As such, you should not use interactive MATLAB sessions on the login node for any significant computation. If your MATLAB script requires significant time, CPU power, or memory, you should run your code via the batch system.

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session using the command line version of MATLAB, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 28 -t 00:20:00
    

    which requests one whole node with 28 cores ( -N 1 -n 28 ), for a walltime of 20 minutes ( -t 00:20:00 ). Here you can run MATLAB interactively by loading the MATLAB module and running MATLAB with the options of your choice as described above. You may adjust the numbers per your need.
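
    For non-interactive use, a minimal batch-script sketch that runs a MATLAB script (the script name my_script.m is hypothetical) might look like:

    #!/bin/bash
    #SBATCH --time=1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --account=<project-account>

    module load matlab
    matlab -nodisplay -r "my_script, exit"    # runs my_script.m from the submit directory, then exits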

    Usage on Pitzer

    Set-up

    To load the default version of MATLAB module, use module load matlab.

    Running MATLAB

    The following command will start an interactive, command line version of MATLAB:

    matlab -nodisplay 
    
    If you are able to use X-11 forwarding and have enabled it in your SSH client software preferences, you can run MATLAB using the GUI by typing the command  matlab. For more information about the matlab command usage, type  matlab -h for a complete list of command line options.

    The commands listed above will run MATLAB on the login node you are connected to. As the login node is a shared resource, running scripts that require significant computational resources will impact the usability of the cluster for others. As such, you should not use interactive MATLAB sessions on the login node for any significant computation. If your MATLAB script requires significant time, CPU power, or memory, you should run your code via the batch system.

    Batch Usage

    When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Interactive Batch Session
    For an interactive batch session using the command line version of MATLAB, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 40 -t 00:20:00
    

    which requests one whole node with 40 cores ( -N 1 -n 40), for a walltime of 20 minutes ( -t 00:20:00 ). Here you can run MATLAB interactively by loading the MATLAB module and running MATLAB with the options of your choice as described above. You may adjust the numbers per your need.

    Additional Topics

    MATLAB Parallel Functions and Tools

    MATLAB at OSC now supports the Parallel Computing Toolbox. The Parallel Computing Toolbox lets you solve computationally and data-intensive problems using multiple cores and GPUs. Built-in MATLAB functions and tools allow for easy parallelization of MATLAB applications. Programs can be run either interactively or as batch jobs.

    Currently only r2019b and newer versions have full support for the Parallel Computing Toolbox on Owens and Pitzer. 

    Please refer to the official MATLAB documentation for more information on the Parallel Computing Toolbox

    Sections:

    Creating Parallel Pools

    You can parallelize by requesting a certain number of workers; work can then be offloaded onto that pool of workers. For local computations, the number of workers you can request corresponds to the number of cores available.


    To start up a pool you can run:

    p = gcp

    p is the pool object which can be used to check information on the worker pool.

    By default gcp creates a pool of workers equal to the number of cores on the job.

    Note:

    • It may take a couple of seconds to a minute to start up a pool.
    • You cannot run multiple parallel pools at the same time on a single job.


    To delete the current pool, if one exists, run:

    delete(gcp('nocreate'))

    After the program is done running, the pool will remain active; MATLAB only deletes the pool after the default 30-minute idle timeout. So if you want to end a pool you must manually delete it, let MATLAB time out the pool, or terminate the job. If you make changes to the code interactively, it is recommended that you delete the pool and spin up a new pool of workers.


    See the MATLAB documentation for more information on worker pools.

     

    Parpool and Batch

    Parallel jobs can also be submitted from a MATLAB script, as demonstrated below in the Submitting Single-Node Parallel MATLAB Jobs and Submitting Multi-Node Parallel MATLAB Jobs sections. The two main ways of doing so are through parpool and batch.

    First, before using parpool or batch, you must get a handle to the profile cluster. To do this use the parcluster function.

    % creates cluster profile object for the specified cluster profile
    c = parcluster("Cluster_Profile");
    
    % creates a cluster object to your current job
    c = parcluster("local");
    

    See the Submitting Multi-Node Parallel MATLAB Jobs section below for more information on how to create a cluster profile.

    Creating the object with a cluster profile will result in a new job being submitted when launching parpool or batch. Make sure the appropriate arguments are set in the cluster profile. Creating the object with the 'local' parameter will not result in a new job being launched when executing parpool or batch; instead, the workers will be allocated to the cores in your current job. 

    Once you have a profile object created, you can launch parallel jobs.

    To launch a parpool parallel job, simply run:

    p = parpool(c, 40);
    % c: is the cluster profile object initialized using parcluster
    % 40: because we want 40 workers
    

    Important Note: You can only run one parpool job at a time. You need to make sure the parent job which launched the parpool job has a long enough walltime to accommodate the new job; otherwise, the parpool job will be terminated when the parent job ends.

    To launch a batch job:

    job1 = batch(c, @function, 1, {"arg1", "arg2"}, "Pool", 40); % launch batch job of 40 workers
    % c: is the cluster profile object initialized using parcluster
    
    wait(job1); % wait for job to finish
    
    X = fetchOutputs(job1); %retrieve the output data from job
    
    %job detail can be accessed by the job1 object including its status.
    

    Here we launched a batch job to execute @function. @function will be run on a parallel pool of 40 workers. 

    Since batch does not block your MATLAB program, you can use the wait function to wait for your batch job(s) to finish before proceeding. The fetchOutputs function can be used to retrieve the outputs of the batch job.

     

    The notable difference between parpool and batch is that you can run multiple batch jobs at a time and their duration is not tied to the parent job (the parent job can finish executing and the batch jobs will continue executing unlike parpool).

    Please see the Running Concurrent Jobs section if you are running multiple jobs at the same time.

    Please refer to the official MATLAB documentation for more details: parcluster, parpool, batch.

     

    Parfor

    To parallelize a for-loop you can use a parfor-loop. 

    A parfor-loop will run the different iterations of the loop in parallel by assigning the iterations to the workers in the pool. If multiple iterations are assigned to a worker, those iterations will be completed serially by that worker. It is important to carefully assess and make good judgment calls on how many workers you want to request for the job.


    To utilize a parfor-loop simply replace the for in a standard for-loop with parfor

    %converting a standard for loop to a parfor looks as such:
    for i=1:10
        %loop code
    end
    
    %replace the for with parfor
    parfor i=1:10
        %loop code
    end
    

     

    Important note: parfor may complete the iterations out of order, so it is important that the iterations are not order dependent.

    A parfor-loop runs synchronously, so the MATLAB process is blocked until all of the workers' tasks have completed.

    Important Limitations:

    • Cannot nest parfors inside of one another: This is because workers cannot start or access further parallel pools.
      • parfor-loops and for-loops can be nested inside one another (it is often a judgment call on whether it is better to nest a parfor inside a for-loop or vice versa).
    %valid
    for i=1:10
       parfor j=1:10
          %code
       end
    end
    
    %invalid: will throw error
    parfor i=1:10
       parfor j=1:10
          %code
       end
    end
    
    • Cannot have loop elements dependent on other iterations
      • Since there is no guaranteed order of completion of iterations in a parfor-loop and workers cannot communicate with each other, each loop iteration must be independent.
      A = ones(1,100);
      parfor i = 1:100
           A(i) = A(i-1) + 1; %invalid iteration entry as the current iteration is dependent on the previous iteration
      end
    
    • Step size must be 1
      parfor i = 0:0.1:1 %invalid because step size is not 1
           %code
      end
    

    To learn more about parfor-loops, see the official MATLAB parfor documentation.

     

    Parfeval

    Another way to run work in parallel in MATLAB is to use parfeval. When you use parfeval to run functions in the background, it creates an object called a future for each function call and adds the future to the pool's queue.

    First, initialize a futures object vector with the number of expected futures. Preallocation of the futures vector is not required, but is highly recommended to increase efficiency: f(1:num_futures) = parallel.FevalFuture;

    For each job, you can fill the futures vector with an instance of the future. Filling the vector allows you to get access to the futures later. f(index) = parfeval(@my_function, numOutputs, input1, input2);

    • @my_function is the pointer to the function I want to run
    • numOutputs is the number of outputs you want returned from my_function. Note: this does not need to match the actual number of outputs the function returns.
    • input1, input2, ... is the parameter list for my_function
    %example code
    f(1:10) = parallel.FevalFuture;
    for i = 1:10
       f(i) = parfeval(@my_function, 1, 2);
    end
    

    when a future is created, it is added to the queue. The workers then take futures from the queue and begin to evaluate them.

    you can use the State property of a future to find out whether it is queued, running, or finished: f(1).State

    you can manually cancel a future by running: cancel(f(1));

    you can block off MATLAB until a certain future complete by using: wait(f(4));

    when a future is finished, you can check its error message, if one was thrown, with: f(1).Error.message

    You can cancel all running and/or queued futures by (p is the parallel pool object):

    cancel(p.FevalQueue.QueuedFutures);
    cancel(p.FevalQueue.RunningFutures);

    Processing worker outputs as they complete

    One of the biggest strengths of parfeval  is its ability to run futures asynchronously (runs in the background without blocking the Matlab program). This allows you to fetch results from the futures as they get completed.

    p = gcp; %launch parallel pool with number of workers equal to available cores
    
    f(1:10) = parallel.FevalFuture; % initialize futures vector
    
    for k = 1:10
        f(k) = parfeval(@rand, 1, 1000, 1); % launch 10 futures which will run in the background on the parallel pool
    end
    
    results = cell(1,10); % create a results vector
    
    for k = 1:10
        [completedK, value] = fetchNext(f); % fetch the next worker that finished and print its results
        results{completedK} = value;
        fprintf("got result with index: %d, largest element in vector is %f. \n", completedK, max(results{completedK}));
    end
    

    In the example above, as each @rand future is completed by the workers, fetchNext retrieves the returned data. 

    MATLAB also provides functions such as afterEach and afterAll to process the outputs as workers complete futures.


    Please refer to the official MATLAB documentation for more information on parfeval: parfeval and parfeval parallel pooling

    Spmd

    spmd stands for Single Program Multiple Data. An spmd block executes the same block of code on multiple workers, each of which can operate on its own data. Here is a simple example:

    delete(gcp('nocreate')); %delete a parallel pool if one is already spun up
    p = parpool(2); %create a pool of 2 workers
    
    spmd
        fprintf("worker %d says hello world", spmdIndex); %have each worker print statement
    end
    %end of code
    
    %output
    Worker 1:
      worker 1 says hello world
    Worker 2:
      worker 2 says hello world
    %end of output
    

    The spmdIndex variable can be used to access the index of each worker. spmd also allows for communication between workers via sending and receiving data. Additionally, data can be received by the MATLAB client from the workers. For more information on spmd and its functionality, visit the official MATLAB documentation.

    Submitting Single-Node Parallel MATLAB Jobs

    When parallelizing on a single node, you can generate and run a parallel pool on the same node as the current job or interactive session.

    Here is an example MATLAB script of submitting a parallel job to a single node:

    p = parcluster('local');
    
    % open parallel pool of 8 workers on the cluster node
    parpool(p, 8);
    
    spmd
       % assign each worker a print function
       fprintf("Worker %d says Hello", spmdIndex);
    end
    
    delete(gcp); % close the parallel pool
    exit
    

    Since we will only be using a single node, we use the 'local' cluster profile. This creates a profile object p, which is the cluster profile of the job the command was run in. We also set the pool size to be less than or equal to the number of cores on our compute node; in this case we use 8. See the cluster specifications for the maximum number of cores on a single node for each cluster.

    Now let's save this MATLAB script as "worker_hello.m" and write a Slurm batch script, "worker_hello.slurm", to submit and execute it as a job:

    #!/bin/bash
    #SBATCH --job-name=worker_hello         # job name
    #SBATCH --cpus-per-task=8               # 8 cores
    #SBATCH --output=worker_hello.log       # set output file
    #SBATCH --time=00:10:00                 # 10 minutes wall time
    
    # load Matlab module
    module load matlab/r2023a
    
    cd $SLURM_SUBMIT_DIR
    #run matlab script
    matlab -nodisplay -r worker_hello

    In this script we first load a MATLAB module, in this example matlab/r2023a. Then we make a call to execute the "worker_hello.m" MATLAB script. The -nodisplay flag prevents MATLAB from attempting to launch a GUI. In this script we requested 8 cores since our MATLAB script uses 8 workers. When performing single-node parallelization, be mindful of the maximum number of cores each node has on the different clusters.

    Then the job was submitted using sbatch -A <project-account> worker_hello.slurm through the command line. 

    The output was then generated into the "worker_hello.log" file:

                              < M A T L A B (R) >
                     Copyright 1984-2023 The MathWorks, Inc.
                R2023a Update 2 (9.14.0.2254940) 64-bit (glnxa64)
                                  April 17, 2023
                                  
    To get started, type doc.
    For product information, visit www.mathworks.com.
    
    Starting parallel pool (parpool) using the 'Processes' profile ...
    Connected to parallel pool with 8 workers.
    
    Worker 1:
      Worker 1 says Hello
    Worker 2:
      Worker 2 says Hello
    Worker 3:
      Worker 3 says Hello
    Worker 4:
      Worker 4 says Hello
    Worker 5:
      Worker 5 says Hello
    Worker 6:
      Worker 6 says Hello
    Worker 7:
      Worker 7 says Hello
    Worker 8:
      Worker 8 says Hello
      
    Parallel pool using the 'Processes' profile is shutting down.
    

    As we can see, a total of 8 workers were created and each printed its message in parallel.

    Create Cluster Profile

    Before we can parallelize MATLAB across multiple nodes, we need to create a cluster profile. In the profile we can specify any arguments and adjust the settings for submitting jobs through MATLAB.

    If you are running MATLAB R2019b or newer, you can run configCluster to configure MATLAB with the profile of the cluster your job is running on:

    configCluster % configure MATLAB with the cluster profile
    
    c = parcluster; % get a handle to the cluster profile
    
    
    % set any additional properties
    
    c.AdditionalProperties.WallTime = '00:10:00'; % set wall time to 10 minutes
    
    c.AdditionalProperties.AccountName = 'PZS1234'; % set account name
    
    
    c.saveProfile % locally save the profile
    When creating a profile you must set the AccountName and WallTime and make sure to save the profile. 

     

    If the above method does not work, or you prefer to use the GUI, then you can configure a cluster profile from the GUI. You must be running MATLAB R2023a or newer to be able to search for OSC's clusters.

    1. First we need to launch a MATLAB GUI through OnDemand. See OnDemand for more details.

    2. Next within the MATLAB GUI, navigate to HOME->Environment->Parallel->Discover Clusters:

     


    3. Check the "On your network" box, then click Next.


    4. If you started the MATLAB GUI through OnDemand, then you should see the cluster of the session listed as such (I started mine through Pitzer, so Pitzer is listed):


    5. Now select the cluster and click Next. You should now have a screen like this:


    6. Now check the "Set new cluster profile as default" box and then click Finish

    7. Now if you click on HOME->Environment->Parallel->Select Parallel Environment you will be presented with a list of profiles available which you can toggle between. Your new profile that was just created should be listed. 


    8. Now we need to edit the cluster profile to suit the needs of the job we want to submit. Go to HOME->Environment->Parallel->Create and Manage Clusters. Select the cluster profile you want to edit and then click Edit. Most settings can be left at their defaults, but the AccountName and WallTime under the SCHEDULER PLUGIN section must be set (AccountName to your project account and WallTime to the desired wall time):


    If you want MATLAB to submit jobs with Slurm parameters other than the defaults, you may edit them in this menu.

    When creating a profile you must set the AccountName and WallTime 

    Validating Profile

    If you run into any issues using your cluster profile you may want to validate your profile. Validating is not required, but may help debug any profile related issues.

    To validate a profile:

    1. Within the MATLAB GUI, navigate to HOME->Environment->Parallel->Create and Manage Clusters.
    2. Select the profile you want to validate on the left side of the menu. Then select the Validation tab next to the Properties tab. In the "Number of workers to use:" box, specify the number of cores available to your OnDemand MATLAB session. If you leave the box blank, the tests will run with more workers than cores available to your MATLAB session, which will result in a failed validation.
    3. Next, click Validate at the bottom right or top of the menu.

    Make sure the AccountName and WallTime are both set in the cluster profile before validating! Make sure the number of workers used for validation is less than or equal to the number of cores available to the MATLAB session!

    Submitting Multi-Node Parallel MATLAB Jobs

    Before submitting multi-node parallel jobs, you must create a cluster profile. See the Create Cluster Profile section above.

    Now let's create and submit a multi-node parallel MATLAB job. Here is a MATLAB script:

    configCluster % configure MATLAB with the cluster profile
    
    p = parcluster; % get a handle to the cluster profile
    
    % set any additional properties
    p.AdditionalProperties.WallTime = '00:10:00'; % set wall time to 10 minutes
    
    p.AdditionalProperties.AccountName = 'PZS1234'; % set account name
    
    p.saveProfile % locally save the profile
    
    % if profile created using the "Discover Clusters" from the GUI then you can simply run: p = parcluster('Pitzer'); instead of the above code.
    
    
    % open parallel pool of 80 workers
    parpool(p, 80); % you must specify the number of workers you want
    
    spmd
       fprintf("Worker %d says Hello", spmdIndex);
    end
    
    delete(gcp); % close the parallel pool
    exit
    

    In this example we opened a cluster profile called 'Pitzer'. This profile name should be the same as the cluster profile created above. We then launched another job using the parpool function with 80 workers onto the Pitzer cluster, using the profile's settings (the wall time was set explicitly rather than left at the default of 1 hour). Since 80 workers is more than the maximum number of cores per node, the Pitzer profile created using the steps above will automatically request 2 nodes for the job to accommodate the workers.

    This script was saved in a file called "hello_multi_node.m".

    Now a slurm script was created as follows:

    #!/bin/bash
    #SBATCH --job-name=hello_multi_node     # job name
    #SBATCH --cpus-per-task=1               # 1 core
    #SBATCH --output=hello_multi_node.log   # set output file
    #SBATCH --time=00:10:00                 # 10 minutes wall time
    
    # load Matlab module
    module load matlab/r2023a
    
    cd $SLURM_SUBMIT_DIR
    #run matlab script
    matlab -nodisplay -r hello_multi_node
    

    This job was allocated only 1 core. This is because "hello_multi_node.m" will launch another job on the Pitzer cluster when calling parpool to execute the parallel workers. Since the main MATLAB entry program does not need multiple nodes or cores, we only allocated 1 core.

    Then the job was submitted using sbatch -A <project-account> hello_multi_node.slurm through the command line. 

    The output was then generated into the "hello_multi_node.log" file:

                               < M A T L A B (R) >
                     Copyright 1984-2023 The MathWorks, Inc.
                R2023a Update 2 (9.14.0.2254940) 64-bit (glnxa64)
                                  April 17, 2023
    
    To get started, type doc.
    For product information, visit www.mathworks.com.
    
    Starting parallel pool (parpool) using the 'Pitzer' profile ...
    
    additionalSubmitArgs =
    
       '--ntasks=80 --cpus-per-task=1 --ntasks-per-node=40 -N 2 --ntasks-per-core=1 -A PZS0711 --mem-per-cpu=4gb -t 00:01:00'
    
    Connected to parallel pool with 80 workers.
    Worker  1:
      Worker 1 says hello
    Worker  2:
      Worker 2 says hello
    Worker  3:
      Worker 3 says hello
    Worker  4:
      Worker 4 says hello
    Worker  5:
      Worker 5 says hello
    .
    .
    .
    Worker  80:
      Worker 80 says hello
    

    Notice from the additionalSubmitArgs = line that another job was launched with 2 nodes and 40 cores on each node. It is in this new job that the workers completed their tasks.

    In this example we used parpool to launch a new parallel job, but batch can also be used. See MATLAB Parallel Functions and Tools for more information on the batch function.

    You can also modify the properties of a cluster profile through code via the c.AdditionalProperties attribute. This is helpful if you want to submit multiple batch jobs through a single MATLAB program with different submit arguments.

    c = parcluster('Pitzer'); % get cluster object
    
    c.AdditionalProperties.WallTime = "00:15:00"; % sets the wall time to the c cluster object. Does not change the 'Pitzer' profile itself, only the local object.
    
    c.saveProfile; % saves to the central 'Pitzer' profile.
    

    Multithreading

    Multithreading allows some functions in MATLAB to distribute their workload between the cores of the node that your job is running on. By default, all of the current versions of MATLAB available on the OSC clusters have multithreading enabled.

    The system will run as many threads as there are cores on the nodes requested.

    Multithreading increases the speed of some linear algebra routines, but if you would like to disable multithreading you may include the option -singleCompThread when running MATLAB. An example is given below:

    #!/bin/bash
    #SBATCH --job-name disable_multithreading
    #SBATCH --time=00:10:00
    #SBATCH --nodes=1 --ntasks-per-node=40
    #SBATCH --account=<project-account>
    
    module load matlab
    matlab -singleCompThread -nodisplay -nodesktop < hello.m
    # end of example file
    

    Using GPU in MATLAB

    A GPU can be utilized for MATLAB. You can acquire a GPU for the job by

    #SBATCH --gpus-per-node=1

    for Owens or Pitzer. For more detail, please read here.

    You can check the GPU assigned to you using:

    gpuDeviceCount  % show how many GPUs you have
    gpuDevice       % show the details of the GPU
    

    To utilize a GPU, you will need to transfer the data from a standard CPU array to a GPU array. A gpuArray is a data structure stored on the GPU. Make sure the GPU has enough memory to hold this data. Even if the gpuArray fits in the GPU memory, make sure that any temporary arrays and data generated will also fit on the GPU.

    To create a GPU array:

    X = [1,2,3]; %create a standard array
    G = gpuArray(X); %transfer array over to gpu

    To check if data is stored on the GPU run:

    isgpuarray(G); %returns true or false
    

    To transfer the GPU data back onto the host memory use:

    Y = gather(G);
    


    Note:

    • To reduce overhead time, limit the number of data transfers between host memory and GPU memory. For instance, many MATLAB functions allow you to create data directly on the GPU by specifying the "gpuArray" parameter: gpu_matrix = rand(N, N, "gpuArray");
    • Gathering data from gpuArrays can be costly in terms of time, so it is generally not necessary to gather the data unless you need to store it or it needs processing through non-GPU-compatible functions.

    When you have data in a gpuArray, there are many built-in MATLAB functions which can run on the data. See the list on the MATLAB website for a full list of compatible functions.

    For more information about GPU programming for MATLAB, please read GPU Computing from Mathworks.

    Running Concurrent Jobs

    Concurrent jobs on OSC clusters

    When you run multiple jobs concurrently, each job will try to access your preference files at the same time. This may create a race condition, negatively impact the system, and cause your jobs to fail. In order to avoid this issue, please add the following to your job script:

    export MATLAB_PREFDIR=$TMPDIR

    It will reset the preference directory to the local temporary directory, $TMPDIR. If you wish to start your MATLAB job with the preference files you already have, add the following before you change MATLAB_PREFDIR:

    cp -a ~/.matlab/{matlab version}/* $TMPDIR/

    If you use matlab/r2020a, your matlab version is "R2020a".
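    Putting these together, the relevant portion of a job script might look like the following (a minimal sketch; the matlab/r2020a module and the script name my_script.m are placeholders for your own choices):

    # copy your existing preference files (optional), then point MATLAB at $TMPDIR
    cp -a ~/.matlab/R2020a/* $TMPDIR/
    export MATLAB_PREFDIR=$TMPDIR
    
    module load matlab/r2020a
    matlab -nodisplay -r my_script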

    References

    Supercomputer: 
    Service: 
    Fields of Science: 

    SPM

    SPM is made freely available to the [neuro]imaging community, to promote collaboration and a common analysis scheme across laboratories. The software represents the implementation of the theoretical concepts of Statistical Parametric Mapping in a complete analysis package.

    The SPM software is a suite of MATLAB (MathWorks) functions and subroutines with some externally compiled C routines. SPM was written to organise and interpret our functional neuroimaging data. The distributed version is the same as that we use ourselves.

    Availability and Restrictions

    Versions

    The following versions are available on OSC clusters:

    Version Pitzer
    8 X
    12.7771 X*
    * Current default version

    spm/12.7771 comes with CONN 0.19 and xjview 9.7

    spm/8 comes with CONN 0.19, xjview 9.7, and Marsbar 0.44

    You can use module spider spm to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    SPM is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    SPM is free but copyright software, distributed under the terms of the GNU General Public Licence as published by the Free Software Foundation (either version 2, as given in file LICENCE.txt, or at your option, any later version). Further details on "copyleft" can be found at https://www.gnu.org/copyleft/. In particular, SPM is supplied as is. No formal support or maintenance is provided or implied.

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of SPM, run the following command: module load spm. The default version will be loaded. To select a particular SPM version, use module load spm/version. For example, use module load spm/12.7771 to load SPM 12.7771.

    SPM is a MATLAB suite, so you need to load MATLAB before you can use SPM:

    module load matlab/r2020a
    module load spm/12.7771
    or
    module load matlab/r2020a
    module load spm/8
      

    Note that spm/12.7771 comes with CONN 0.19 and xjview 9.7, and spm/8 comes with CONN 0.19, xjview 9.7, and Marsbar 0.44. Marsbar 0.44 doesn't support spm/12.7771.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    MIRA

    MIRA - Sequence assembler and sequence mapping for whole genome shotgun and EST / RNASeq sequencing data. Can use Sanger, 454, Illumina and IonTorrent data. PacBio: CCS and error corrected data usable, uncorrected not yet.

    Availability and Restrictions

    Versions

    The following versions of MIRA are available on OSC clusters:

    Version Owens
    4.0.2 X*
    * Current default version

    You can use module spider mira to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    MIRA is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Bastien Chevreux, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of MIRA, run the following command: module load mira. The default version will be loaded. To select a particular MIRA version, use module load mira/version. For example, use module load mira/4.0.2 to load MIRA 4.0.2.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    MKL - Intel Math Kernel Library

    Intel Math Kernel Library (MKL) consists of high-performance, multithreaded mathematics libraries for linear algebra, fast Fourier transforms, vector math, and more.

    Availability and Restrictions

    Versions

    OSC supports single-process use of MKL for LAPACK and BLAS levels one through three. For multi-process applications, we also support the ScaLAPACK, FFTW2, and FFTW3 MKL wrappers. MKL modules are available for the Intel, GNU, and PGI compilers. MKL is available on the Owens and Pitzer clusters. The versions currently available at OSC are:

    Version Owens Pitzer Notes
    11.3.2 X    
    11.3.3 X    
    2017.0.2 X    
    2017.0.4 X    
    2017.0.7   X  
    2018.0.3 X X  
    2019.0.3 X X  
    2019.0.5 X* X*  
    2021.3.0 X X  
    * Current Default Version

    You can use module spider mkl to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    MKL is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    Intel, Commercial

    Usage

    Usage on Owens

    Set-up

    To load the default MKL, run the following command: module load mkl. To load a particular version, use  module load mkl/version. For example, use module load mkl/11.3.3 to load MKL version 11.3.3. You can use module spider mkl to view available modules.

    This step is required for both building and running MKL applications. Note that loading an mkl module defines several environment variables that can be useful for compiling and linking to MKL, e.g., MKL_CFLAGS and MKL_LIBS.

    Exception: The "mkl" module is usually not needed when using the Intel compilers; just use the "-mkl" flag on the compile and link steps.

    Usage on Pitzer

    Set-up

    To load the default MKL, run the following command: module load mkl

    This step is required for both building and running MKL applications.  Note that loading an mkl module defines several environment variables that can be useful for compiling and linking to MKL, e.g., MKL_CFLAGS and MKL_LIBS.

    Exception: The "mkl" module is usually not needed when using the Intel compilers; just use the "-mkl" flag on the compile and link steps.

    Dynamic Linking Variables

    These variables indicate how to link to MKL. While their contents are used during compiling and linking, the variables themselves are usually specified during the configuration stage of software installation. The form of specification depends on the application software. For example, some software employing cmake for configuration might use this form:

    cmake ..  -DMKL_INCLUDE_DIR="$MKLROOT/include"  -DMKL_LIBRARIES="$MKL_LIBS_SEQ" 

    Here is an example for some software employing autoconf:

    ./configure --prefix=$HOME/local/pkg/version CPPFLAGS="$MKL_CFLAGS" LIBS="$MKL_LIBS" LDFLAGS="$MKL_LIBS"

     

    Variable Comment
    MKL_LIBS Link with parallel threading layer of MKL
    GNU_MKL_LIBS Dedicated for GNU compiler in Intel programming environment
    MKL_LIBS_SEQ Link with sequential threading layer of MKL
    MKL_SCALAPACK_LIBS Link with BLACS and ScaLAPACK of MKL
    MKL_CLUSTER_LIBS Link with BLACS, CDFT and ScaLAPACK of MKL

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 

    MRIQC

    MRIQC is a program that provides automatic prediction of quality and visual reporting of MRI scans.

    Availability and Restrictions

    Versions

    The following versions are available on OSC clusters:

    Version Pitzer
    0.16.1 X*
    23.1.0rc0 X
    * Current default version

    You can use module spider mriqc to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    MRIQC is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    MRIQC uses the 3-clause BSD license; the full license is in the file LICENSE in the mriqc distribution. Open-source.

    All trademarks referenced herein are property of their respective holders.

    Copyright (c) 2015-2017, the mriqc developers and the CRN. All rights reserved.

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of mriqc, run the following command: module load mriqc. The default version will be loaded. To select a particular MRIQC version, use module load mriqc/version. For example, use module load mriqc/0.16.1 to load MRIQC 0.16.1.

    MRIQC is installed in a Singularity container. The MRIQC_IMG environment variable contains the container image file path. So, an example usage would be

    module load mriqc
    singularity exec $MRIQC_IMG mriqc --version
    

    For more information about singularity usages, please read OSC singularity page.
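    For an actual analysis run, mriqc follows the BIDS-Apps convention of taking a BIDS dataset directory, an output directory, and an analysis level as positional arguments. A hedged sketch (the directory paths are placeholders):

    module load mriqc
    singularity exec $MRIQC_IMG mriqc /path/to/bids_dir /path/to/output_dir participant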

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    MRIcroGL

    MRIcroGL is a medical image viewer that allows you to load overlays (e.g., statistical maps) and draw regions of interest (e.g., to create lesion maps).

    Availability and Restrictions

    Versions

    MRIcroGL is available on Pitzer cluster. These are the versions currently available:

    Version Pitzer Notes
    1.2.20220720 X*  

    * Current default version

    You can use module spider mricrogl to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    mricrogl is available to all OSC users. Please review the license before you use. 

    Publisher/Vendor/Repository and License Type

    The Software has been developed for research purposes only and is not a clinical tool.

    Copyright (c) 2014-2019 Chris Rorden. All rights reserved.

    See more about the license for MRIcroGL at the GitHub repository here.

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of MRIcroGL, run the following command:  module load mricrogl. The default version will be loaded.

    MRIcroGL is a GUI-based software, so it requires an X11 connection. You can read more about this here, but the simplest way to access the GUI is by using the OnDemand portal. Once you have an X11 connection, you can open the GUI by doing the following:

    $ module load mricrogl
    $ mricrogl.sif
    

    MRIcroGL is installed in an apptainer container. For more information about apptainer usages, please read OSC apptainer page.

    Further Reading

    Supercomputer: 

    MUSCLE

    MUSCLE is a program for creating multiple alignments of protein sequences.

    Availability and Restrictions

    Versions

    The following versions of MUSCLE are available on OSC clusters:

    Version Owens Pitzer
    3.8.31 X* X*
    * Current default version

    You can use module spider muscle to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    MUSCLE is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Public domain software.

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of MUSCLE, run the following command: module load muscle. The default version will be loaded. To select a particular MUSCLE version, use module load muscle/version. For example, use module load muscle/3.8.31 to load MUSCLE 3.8.31.

    Usage on Pitzer

    Set-up

    To configure your environment for use of MUSCLE, run the following command: module load muscle. The default version will be loaded. To select a particular MUSCLE version, use module load muscle/version. For example, use module load muscle/3.8.31 to load MUSCLE 3.8.31.
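    Once the module is loaded, a typical MUSCLE 3.8 run reads a FASTA file and writes a multiple alignment. A minimal sketch (the input and output file names are placeholders; run muscle -h for the full option list):

    module load muscle
    muscle -in sequences.fasta -out alignment.afa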

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    MVAPICH2

    MVAPICH2 is a standard library for performing parallel processing using a distributed-memory model. 

    Availability and Restrictions

    Versions

    The following versions of MVAPICH2 are available on OSC systems:

    Version Owens Pitzer Ascend
    2.3 X X  
    2.3.1 X X  
    2.3.2 X X  
    2.3.3 X* X*  
    2.3.5 X X  
    2.3.6 X X X
    2.3.7     X*
    * Current default version

    You can use module spider mvapich2 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    MPI is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    NBCL, The Ohio State University/ Open source 

    Usage

    Set-up

    To set up your environment for using the MPI libraries, you must load the appropriate module:

    module load mvapich2
    

    You will get the default version for the compiler you have loaded.

    Note: Be sure to swap the Intel compiler module for the gnu module if you're using the GNU compilers.

    Building With MPI

    To build a program that uses MPI, you should use the compiler wrappers provided on the system. They accept the same options as the underlying compiler. The commands are shown in the following table.

    C mpicc
    C++ mpicxx
    FORTRAN 77 mpif77
    Fortran 90 mpif90

    For example, to build the code my_prog.c using the -O2 option, you would use:

    mpicc -o my_prog -O2 my_prog.c
    

    In rare cases you may be unable to use the wrappers. In that case you should use the environment variables set by the module.

    Variable Use
    $MPI_CFLAGS Use during your compilation step for C programs.
    $MPI_CXXFLAGS Use during your compilation step for C++ programs.
    $MPI_FFLAGS Use during your compilation step for Fortran 77 programs.
    $MPI_F90FLAGS Use during your compilation step for Fortran 90 programs.
    $MPI_LIBS Use when linking your program to the MPI libraries.

    For example, to build the code my_prog.c without using the wrappers you would use:

    mpicc -c $MPI_CFLAGS my_prog.c
    
    mpicc -o my_prog my_prog.o $MPI_LIBS
    

    Batch Usage

    Programs built with MPI can only be run in the batch environment at OSC. For information on starting MPI programs using the srun or mpiexec command, see Batch Processing at OSC.

    Be sure to load the same compiler and mvapich2 modules at execution time as at build time.
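    As an illustration, a minimal Slurm script for an MPI executable built as above might look like the following (a sketch; my_prog is the hypothetical binary from the compile example, and 28 tasks per node matches Owens nodes, so adjust for other clusters):

    #!/bin/bash
    #SBATCH --job-name=my_prog_mpi
    #SBATCH --nodes=2 --ntasks-per-node=28
    #SBATCH --time=0:30:00
    #SBATCH --account=<project-account>
    
    module load intel
    module load mvapich2
    
    cd $SLURM_SUBMIT_DIR
    srun ./my_prog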

    Known Issues

    Large MPI job startup failure

    Updated: Nov 2019
    Versions Affected: Mvapich2/2.3 & 2.3.1
    We have found that large MPI jobs may hang at startup with mvapich2/2.3 and mvapich2/2.3.1 (with any compiler dependency) due to a known bug that has been fixed in release 2.3.2. If you experience this issue, please switch to mvapich2/2.3.2.

    Further Reading

    See Also

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Mathematica

    Mathematica is a mathematical computation program. It is capable in many areas of technical computing including but not limited to neural networks, machine learning, image processing, geometry, data science and visualizations.

    Availability and Restrictions

    Versions

    Mathematica is available on the Pitzer and Owens Clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    13.2.1 X X

     

    You can use module spider mathematica to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users

    Use of Mathematica is open to academic Ohio State University users. OSC does not provide Mathematica licenses for use outside of Ohio State University due to licensing restrictions. All users must be added to the system before using Mathematica. Please contact OSC Help to be granted access or for any license-related questions.

    Publisher/Vendor/Repository and License Type

    Mathematica, commercial

    Usage

    Usage on Owens

    Set-up on Owens

    To load the default version of the Mathematica module, use module load mathematica/13.2.1.

    Running Mathematica

    To run Mathematica, log into OSC OnDemand with your OSC account. Then at the top of your screen navigate to the Interactive Apps dropdown menu. There you may select Mathematica and launch the task. After the application is available you can open and use Mathematica.

    Alternatively, you may request an OSC OnDemand desktop and load Mathematica with the command module load mathematica/13.2.1. Then you can run Mathematica by typing the command mathematica.

    Note that running the mathematica command from a login node shell will run Mathematica on the login node you are connected to. As the login node is a shared resource, running scripts that require significant computational resources will impact the usability of the cluster for others. As such, you should not use interactive Mathematica sessions on the login node for any significant computation. If your Mathematica script requires significant time, CPU power, or memory, you should run your code via the batch system.
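    For example, a batch script along the following lines could run a Mathematica script non-interactively (a sketch; my_script.m is a placeholder and math -script is one common way to run a script without launching the GUI):

    #!/bin/bash
    #SBATCH --job-name=mathematica_job
    #SBATCH --nodes=1 --ntasks-per-node=1
    #SBATCH --time=1:00:00
    #SBATCH --account=<project-account>
    
    module load mathematica/13.2.1
    cd $SLURM_SUBMIT_DIR
    math -script my_script.m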

    Usage on Pitzer

    Set-up on Pitzer

    To load the default version of the Mathematica module, use module load mathematica/13.2.1.

    Running Mathematica

    To run Mathematica, log into OSC OnDemand with your OSC account. Then at the top of your screen navigate to the Interactive Apps dropdown menu. There you may select Mathematica and launch the task. After the application is available you can open and use Mathematica.

    Alternatively, you may request an OSC OnDemand desktop and load Mathematica with the command module load mathematica/13.2.1. Then you can run Mathematica by typing the command mathematica.

    Note that running the mathematica command from a login node shell will run Mathematica on the login node you are connected to. As the login node is a shared resource, running scripts that require significant computational resources will impact the usability of the cluster for others. As such, you should not use interactive Mathematica sessions on the login node for any significant computation. If your Mathematica script requires significant time, CPU power, or memory, you should run your code via the batch system (see the example batch script in the Owens section above).

    Running Mathematica jobs with GPU

    A GPU can be utilized for Mathematica. You can acquire a GPU for the job by

    #SBATCH --gpus-per-node=1

    for Owens or Pitzer. If running with an OnDemand desktop, select a GPU node to launch the desktop on. For more detail, please read here.

     

    For more information about GPU computing for Mathematica, please read GPU Computing from Wolfram.

    Further Reading

     

    Supercomputer: 
    Service: 

    Miniconda3

    Miniconda3 is a free minimal installer for conda. It is a small, bootstrap version of Anaconda that includes only conda, Python, the packages they depend on, and a small number of other useful packages, including pip, zlib and a few others.

    Availability and Restrictions

    Versions

    Miniconda3 is available on the Owens, Pitzer, and Ascend clusters. The versions currently available at OSC are:

    Version Owens Pitzer Ascend
    4.10.3     X*
    4.10.3-py37 X* X*  
    4.12.0-py38 X X  
    4.12.0-py39 X X  
    23.3.1-py310 X X  

    * Current Default Version

    You can use module spider miniconda3 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Miniconda3 is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Conda, Free use and redistribution under the terms of the EULA for Miniconda.

    Usage
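    A typical workflow is to load the module and create a personal conda environment (a minimal sketch; the environment name and package list are placeholders):

    module load miniconda3
    conda create -n my_env python=3.10 numpy
    source activate my_env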

    Supercomputer: 

    MotionCor2

    MotionCor2 uses multi-GPU acceleration to correct anisotropic cryo-electron microscopy images at the single pixel level across the whole frame, making it suitable for single particle and tomographic images. Iterative, patch-based motion detection is combined with spatial and temporal constraints and dose weighting.

    Availability and Restrictions

    Versions

    The following versions are available on OSC clusters:

    Version Pitzer
    1.4.4 X*
    * Current default version

    You can use module spider motioncor2 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    MotionCor2 is available to academic/non-profit OSC users. Please review the vendor's webpage and the attached license agreement below before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    University of California San Francisco, License Agreement attached below.

     

    Usage

     

    Usage on Pitzer

    Set-up

    To configure your environment for use of MotionCor2, run the following command: module load motioncor2. The default version will be loaded. To select a particular MotionCor2 version, use module load motioncor2/version. For example, use module load motioncor2/1.4.4 to load MotionCor2 1.4.4.

    Further Reading

    Documentation Attachment: 
    Supercomputer: 
    Service: 
    Technologies: 

    MuTect

    MuTect is a method developed at the Broad Institute for the reliable and accurate identification of somatic point mutations in next generation sequencing data of cancer genomes.

    Availability and Restrictions

    Versions

    The following versions of MuTect are available on OSC clusters:

    Version Owens
    1.1.7 X*
    * Current default version

    You can use  module spider mutect to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users

    MuTect is available to academic OSC users. Please review the license agreement carefully before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The Broad Institute, Inc./ Freeware (academic)

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of MuTect, run the following command: module load mutect. The default version will be loaded. To select a particular MuTect version, use module load mutect/version. For example, use module load mutect/1.1.7 to load MuTect 1.1.7.
     
    NOTE: Java 1.7.0 will also need to be loaded in order to use MuTect: module load java/1.7.0.

    Usage

    This software is provided as a Java executable .jar file; thus, it cannot simply be added to the PATH environment variable.

    From module load mutect, a new environment variable, MUTECT, will be set.

    Thus, users can use the software by running the following command: java -jar $MUTECT {other options}.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    NAMD

    NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD generally scales well on OSC platforms and offers a variety of modelling techniques. NAMD is file-compatible with AMBER, CHARMM, and X-PLOR.

    Availability and Restrictions

    Versions

    The following versions of NAMD are available:

    Version Owens Pitzer
    2.11 X  
    2.12 X  
    2.13b2   X
    2.13 X* X*
    * Current default version
    *  IMPORTANT NOTE: You need to load the correct compiler and MPI modules before you use NAMD. In order to find out which modules you need, use module spider namd/{version}.

    You can use  module spider namd  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    NAMD is available to all OSC users for academic purposes.

    Publisher/Vendor/Repository and License Type

    TCBG, University of Illinois/ Open source (academic)

    Usage

    Set-up

    To load the NAMD software on the system, use the following command: module load namd/"version"  where "version" is the version of NAMD you require. The following will load the default or latest version of NAMD:  module load namd .

    Using NAMD

    NAMD is rarely executed interactively because preparation for simulations is typically performed with external tools, such as VMD.

    Batch Usage

    Sample batch scripts and input files are available here:

    ~srb/workshops/compchem/namd/
    

    The simple batch script for Owens below demonstrates some important points. It requests 56 processors and 2 hours of walltime. If the job runs beyond 2 hours, it will be terminated.

    #!/bin/bash
    #SBATCH --job-name apoa1 
    #SBATCH --nodes=2 --ntasks-per-node=28
    #SBATCH --time=2:00:00
    #SBATCH --account=<project-account>
    
    module load intel/18.0.4
    module load mvapich2/2.3.6
    module load namd
    # SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
    for FILE in *
    do
        sbcast -p $FILE $TMPDIR/$FILE
    done
    # Use TMPDIR for best performance.
    cd $TMPDIR
    run_namd apoa1.namd
    sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output

    Or equivalently, on Pitzer:

    #!/bin/bash
    #SBATCH --job-name apoa1
    #SBATCH --nodes=2 --ntasks-per-node=48
    #SBATCH --time=2:00:00
    #SBATCH --account=<project-account>
    
    module load intel/18.0.4
    module load mvapich2/2.3.6
    module load namd
    # SLURM_SUBMIT_DIR refers to the directory from which the job was submitted.
    # the following loop assumes you have the necessary .namd, .pdb, .psf, and .xplor files
    # in the directory you are submitting the job from 
    for FILE in *
    do
        sbcast -p $FILE $TMPDIR/$FILE
    done
    # Use TMPDIR for best performance.
    cd $TMPDIR
    run_namd apoa1.namd
    sgather -pr $TMPDIR $SLURM_SUBMIT_DIR/output
    
    NOTE: ntasks-per-node should be a maximum of 28 on Owens and a maximum of 48 on Pitzer.

    GPU support

    We have GPU support with NAMD 2.12 on the Owens cluster. These builds temporarily use pre-compiled binaries due to installation issues. For more detail, please read the corresponding example script:

    ~srb/workshops/compchem/namd/apoa1.namd212nativecuda.owens.pbs  # for Owens
    

    Further Reading

    Supercomputer: 
    Service: 

    NBO

    The Natural Bond Orbital (NBO) program is a discovery tool for chemical insights from complex wavefunctions. NBO is a broad suite of 'natural' algorithms for optimally expressing numerical solutions of Schrödinger's wave equation in the chemically intuitive language of Lewis-like bonding patterns and associated resonance-type 'donor-acceptor' interactions.

    Availability and Restrictions

    Versions

    NBO is available on Owens. The versions currently available at OSC are

    Version Owens
    6.0 X*
    * Current default version

    You can use  module spider nbo to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users 

    NBO is available to non-commercial users; simply contact OSC Help to request the appropriate form for access. 

    Publisher/Vendor/Repository and License Type

    University of Wisconsin System on behalf of the Theoretical Chemistry Institute, Non-Commercial

    Usage

    Usage on Owens

    To set up your environment for NBO load one of its modulefiles:

    module load nbo/6.0
    

    For documentation corresponding to a loaded version, see $OSC_NBO_HOME/man/.  Below is an example batch script that uses the i8 executables of NBO 6.0.  This script specifies the Bash shell; for C type shells convert the export command to setenv syntax.  The i4 executables are also installed and may be required by some quantum chemistry packages, e.g., ORCA as of Oct 2019. You can find other example inputs in ~srb/workshops/compchem/, such as ~srb/workshops/compchem/gaussian/nbo6.*.

    #!/bin/bash
    # Example NBO 6.0 batch script.
    #SBATCH --job-name nbo-ch3nh2
    #SBATCH --mail-type=ALL,END
    #SBATCH --time=0:10:00
    #SBATCH --nodes=1 --ntasks-per-node=1
    #SBATCH --account <account>
    
    qstat -f $SLURM_JOB_ID
    export
    module load nbo/6.0
    module list
    cd $SLURM_SUBMIT_DIR
    pbsdcp --preserve ch3nh2.47 $TMPDIR
    cd $TMPDIR
    export NBOEXE=$OSC_NBO_HOME/bin/nbo6.i8.exe
    gennbo.i8.exe ch3nh2.47
    ls -l
    pbsdcp --preserve --gather --recursive '*' $SLURM_SUBMIT_DIR
    

    Further Reading

    Supercomputer: 
    Service: 

    NCL/NCARG

    NCAR Graphics is a Fortran and C based software package for scientific visualization.  NCL (The NCAR Command Language), is a free interpreted language designed specifically for scientific data processing and visualization. It is a product of the Computational & Information Systems Laboratory at the National Center for Atmospheric Research (NCAR) and sponsored by the National Science Foundation. NCL has robust file input and output: it can read and write netCDF-3, netCDF-4 classic, HDF4, binary, and ASCII data, and read HDF-EOS2, GRIB1, and GRIB2. The graphics are based on NCAR Graphics.

    Availability and Restrictions

    Versions

    NCL/NCAR Graphics is available on Pitzer and Owens Cluster. The versions currently available at OSC are:

    Version Owens Pitzer Notes
    6.3.0  X(GI)    
    6.5.0 X(GI) X(GI) netcdf-serial and hdf5-serial required for NCL
    6.6.2 X(GI)* X(GI)* netcdf-serial and hdf5-serial required for NCL
    * Current default version; G = available with gnu; I = available with intel

    You can use  module spider ncarg to view available NCL/NCAR Graphics modules. Feel free to contact OSC Help if you need other versions for your work.

    Access 

    NCL/NCAR Graphics is available for use by all OSC users.

    Publisher/Vendor/Repository and License Type

    University Corporation for Atmospheric Research, Open source

    Usage

    Usage on Owens

    Set-up on Owens

    To load the default version, use module load ncarg. To select a particular version, use module load ncarg/version. For example, use module load ncarg/6.3.0 to load NCARG version 6.3.0 on Owens.
    

    Batch Usage on Owens

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations for Owens and Scheduling Policies and Limits for more info. 
    Interactive Batch Session
    For an interactive batch session on Owens, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00
    
    which gives you one node and 28 cores (-N 1 -n 28) with 1 hour (-t 1:00:00). You may adjust the numbers per your need.
    Non-interactive Batch Job (Serial Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will use the input file named interp1d_1.ncl.
    Below is the example batch script job.txt for a serial run:
    #!/bin/bash
    #SBATCH --time=1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --job-name=job-name
    #SBATCH --account <project-account>
    
    module load ncarg
    cp interp1d_1.ncl $TMPDIR
    cd $TMPDIR
    ncl interp1d_1.ncl
    pbsdcp --gather --recursive --preserve '*' interp1d.ps $SLURM_SUBMIT_DIR
    

    In order to run it via the batch system, submit the job.txt file with the following command:

    sbatch job.txt
    

    Usage on Pitzer

    Set-up on Pitzer

    To load the default version, use:
    module load ncarg

    Batch Usage on Pitzer

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems.
    Interactive Batch Session
    For an interactive batch session on Pitzer, one can run the following command:
    sinteractive -A <project-account> -N 1 -n 48 -t 1:00:00
    
    which gives you 1 node (-N 1), 48 cores (-n 48), and 1 hour (-t 1:00:00). You may adjust the numbers per your need.
    Non-interactive Batch Job (Serial Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. The following example batch script file will use the input file named interp1d_1.ncl.
    Below is the example batch script job.txt for a serial run:
    #!/bin/bash
    #SBATCH --time=1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=48
    #SBATCH --job-name=jobname
    #SBATCH --account <project-account> 
    
    module load ncarg
    module load netcdf
    module load hdf5
    cp interp1d_1.ncl $TMPDIR
    cd $TMPDIR
    ncl interp1d_1.ncl
    pbsdcp --gather --recursive --preserve '*' $SLURM_SUBMIT_DIR
    

    In order to run it via the batch system, submit the job.txt file with the following command:

    sbatch job.txt

    Further Reading

    Official documentation can be obtained from NCL homepage.

    Supercomputer: 
    Service: 
    Fields of Science: 

    NWChem

    NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.

    Availability and Restrictions

    Versions

    The following versions of NWChem are available on OSC clusters:

    Version Owens Pitzer
    6.6 X  
    6.8 X X
    7.0 X* X*
    * Current default version

    You can use module spider nwchem to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    NWChem is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    EMSL, Pacific Northwest National Lab., Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of NWChem, run the following command: module load nwchem. The default version will be loaded. To select a particular NWChem version, use module load nwchem/version. For example, use module load nwchem/6.6 to load NWChem 6.6.

    Usage on Pitzer

    Set-up

    To configure your environment for use of NWChem, run the following command: module load nwchem. The default version will be loaded. 

    Further Reading

    Supercomputer: 
    Service: 

    Ncview

    Ncview is a visual browser for netCDF format files. Typically you would use ncview to get a quick and easy, push-button look at your netCDF files. You can view simple movies of the data, view along various dimensions, take a look at the actual data values, change color maps, invert the data, etc.

    Availability and Restrictions

    Versions

    The following versions of Ncview are available on OSC clusters:

    Version Owens Pitzer
    2.1.7 X* X*
    * Current default version

    You can use  module spider ncview to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Ncview is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    David W. Pierce, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Ncview, run the following command: module load ncview. The default version will be loaded. To select a particular Ncview version, use module load ncview/version. For example, use module load ncview/2.1.7 to load Ncview 2.1.7.

    Usage on Pitzer

    Set-up

    To configure your environment for use of Ncview, run the following command: module load ncview. The default version will be loaded. To select a particular Ncview version, use module load ncview/version. For example, use module load ncview/2.1.7 to load Ncview 2.1.7.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    NetCDF

    NetCDF (Network Common Data Form) is an interface for array-oriented data access and a library that provides an implementation of the interface. The netcdf library also defines a machine-independent format for representing scientific data. Together, the interface, library, and format support the creation, access, and sharing of scientific data.

    Availability and Restrictions

    Versions

    NetCDF is available on the Owens, Pitzer, and Ascend clusters. The versions currently available at OSC are:

    Version Owens Pitzer Ascend
    4.3.3.1 X    
    4.6.1 X X  
    4.6.2 X X  
    4.7.4 X* X*  
    4.8.1     X*
    * Current default version

    You can use module spider netcdf to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Additionally, the C++ interface version 4.3.0 and the Fortran interface version 4.4.2 are included in the netcdf/4.3.3.1 module.

    Access

    NetCDF is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    University Corporation for Atmospheric Research, Open source

    Usage

    Usage on Owens

    Set-up

    Initializing the system for use of NetCDF depends on the system and the compiler you are using. To load the default NetCDF, run the following command: module load netcdf. To use the parallel implementation of NetCDF, run the following command instead: module load pnetcdf. To load a particular version, use module load netcdf/version. For example, use module load netcdf/4.3.3.1 to load NetCDF version 4.3.3.1. You can use module spider netcdf to view available modules.

    Building With NetCDF

    With the netcdf library loaded, the following environment variables will be available for use:

    Variable Use
    $NETCDF_CFLAGS Use during your compilation step for C or C++ programs.
    $NETCDF_FFLAGS Use during your compilation step for Fortran programs.
    $NETCDF_LIBS Use when linking your program to NetCDF.

     

    Similarly, when the pnetcdf module is loaded, the following environment variables will be available:

    VARIABLE USE
    $PNETCDF_CFLAGS Use during your compilation step for C programs.
    $PNETCDF_FFLAGS Use during your compilation step for Fortran programs.
    $PNETCDF_LIBS Use when linking your program to NetCDF.
     

     

    For example, to build the code myprog.c with the netcdf library you would use:

    icc -c $NETCDF_CFLAGS myprog.c
    icc -o myprog myprog.o $NETCDF_LIBS
    

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Non-interactive Batch Job (Serial Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. You must load the netcdf or pnetcdf module in your batch script before executing a program which is built with the netcdf library. Below is the example batch script that executes a program built with NetCDF:
    #!/bin/bash
    #SBATCH --job-name=job-name
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --account <project-account>
    
    module load netcdf
    cp foo.dat $TMPDIR
    cd $TMPDIR
    appname < foo.dat > foo.out
    cp foo.out $SLURM_SUBMIT_DIR
    

    Usage on Pitzer

    Set-up

    Initializing the system for use of NetCDF depends on the system and the compiler you are using. To load the default NetCDF, run the following command: module load netcdf

    Building With NetCDF

    With the netcdf library loaded, the following environment variables will be available for use:

    VARIABLE USE
    $NETCDF_CFLAGS Use during your compilation step for C or C++ programs.
    $NETCDF_FFLAGS Use during your compilation step for Fortran programs.
    $NETCDF_LIBS Use when linking your program to NetCDF.

     

    Similarly, when the pnetcdf module is loaded, the following environment variables will be available:

    VARIABLE USE
    $PNETCDF_CFLAGS Use during your compilation step for C programs.
    $PNETCDF_FFLAGS Use during your compilation step for Fortran programs.
    $PNETCDF_LIBS Use when linking your program to NetCDF.
     

     

    For example, to build the code myprog.c with the netcdf library you would use:

    icc -c $NETCDF_CFLAGS myprog.c
    icc -o myprog myprog.o $NETCDF_LIBS

    Batch Usage

    When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Non-interactive Batch Job (Serial Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. You must load the netcdf or pnetcdf module in your batch script before executing a program which is built with the netcdf library. Below is the example batch script that executes a program built with NetCDF:
    #!/bin/bash 
    #SBATCH --job-name=job-name
    #SBATCH --nodes=1 --ntasks-per-node=48 
    #SBATCH --account <project-account> 
    
    module load netcdf 
    cp foo.dat $TMPDIR 
    cd $TMPDIR 
    appname < foo.dat > foo.out 
    cp foo.out $SLURM_SUBMIT_DIR
    

    Further Reading

    See Also

    Tag: 
    Supercomputer: 
    Service: 

    NetCDF-Serial

    NetCDF (Network Common Data Form) is an interface for array-oriented data access and a library that provides an implementation of the interface. The netcdf library also defines a machine-independent format for representing scientific data. Together, the interface, library, and format support the creation, access, and sharing of scientific data.

    For MPI-dependent codes, use the non-serial NetCDF module.

    Availability and Restrictions

    Versions

    NetCDF is available for serial code on the Pitzer and Owens Clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    4.3.3.1 X  
    4.6.1 X X
    4.6.2 X X
    4.7.4 X* X*
    4.8.1 X X
    * Current default version

    You can use module spider netcdf-serial to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Additionally, the C++ and Fortran interfaces for NetCDF are included. After loading a netcdf-serial module, you can check their versions with ncxx4-config --version and nf-config --version, respectively.

    Access

    NetCDF is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    University Corporation for Atmospheric Research, Open source

    Usage

    Usage on Owens

    Set-up

    Initializing the system for use of NetCDF depends on the system and compiler you are using. To load the default serial NetCDF module, run the following command: module load netcdf-serial. To load a particular version, use module load netcdf-serial/version. For example, use module load netcdf-serial/4.3.3.1 to load NetCDF version 4.3.3.1. You can use module spider netcdf-serial to view available modules.

    Building With NetCDF

    With the netcdf library loaded, the following environment variables will be available for use:

    Variable Use
    $NETCDF_CFLAGS Use during your compilation step for C or C++ programs.
    $NETCDF_FFLAGS Use during your compilation step for Fortran programs.
    $NETCDF_LIBS Use when linking your program to NetCDF.

     

    Similarly, when the pnetcdf module is loaded, the following environment variables will be available:

    VARIABLE USE
    $PNETCDF_CFLAGS Use during your compilation step for C programs.
    $PNETCDF_FFLAGS Use during your compilation step for Fortran programs.
    $PNETCDF_LIBS Use when linking your program to PnetCDF.

    For example, to build the code myprog.c with the netcdf library you would use:

    icc -c $NETCDF_CFLAGS myprog.c
    icc -o myprog myprog.o $NETCDF_LIBS
    

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Non-interactive Batch Job (Serial Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. You must load the netcdf-serial module in your batch script before executing a program built with the serial NetCDF library. Below is the example batch script that executes a program built with NetCDF:
    #!/bin/bash
    #SBATCH --job-name=AppNameJob
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --account <project-account>
    
    module load netcdf-serial
    cp foo.dat $TMPDIR
    cd $TMPDIR
    appname < foo.dat > foo.out
    cp foo.out $SLURM_SUBMIT_DIR
    

    Usage on Pitzer

    Set-up

    Initializing the system for use of NetCDF depends on the system and compiler you are using. To load the default serial NetCDF module, run the following command: module load netcdf-serial. To load a particular version, use module load netcdf-serial/version. For example, use module load netcdf-serial/4.6.2 to load NetCDF version 4.6.2. You can use module spider netcdf-serial to view available modules.

    Building With NetCDF

    With the netcdf library loaded, the following environment variables will be available for use:

    VARIABLE USE
    $NETCDF_CFLAGS Use during your compilation step for C or C++ programs.
    $NETCDF_FFLAGS Use during your compilation step for Fortran programs.
    $NETCDF_LIBS Use when linking your program to NetCDF.

     

    Similarly, when the pnetcdf module is loaded, the following environment variables will be available:

    VARIABLE USE
    $PNETCDF_CFLAGS Use during your compilation step for C programs.
    $PNETCDF_FFLAGS Use during your compilation step for Fortran programs.
    $PNETCDF_LIBS Use when linking your program to PnetCDF.

    For example, to build the code myprog.c with the netcdf library you would use:

    icc -c $NETCDF_CFLAGS myprog.c
    icc -o myprog myprog.o $NETCDF_LIBS

    Further Reading

    See Also

    Supercomputer: 
    Service: 

    Neuropointillist

    Neuropointillist is an in-development R package that provides functions to help scientists run voxel-wise models on neuroimaging data using R.

    Availability and Restrictions

    Versions

    The following versions are available on OSC clusters:

    Version Pitzer
    0.0.0.9000 X*
    * Current default version

    You can use module spider neuropointillist to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Neuropointillist is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Free and open source.

    MIT License

    Copyright (c) 2018 Tara Madhyastha

    Full license information available through LICENSE file in the software.

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of Neuropointillist, run the following command:  module load neuropointillist. The default version will be loaded. To select a particular version, use  module load neuropointillist/version. For example, use  module load neuropointillist/0.0.0.9000 to load Neuropointillist 0.0.0.9000.

    Neuropointillist is an R package, so you need to load the R module before you can use it in R.

    module load R/4.0.2-gnu9.1
    module load neuropointillist
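
    As a quick sanity check from the shell, you can confirm that the package is visible to R. This is only a sketch and assumes the module places neuropointillist on R's library path:

    Rscript -e 'library(neuropointillist)'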

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Nextflow

    Nextflow is a workflow system for creating scalable, portable, and reproducible workflows. Nextflow is based on the dataflow programming model which simplifies complex distributed pipelines.

    Availability and Restrictions

    Versions

    Nextflow is available on the Pitzer and Owens clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    20.07.1   X*
    20.10.0 X* X
    21.10.3 X X
    * Current Default Version

    You can use module spider nextflow to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Nextflow is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Developed by Seqera Labs and distributed under the Apache 2.0 license; open source

    Usage

    Usage on Owens

    Set-up

    To load the default Nextflow library, run the following command: module load nextflow. To load a particular version, use module load nextflow/version. For example, use module load nextflow/21.10.3 to load Nextflow version 21.10.3. You can use module spider nextflow to view available modules.

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 
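
    Below is a minimal sketch of a batch script that runs a Nextflow pipeline inside a single job allocation. Here main.nf is a hypothetical workflow file in your submission directory; by default the pipeline processes run locally within the resources requested by this job:

    #!/bin/bash
    #SBATCH --job-name=nextflow_test
    #SBATCH --nodes=1 --ntasks-per-node=4
    #SBATCH --time=01:00:00
    #SBATCH --account <project-account>

    module load nextflow
    cd $SLURM_SUBMIT_DIR
    nextflow run main.nf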

    Usage on Pitzer

    Set-up

    To load the default Nextflow library, run the following command: module load nextflow. To load a particular version, use module load nextflow/version. For example, use module load nextflow/21.10.3 to load Nextflow version 21.10.3. You can use module spider nextflow to view available modules.

    Batch Usage

    When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 

    Nodejs

    Nodejs is used to create server-side web applications. Its asynchronous, event-driven model makes it well suited to data-intensive applications.

    Availability and Restrictions

    Versions

    Nodejs is available on the Pitzer, Owens, and Ascend Clusters. The versions currently available at OSC are:

    Version Owens Pitzer Ascend
    14.17.3  X* X* X
    18.18.2 X X X
    * Current Default Version

    You can use module spider Nodejs to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Nodejs is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    OpenJS Foundation, Open source 

    Usage

    Usage on Owens

    Set-up

    To load the default Nodejs library, run the following command: module load nodejs. To load a particular version, use module load nodejs/version. For example, use module load nodejs/14.17.3 to load Nodejs version 14.17.3. You can use module spider nodejs to view available modules.

    Nodejs version 18.18.2 Usage

    Nodejs version 18.18.2 is containerized. To learn more about containers see: HOWTO: Use Docker and Apptainer/Singularity Containers at OSC

    To use nodejs/18.18.2 simply run:

    node

    or 

    apptainer exec $NODE_IMG node 

    Both of the above commands will also work with additional command line arguments such as node script.js and apptainer exec $NODE_IMG node script.js.

     

    If you need to use npm with node/18.18.2 then you will need to first open a shell in the container. To do so run:

    node_shell

    or 

    apptainer shell $NODE_IMG

    Now within this shell you can run node and npm.
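
    As a quick test, the sketch below creates a hypothetical one-line script and runs it through the containerized node wrapper:

    module load nodejs/18.18.2
    echo 'console.log("hello from node " + process.version);' > hello.js
    node hello.js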

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Usage on Pitzer

    Set-up

    To load the default Nodejs library, run the following command: module load nodejs. To load a particular version, use module load nodejs/version. For example, use module load nodejs/14.17.3 to load Nodejs version 14.17.3. You can use module spider nodejs to view available modules.

    Nodejs version 18.18.2 Usage

    Nodejs version 18.18.2 is containerized. To learn more about containers see: HOWTO: Use Docker and Apptainer/Singularity Containers at OSC

    To use nodejs/18.18.2 simply run:

    node

    or 

    apptainer exec $NODE_IMG node 

    Both of the above commands will also work with additional command line arguments such as node script.js and apptainer exec $NODE_IMG node script.js.

     

    If you need to use npm with node/18.18.2 then you will need to first open a shell in the container. To do so run:

    node_shell

    or 

    apptainer shell $NODE_IMG

    Now within this shell you can run node and npm.

    Batch Usage

    When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 

    OMB

    The OSU Micro-Benchmarks (OMB) are a collection of MPI performance tests that measure the latency, bandwidth, and other properties of MPI libraries.

    Availability and Restrictions

    Versions

    The following versions of OMB are available on OSC clusters:

    Version Owens Pitzer
    5.4.3 X* X*
    * Current default version

    You can use module spider omb to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    OMB is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The Ohio State University, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of OMB, run the following command: module load omb. The default version will be loaded. To select a particular OMB version, use module load omb/version.
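
    Below is a minimal sketch of a batch job that runs the point-to-point latency test between two nodes. osu_latency is one of the standard OMB executables; the sketch assumes that the omb module, together with any compiler/MPI modules reported by module spider omb, puts it on your PATH:

    #!/bin/bash
    #SBATCH --job-name=omb_latency
    #SBATCH --nodes=2 --ntasks-per-node=1
    #SBATCH --time=00:10:00
    #SBATCH --account <project-account>

    module load omb
    srun osu_latency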

    Usage on Pitzer

    Set-up

    To configure your environment for use of OMB, run the following command: module load omb. The default version will be loaded.

    Further Reading

    Supercomputer: 

    ORCA

    ORCA is an ab initio quantum chemistry program package that contains modern electronic structure methods including density functional theory, many-body perturbation, coupled cluster, multireference methods, and semi-empirical quantum chemistry methods. Its main field of application is larger molecules, transition metal complexes, and their spectroscopic properties. ORCA is developed in the research group of Frank Neese. Visit ORCA Forum for additional information.

    Availability and Restrictions

    Versions

    ORCA is available on the OSC clusters. These are the versions currently available:

    Version Owens Pitzer Notes
    4.0.1.2 X X openmpi/2.1.6-hpcx
    4.1.0 X X openmpi/3.1.4-hpcx
    4.1.1 X X openmpi/3.1.4-hpcx
    4.1.2 X X openmpi/3.1.4-hpcx
    4.2.1 X* X* openmpi/3.1.6-hpcx
    5.0.0 X X openmpi/5.0.2-hpcx
    5.0.2 X X openmpi/5.0.2-hpcx
    5.0.3 X X openmpi/5.0.2-hpcx
    5.0.4 X X openmpi/5.0.2-hpcx
    * Current default version. The notes indicate the MPI module likely to produce the best performance, but see the Known Issue below named "Bind to CORE".

    You can use module spider orca to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    ORCA is available to OSC academic users; users need to sign up at the ORCA Forum. You will receive a registration confirmation email from the ORCA management. Please contact OSC Help with the confirmation email for access.

    Publisher/Vendor/Repository and License Type

    ORCA, Academic (Computer Center)

    Usage

    Usage on Owens

    Set-up

    ORCA usage is controlled via modules. Load one of the ORCA modulefiles at the command line, in your shell initialization script, or in your batch scripts. To load the default version of the ORCA module, use module load orca. To select a particular software version, use module load orca/version. For example, use module load orca/4.1.0 to load ORCA version 4.1.0 on Owens. 

    IMPORTANT NOTE: You need to load the correct compiler and MPI modules before you use ORCA. In order to find out what modules you need, use module spider orca/{version}.

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system rather than the login node, which is desirable for big problems since more resources can be used.

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -N 1 -n 1 -t 00:20:00
    

    which requests one node with one core (-N 1 -n 1) for a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.

    Non-interactive Batch Job

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script for a parallel run:

    #!/bin/bash
    #SBATCH --job-name orca_mpi_test
    #SBATCH --time=0:10:0
    #SBATCH --nodes=2 --ntasks-per-node=28
    #SBATCH --account <project-account>
    #SBATCH --gres=pfsdir
    
    module reset
    module load openmpi/3.1.6-hpcx
    module load orca/4.2.1
    module list
    
    cp  h2o_b3lyp_mpi.inp $PFSDIR/h2o_b3lyp_mpi.inp
    cd $PFSDIR
    
    $ORCA/orca h2o_b3lyp_mpi.inp > h2o_b3lyp_mpi.out
    ls
    
    cp h2o_b3lyp_mpi.out $SLURM_SUBMIT_DIR/h2o_b3lyp_mpi.out
    

     

    Usage on Pitzer

    Set-up

    ORCA usage is controlled via modules. Load one of the ORCA modulefiles at the command line, in your shell initialization script, or in your batch scripts. To load the default version of the ORCA module, use module load orca. To select a particular software version, use module load orca/version. For example, use module load orca/4.1.0 to load ORCA version 4.1.0 on Pitzer. 

    IMPORTANT NOTE: You need to load the correct compiler and MPI modules before you use ORCA. In order to find out what modules you need, use module spider orca/{version}.

    Batch Usage

    When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system rather than the login node, which is desirable for big problems since more resources can be used.

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -N 1 -n 1 -t 00:20:00
    

    which requests one node with one core (-N 1 -n 1), for a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.

    Non-interactive Batch Job

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script for a parallel run:

    #!/bin/bash
    #SBATCH --job-name orca_mpi_test
    #SBATCH --time=0:10:0
    #SBATCH --nodes=2 --ntasks-per-node=40
    #SBATCH --account <project-account>
    #SBATCH --gres=pfsdir
    
    module reset
    module load openmpi/3.1.6-hpcx
    module load orca/4.2.1
    module list
    
    cp  h2o_b3lyp_mpi.inp $PFSDIR/h2o_b3lyp_mpi.inp
    cd $PFSDIR
    
    $ORCA/orca h2o_b3lyp_mpi.inp > h2o_b3lyp_mpi.out
    ls
    
    cp $PFSDIR/h2o_b3lyp_mpi.out $SLURM_SUBMIT_DIR/h2o_b3lyp_mpi.out
    

    Known Issues

    Multi-node job hang 

    Resolution: Resolved
    Update: 03/13/2024
    Version: 5.0.x

    You may experience a multi-node job hang if the job runs an ORCA module that requires heavy I/O, e.g., CCSD. This can also lead to GPFS performance problems on our systems. We have identified the issue as related to an MPI I/O bug in OpenMPI 4.1. To remedy this, we will take the following steps:

    On April 15, 2024, we will deprecate all ORCA 5.0.x modules installed under OpenMPI 4.1.x. It is highly recommended to switch to orca/5.0.4 under openmpi/5.0.2-hpcx with intel/19.0.5 or intel/2021.10.0. If you need another ORCA version, please inform us.

    Intermittent failure of default CPU binding

    Name: Bind to CORE
    Resolution: Resolved (workaround)
    Update: 4/27/2023
    Version: At least through 5.0.4

    The default CPU binding for ORCA jobs can fail sporadically.  The failure is almost immediate and produces a cryptic error message, e.g.

    $ORCA/orca h2o.in
    .
    .
    .
    --------------------------------------------------------------------------
    A request was made to bind to that would result in binding more
    processes than cpus on a resource:
    
    Bind to: CORE
    Node: o0033
    #processes: 2
    #cpus: 1
    
    You can override this protection by adding the "overload-allowed"
    option to your binding directive.
    --------------------------------------------------------------------------
    .
    .
    .
    [file orca_tools/qcmsg.cpp, line 465]:
    .... aborting the run 
    

    Three workarounds are known.  Invoke ORCA without CPU binding:

    $ORCA/orca h2o.in "--bind-to none"

    Use a non hpcx MPI module with ORCA:

    module load openmpi/4.1.2-tcp orca/5.0.4
    $ORCA/orca h2o.in

    Use more Slurm ntasks relative to ORCA nprocs; this does not prevent the failure but merely reduces its likelihood:

    #SBATCH --ntasks=10
    cat << EOF > h2o.in
    %pal
      nprocs 5
    end
    .
    .
    .
    EOF
    $ORCA/orca h2o.in

    Note that each workaround can have performance side effects, and the last workaround can have direct charging consequences.  We recommend that users benchmark their jobs to gauge the most desirable approach.

    Immediate failure of MPI job

    Resolution: Resolved
    Update: 10/24/2022
    Version: 4.1.2, 4.2.1, 5.0.0 and above

    If your MPI job fails immediately, please remove all extra mpirun parameters from the command line, e.g., change

    $ORCA/orca h2o_b3lyp_mpi.inp "--machinefile $PBS_NODEFILE"  > h2o_b3lyp_mpi.out

    to

    $ORCA/orca h2o_b3lyp_mpi.inp > h2o_b3lyp_mpi.out
    

    We found a bug involving ORCA and OpenMPI after a recent Slurm update that caused multi-node MPI jobs to fail immediately. Because the OpenMPI community does not always keep up with Slurm, we have decided to make a permanent change and replace the mpirun launcher used by ORCA with srun.

    ORCA 4.1.0 issue with scratch filesystem

    Resolution: Resolved
    Update: 04/17/2019 
    Version: 4.1.0

    For an MPI job that requests multiple nodes, the job can be run from a globally accessible working directory, e.g., the home or scratch directories. This is useful if one needs more space for temporary files. However, ORCA 4.1.0 CANNOT run a job on our scratch filesystem. The issue has been reported on the ORCA forum and has been resolved in ORCA 4.1.2. In the examples listed above, scratch storage was used (--gres=pfsdir and $PFSDIR).

    Further Reading

    User manual is available from the ORCA Forum
    Job submission information is available from the Batch Submission Guide
    Scratch Storage information is available from the Storage Documentation

     

    Supercomputer: 
    Service: 

    Octave

    Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command-line interface for solving linear and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with Matlab. It may also be used as a batch-oriented language.

    Octave has extensive tools for solving common numerical linear algebra problems, finding the roots of nonlinear equations, integrating ordinary functions, manipulating polynomials, and integrating ordinary differential and differential-algebraic equations. It is easily extensible and customizable via user-defined functions written in Octave's own language, or using dynamically loaded modules written in C++, C, Fortran, or other languages.

    Availability and Restrictions

    Versions

    The following versions of Octave are available on Owens Clusters:

    Version Owens
    4.0.3 X*
    * Current default version

    You can use module spider octave to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Octave is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    https://www.gnu.org/software/octave/, Open source

    Usage

    Set-up

    To initialize Octave, run the following command:

    module load octave

    Using Octave

    To run Octave, simply run the following command:

    octave

    Batch Usage

    The following example batch script will run an Octave code file, mycode.m, via the batch processing system. The script requests one full node of cores on Owens and 1 hour of walltime.

    #!/bin/bash
    #SBATCH --job-name=AppNameJob
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --time=01:00:00
    #SBATCH --account <project-account>
    
    module load octave
    cp mycode.m $TMPDIR
    cd $TMPDIR
    
    octave < mycode.m > data.out
    
    cp data.out $SLURM_SUBMIT_DIR
    

    Working with Packages

    See the Octave 4.0.1 documentation on working with packages.

    To install a package, launch an Octave session and type the pkg list command to see if there are any packages within your user scope.  There is an issue where global packages may not be seen by particular Octave versions.  To see the location of the global packages file use the command pkg global_list.

    If you are having trouble installing your own packages, you can use the system-wide packages. Due to issues with system-wide installation, you will need to copy the system-wide package installation file to your local package installation file with cp $OCTAVE_PACKAGES $HOME/.octave_packages.

    Then, via pkg list, you should see packages that you can load. Note that this approach is not portable and needs to be repeated within a job script if you are using packages across multiple clusters.
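
    The sketch below shows these steps non-interactively from the shell; it only assumes the octave module defines $OCTAVE_PACKAGES as described above:

    module load octave
    cp $OCTAVE_PACKAGES $HOME/.octave_packages
    octave --no-gui --eval "pkg list"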

    Further Reading

    See Also

    Supercomputer: 
    Service: 

    OpenACC

    OpenACC is a standard for parallel programming on accelerators, such as Nvidia GPUs and Intel Phi. It consists primarily of a set of compiler directives for executing code on the accelerator, in C and Fortran. OpenACC is currently only supported by the PGI compilers installed on OSC systems.

    Availability and Restrictions

    OpenACC is available to all OSC users. It is supported by the PGI compilers. If you have any questions, please contact OSC Help.

    Usage

    Set-up

    OpenACC support is built into the compilers. There is no separate module to load.

    Building With OpenACC

    To build a program with OpenACC, use the compiler flag appropriate to your compiler. The correct libraries are included implicitly.

    Compiler Family Flag
    PGI -acc -ta=nvidia -Minfo=accel
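
    For example, the sketch below compiles a hypothetical C source file saxpy.c containing OpenACC directives with the PGI compiler:

    module load pgi
    pgcc -acc -ta=nvidia -Minfo=accel -o saxpy_acc saxpy.c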

    Batch Usage

    An OpenACC program will not run without an accelerator present. You need to ensure that your batch resource request includes GPUs. For example, to run an OpenACC program on Owens, your resource request should look something like this: #SBATCH --nodes=1 --ntasks-per-node=28 --gpus-per-node=1.

    Further Reading

    See Also

    Service: 
    Fields of Science: 

    OpenCV

    OpenCV is an open-source library that includes several hundred computer vision algorithms.

    Availability and Restrictions

    Versions

    Version Ascend Pitzer Owens Notes
    2.4.5   X# X#  
    3.4.6 X#      
    4.5.4   X*    
    4.6.0   X    
    * Current default version; # System version

    You can use module spider opencv to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    OpenCV is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    OpenCV versions after 4.5.0 fall under the Apache 2 license. Full details are available here.

    Usage

    Legacy usage

    Set-up

    The legacy system version does not need to be loaded. Keep in mind that it is several years old and, in general, should be used with other tools from the same era, e.g., the system compiler version, which can be selected for your environment via module load gnu/4.8.5 on Owens and Pitzer.

    Usage on Pitzer

    Set-up on Pitzer

    To load the default version of the OpenCV module, which initializes your environment for the non-legacy OpenCV, use module load opencv. To select a particular OpenCV version, use module load opencv/version. For example, use module load opencv/4.5.4 to load OpenCV 4.5.4.

    In general, users should employ the helper variables defined by an OpenCV module, e.g., module load gnu/9.1.0 cuda/11.2.2 opencv/4.5.4; g++ $OPENCV_INCLUDE $OPENCV_LIB <your source files>. A complete example is available; for its location and other installation details, see the output of module spider opencv/4.5.4.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    OpenFOAM

    OpenFOAM is a suite of computational fluid dynamics applications. It contains myriad solvers, both compressible and incompressible, as well as many utilities and libraries.

    Availability and Restrictions

    Versions

    The following versions of OpenFOAM are available on OSC clusters:

    Version Owens Pitzer
    4.1 X  
    5.0 X X
    7.0 X* X*
    1906   X
    1912   X
    2306 X X
    * Current default version

    The location of OpenFOAM may depend on the compiler/MPI software stack; in that case, use one or both of the following commands (adjusting the version number) to learn how to load the appropriate modules:

    module spider openfoam
    module spider openfoam/2306
    

    Feel free to contact OSC Help if you need other versions for your work.

    Access 

    OpenFOAM is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    OpenFOAM Foundation, Open source

    Basic Structure for an OpenFOAM Case

    The basic directory structure for an OpenFOAM case is:

    /home/yourusername/OpenFOAM_case
    |-- 0
    |   |-- U
    |   |-- epsilon
    |   |-- k
    |   |-- p
    |   `-- nut
    |-- constant
    |   |-- RASProperties
    |   |-- polyMesh
    |   |   |-- blockMeshDict
    |   |   `-- boundary
    |   |-- transportProperties
    |   `-- turbulenceProperties
    `-- system
        |-- controlDict
        |-- fvSchemes
        |-- fvSolution
        `-- snappyHexMeshDict

    IMPORTANT: To run in parallel, you need to also create the decomposeParDict file in the system directory. If you do not create this file, the decomposePar command will fail.

    Usage

    Usage on Owens

    Setup on Owens

    To configure the Owens cluster for the use of OpenFOAM 4.1, use the following commands:
    module load openmpi/1.10-hpcx # currently only 4.1 is installed using OpenMPI libraries
    module load openfoam/4.1
    

    Batch Usage on Owens

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems.

    On Owens, refer to Queues and Reservations for Owens and Scheduling Policies and Limits for more info. 

    Interactive Batch Session

    For an interactive batch session on Owens, one can run the following command:

    sinteractive -A <project-account> -N 1 -n 28 -t 1:00:00
    

    which gives you 1 node (-N 1) and 28 cores (-n 28) for 1 hour (-t 1:00:00). You may adjust the numbers per your need. 

    Non-interactive Batch Job (Serial Run)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script (job.txt) for a serial run:

    #!/bin/bash
    #SBATCH --job-name serial_OpenFOAM 
    #SBATCH --nodes=1 --ntasks-per-node=1 
    #SBATCH --time 24:00:00
    #SBATCH --account <project-account>
    
    # Initialize OpenFOAM on Owens Cluster
    module load openmpi/1.10-hpcx
    module load openfoam
    
    # Copy files to $TMPDIR and move there to execute the program
    cp * $TMPDIR
    cd $TMPDIR
    # Mesh the geometry
    blockMesh
    # Run the solver
    icoFoam
    # Finally, copy files back to your home directory
    cp * $SLURM_SUBMIT_DIR
    

    To run it via the batch system, submit the job.txt file with the following command:

    sbatch job.txt
    
    Non-interactive Batch Job (Parallel Run)

    Below is the example batch script (job.txt) for a parallel run:

    #!/bin/bash
    #SBATCH --job-name parallel_OpenFOAM 
    #SBATCH --nodes=2 --ntasks-per-node=28
    #SBATCH --time=6:00:00
    #SBATCH --account <project-account>
    
    # Initialize OpenFOAM on Owens Cluster
    module load openmpi/1.10-hpcx 
    module load openfoam/4.1 
    
    # Mesh the geometry
    blockMesh
    # Decompose the mesh for parallel run
    decomposePar
    # Run the solver
    mpiexec simpleFoam -parallel 
    # Reconstruct the parallel results
    reconstructPar

    Usage on Pitzer

    Setup on Pitzer

    To configure the Pitzer cluster for the use of OpenFOAM 5.0, use the following commands:
    module load openmpi/3.1.0-hpcx # currently only 5.0 is installed using OpenMPI libraries
    module load openfoam/5.0
    

    Batch Usage on Pitzer

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems.

    On Pitzer, refer to Queues and Reservations for Pitzer and Scheduling Policies and Limits for more info. 

    Interactive Batch Session

    For an interactive batch session on Pitzer, one can run the following command:

    sinteractive -A <project-account> -N 1 -n 40 -t 1:00:00
    

    which gives you 1 node (-N 1), 40 cores (-n 40) with 1 hour (-t 1:00:00). You may adjust the numbers per your need. 

    Non-interactive Batch Job (Serial Run)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. Below is the example batch script (job.txt) for a serial run:

    #!/bin/bash
    #SBATCH --job-name serial_OpenFOAM 
    #SBATCH --nodes=1 --ntasks-per-node=1
    #SBATCH --time 24:00:00 
    #SBATCH --account <project-account>
    
    # Initialize OpenFOAM on Pitzer Cluster
    module load openmpi/3.1.0-hpcx
    module load openfoam
    
    # Copy files to $TMPDIR and move there to execute the program
    cp * $TMPDIR
    cd $TMPDIR
    # Mesh the geometry
    blockMesh
    # Run the solver
    icoFoam
    # Finally, copy files back to your home directory
    cp * $SLURM_SUBMIT_DIR
    

    To run it via the batch system, submit the job.txt file with the following command:

    sbatch job.txt
    
    Non-interactive Batch Job (Parallel Run)

    Below is the example batch script (job.txt) for a parallel run:

    #!/bin/bash
    #SBATCH --job-name parallel_OpenFOAM
    #SBATCH --nodes=2 --ntasks-per-node=40
    #SBATCH --time=6:00:00
    #SBATCH --account <project-account>
    
    # Initialize OpenFOAM on Pitzer Cluster
    module load openmpi/3.1.0-hpcx 
    module load openfoam/5.0
    
    # Mesh the geometry
    blockMesh
    # Decompose the mesh for parallel run
    decomposePar
    # Run the solver
    mpiexec simpleFoam -parallel 
    # Reconstruct the parallel results
    reconstructPar

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    OpenMP

    OpenMP is a standard for parallel programming on shared-memory systems, including multicore systems. It consists primarily of a set of compiler directives for sharing work among multiple threads. OpenMP is supported by all the Fortran, C, and C++ compilers installed on OSC systems.

    Availability and Restrictions

    OpenMP is available to all OSC users. It is supported by the Intel, PGI, and gnu compilers. If you have any questions, please contact OSC Help.

    Usage

    Set-up

    OpenMP support is built into the compilers. There is no separate module to load.

    Building With OpenMP

    To build a program with OpenMP, use the compiler flag appropriate to your compiler. The correct libraries are included implicitly.

    Compiler Family Flag
    Intel -qopenmp or  -openmp
    gnu -fopenmp
    PGI -mp

    Batch Usage

    An OpenMP program by default will use a number of threads equal to the number of processor cores available. To use a different number of threads, set the environment variable OMP_NUM_THREADS.
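
    The sketch below builds a hypothetical OpenMP C source file hello_omp.c with the Intel compiler and runs it with 28 threads:

    module load intel
    icc -qopenmp -o hello_omp hello_omp.c
    export OMP_NUM_THREADS=28
    ./hello_omp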

    Further Reading

    See Also

    Service: 
    Fields of Science: 

    OpenMPI

    MPI is a standard library for performing parallel processing using a distributed-memory model. The Owens, Pitzer, and Ascend clusters at OSC can use the OpenMPI implementation of the Message Passing Interface (MPI).

    Availability and Restrictions

    Versions

    Installations are available for the Intel, PGI, and GNU compilers. The following versions of OpenMPI are available on OSC systems:

    Version Owens Pitzer Ascend Notes
    1.10.7-hpcx X X    
    1.10.7 X X    
    2.1.6-hpcx X X    
    2.1.6 X X    
    3.1.4-hpcx X X    
    3.1.4 X X    
    3.1.6-hpcx X X    
    3.1.6     X HPC-X version**
    4.0.3-hpcx X* X*    
    4.0.3 X X    
    4.0.7-hpcx X      
    4.1.2-hpcx X X    
    4.1.3     X* HPC-X version**
    4.1.4-hpcx X X    
    4.1.5/4.1.5-hpcx X X X HPC-X version**
    5.0.2-hpcx   X    
    * Current default version
    ** The HPCX version is OpenMPI built with communication libraries from NVIDIA HPC-X for optimized performance. 

    You can use module spider openmpi to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    OpenMPI is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    https://www.open-mpi.org, Open source

    Usage

    Setup on OSC Clusters

    To set up your environment for using the MPI libraries, you must load the appropriate module. On any OSC system, this is performed by:

    module load openmpi
    

    You will get the default version for the compiler you have loaded.

    Building With MPI

    To build a program that uses MPI, you should use the compiler wrappers provided on the system. They accept the same options as the underlying compiler. The commands are shown in the following table:

    C mpicc
    C++ mpicxx
    FORTRAN 77 mpif77
    Fortran 90 mpif90

    For example, to build the code my_prog.c using the -O2 option, you would use:

    mpicc -o my_prog -O2 my_prog.c
    

    In rare cases, you may be unable to use the wrappers. In that case, you should use the environment variables set by the module.

    Variable Use
    $MPI_CFLAGS Use during your compilation step for C programs.
    $MPI_CXXFLAGS Use during your compilation step for C++ programs.
    $MPI_FFLAGS Use during your compilation step for Fortran 77 programs.
    $MPI_F90FLAGS Use during your compilation step for Fortran 90 programs.

    Batch Usage

    Programs built with MPI can only run in the batch environment at OSC. For information on starting MPI programs using the command srun see Job Scripts.

    Be sure to load the same compiler and OpenMPI modules at execution time as at build time.

    Run a MPI program 

    SRUN

    We recommend the command srun as the default MPI launcher. Please refer to Pitzer Programming Environment or Owens Programming Environment for detail.
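
    Below is a minimal sketch of a batch script that launches an MPI program with srun. Here my_prog is a hypothetical executable built with mpicc as shown above, and the same compiler and OpenMPI modules used at build time are loaded:

    #!/bin/bash
    #SBATCH --job-name=mpi_example
    #SBATCH --nodes=2 --ntasks-per-node=28
    #SBATCH --time=00:10:00
    #SBATCH --account <project-account>

    module load intel openmpi
    srun ./my_prog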

    Known Issues

    OpenMPI-HPCX 4.1.x hangs on writing files on a shared file system 

    Resolution: Resolved (workaround)
    Update: 03/06/2024
    Version: All 4.1.x-hpcx versions

    Your job utilizing openmpi/4.1.x-hpcx (or 4.1.x on Ascend) might hang while writing files on a shared file system. This issue is caused by a bug stemming from the default OMPIO I/O module and UCX library. We have identified ORCA as being affected by this problem. If you are experiencing this issue, please consider the following solutions:

    • Change the I/O module to ROMIO by adding export OMPI_MCA_io=romio321 to your job script.
    • Switch to OpenMPI 5. You can check for available OpenMPI 5 modules via module spider openmpi/5.

    The use of MPI_THREAD_MULTIPLE with OpenMPI-HPCX 4.x is not supported

    Resolution: Resolved (workaround)
    Update: 7/10/2023
    Version: [Owens] openmpi/4.0.3-hpcx, openmpi/4.1.2-hpcx, openmpi/4.1.4-hpcx
                  [Ascend] openmpi/4.1.3

    If a threading code uses MPI_Init_thread with MPI_THREAD_MULTIPLE, it will fail because the UCX framework from the HPCX package is built without multi-threading support. UCX is the default framework for OMPI 4.0 and above. 

    If you encounter this issue, you can now use "openmpi/4.0.7-hpcx" and "openmpi/4.1.5-hpcx" on Owens, and "openmpi/4.1.5" on Ascend. These versions are built with multi-threading UCX.

    Cannot use mpiexec/mpirun from OpenMPI in an interactive session

    Resolution: Resolved (workaround)
    Update: 2/22/2022
    Version: All

    The mpiexec and mpirun commands are not part of the MPI standard and may differ slightly between MPI implementations. On February 22, 2022, OSC upgraded Slurm to version 21.08.5, and we discovered additional issues with mpiexec and mpirun. Therefore, we recommend using srun in all cases.

    If you need to use mpiexec and your job fails, please contact OSC Help for assistance.

    Further Reading

    See Also

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    PAPI

    PAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events.

    This software will be of interest only to HPC experts.

    Availability and Restrictions

    Versions

    PAPI is available on the Pitzer and Owens Clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    5.6.0 X* X*
    * Current default version

    You can use module spider papi to view available modules for a given machine. For now, PAPI is available only with the Intel and gnu compilers. Feel free to contact OSC Help if you need other versions for your work.

    Access

    PAPI is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Innovative Computing Laboratory, University of Tennessee, Open source

    Usage

    Usage on Owens

    Set-up

    Since PAPI version 5.2.0 is a System Install, no module is needed to run the application. To load a different version of the PAPI library, run the following command: module load papi. To load a particular version, use module load papi/version. For example, use  module load papi/5.6.0 to load PAPI version 5.6.0. You can use module spider papi to view available modules.

    Building With PAPI

    To build the code myprog.c with the PAPI 5.2.0 library you would use:

    gcc -c myprog.c -lpapi
    gcc -o myprog myprog.o -lpapi
    

    For other versions, the PAPI library provides the following variables for use at build time:

    VARIABLE USE
    $PAPI_CFLAGS Use during your compilation step for C/C++ programs
    $PAPI_FFLAGS Use during your compilation step for FORTRAN programs 
    $PAPI_LIB Use when linking your program to PAPI

    For example, to build the code myprog.c with the PAPI version 5.6.0 library you would use:

    module load papi
    gcc -c myprog.c $PAPI_CFLAGS
    gcc -o myprog myprog.o $PAPI_LIB
    

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. 

    Usage on Pitzer

    Set-up

    Since PAPI version 5.2.0 is a System Install, no module is needed to run the application. To load a different version of the PAPI library, run the following command: module load papi.

    Building With PAPI

    To build the code myprog.c with the PAPI 5.2.0 library you would use:

    gcc -c myprog.c -lpapi
    gcc -o myprog myprog.o -lpapi
    

    For other versions, the PAPI library provides the following variables for use at build time:

    VARIABLE USE
    $PAPI_CFLAGS Use during your compilation step for C/C++ programs
    $PAPI_FFLAGS Use during your compilation step for FORTRAN programs 
    $PAPI_LIB Use when linking your program to PAPI

    For example, to build the code myprog.c with the PAPI version 5.6.0 library you would use:

    module load papi
    gcc -c myprog.c $PAPI_CFLAGS
    gcc -o myprog myprog.o $PAPI_LIB
    

    Batch Usage

    When you log into pitzer.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

    Further Reading

    Supercomputer: 

    PETSc

    PETSc is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It supports MPI, and GPUs through CUDA or OpenCL, as well as hybrid MPI-GPU parallelism.

    Availability and Restrictions

    Versions

    PETSc is available on Owens and Pitzer Clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    3.12.5 X X
    3.14.6 X* X*
    3.19.3 X X

     

    The available libraries include f2cblaslapack, hypre, metis, mumps, parmetis, ptso, scalapack, and superlu for all installed versions. Some installed versions include additional libraries. You can use module spider petsc and module spider petsc/version to view available modules, supported libraries, and dependent programming environments for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    PETSc is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    UChicago Argonne, LLC and the PETSc Development Team, 2-clause BSD

    Usage

    Usage on Owens

    Set-up

    Initializing the system for use of the PETSc library depends on the system and compiler you are using. A successful build of your program will depend on an understanding of what module fits your circumstances. To load a particular version, use module load petsc/version. For example, use module load petsc/3.12.5 to load PETSc version 3.12.5. You can use module spider petsc to view available modules.
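
    The module determines which compiler and MPI environment you must have loaded as well as where the PETSc headers and libraries live. The sketch below shows how to inspect and use them; the $PETSC variable, the ex1.c source file, and the include/library paths are assumptions, so substitute whatever module show actually reports:

    module load petsc/3.14.6
    module show petsc/3.14.6    # lists the environment variables the module defines
    mpicc -o ex1 ex1.c -I$PETSC/include -L$PETSC/lib -lpetsc   # ex1.c and $PETSC are hypothetical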

    Usage on Pitzer

    Set-up

    Initializing the system for use of the PETSc library depends on the system and compiler you are using. A successful build of your program will depend on an understanding of what module fits your circumstances. To load a particular version, use module load petsc/version. For example, use module load petsc/3.12.5 to load PETSc version 3.12.5. You can use module spider petsc to view available modules.

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 

    PGI Compilers

    Fortran, C, and C++ compilers provided by the Portland Group.

    Availability and Restrictions

    PGI compilers are available on the Owens and Pitzer Clusters. Here are the versions currently available at OSC:

    Version Owens Pitzer Notes
    16.5.0 X    
    17.3.0 X    
    17.10.0 X    
    18.4 X X  
    20.1 X* X*  
    * : Current Default Version

    You can use module spider pgi to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    The PGI Compilers are available to all OSC users. If you would like to install the PGI compilers on your local computer, you may use the free PGI Community Edition of the compilers, which is available for academic users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Nvidia, Commercial

    Known Software Issues

    GNU Compatibility and Interoperability

    PGI compilers use the GNU tools on the clusters:  header files, libraries, and linker.  We call this PGI and GNU compatibility and interoperability in analogy with the Intel compilers' terminology.  Many users will not have to change this.  On OSC clusters the only mechanism of control is based on modules.  The most noticeable aspect of interoperability is that some parts of some C++ standards are available by default in various versions of the PGI compilers; other parts require you to load an extra module. For complete support of the C++11 and later standards with the PGI 20.1 and later compilers do this after the PGI compiler module is loaded:

    module load pgi-gcc-compatibility
    

    A symptom of broken compatibility is unusual or non sequitur compiler errors typically involving the C++ standard library especially with respect to templates, for example:

        In function `...':  undefined reference to `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >:: ...'
    

    pgi/20.1: LLVM back-end for code generation

    Modern versions of the PGI compilers (version 19.1 and later) switched to using a LLVM-based back-end for code generation, instead of the PGI-proprietary code generator. For most users, this should not be a noticeable change. If you understand the change and need to use the PGI-proprietary back-end, you can use the -Mnollvm flag with the PGI compilers.

    pgi/20.1: disabling memory registration

    You may have a warning message when you run a MPI job with pgi/20.1 and mvapich2/2.3.3:

    WARNING: Error in initializing MVAPICH2 ptmalloc library.Continuing without InfiniBand registration cache support.

    Please read about the impact of disabling memory registration cache on application performance in the Mvapich2 2.3.3 user guide

    Note that pgi/20.1 works without the warning message with mvapich2/2.3.4.

    Usage on Owens

    Set-up

    To configure your environment for the default version of the PGI Compilers, use module load pgi. To configure your environment for a particular PGI compiler version, use module load pgi/version. For example, use  module load pgi/16.5.0 to load the PGI compiler version 16.5.0.

    Using the PGI Compilers

    Once the module is loaded, compiling with the PGI compilers requires understanding which binary should be used for which type of code. Specifically, use the pgcc binary for C codes, the pgc++ binary for C++ codes, the  pgf77 for Fortran 77 codes, and the pgf90 for Fortran 90 codes. Note that for PGI compilers version 20.1 and greater, the pgf77 binary is no longer provided; please use pgfortran for Fortran codes instead.
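
    For example, the sketch below compiles hypothetical C and Fortran source files with the PGI compilers:

    module load pgi
    pgcc -O2 -o hello_c hello.c
    pgfortran -O2 -o hello_f hello.f90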

    See our compilation guide for a more detailed breakdown of the compilers.

    Building with the PGI Compilers

    The PGI compilers recognize the following command line options (this list is not exhaustive, for more information run man <compiler binary name>). In particular, if you are using a PGI compiler version 19.1 or later and need the PGI-proprietary back-end, then you can use the -Mnollvm flag (see the note at the top of this Usage section).

    COMPILER OPTION PURPOSE
    -c Compile into object code only; do not link
    -DMACRO[=value] Defines preprocessor macro MACRO with optional value (default value is 1)
    -g Enables debugging; disables optimization
    -I/directory/name Add /directory/name to the list of directories to be searched for #include files
    -L/directory/name Adds /directory/name to the list of directories to be searched for library files
    -lname Adds the library libname.a or libname.so to the list of libraries to be linked
    -o outfile Names the resulting executable outfile instead of a.out
    -UMACRO Removes definition of MACRO from preprocessor
    -O0 Disable optimization; default if -g is specified
    -O1 Light optimization; default if -g is not specified
    -O or -O2 Heavy optimization
    -O3 Aggressive optimization; may change numerical results
    -M[no]llvm Explicitly selects for the back-end between LLVM-based and PGI-proprietary code generation; only for versions 19.1 and greater; default is -Mllvm
    -Mipa Inline function expansion for calls to procedures defined in separate files; implies -O2
    -Munroll Loop unrolling; implies -O2
    -Mconcur Automatic parallelization; implies -O2
    -mp Enables translation of OpenMP directives

    Usage on Pitzer

    Set-up

    To configure your environment for the default version of the PGI Compilers, use module load pgi. To configure your environment for a particular PGI compiler version, use module load pgi/version. For example, use module load pgi/18.4 to load the PGI compiler version 18.4.

    Using the PGI Compilers

    Once the module is loaded, compiling with the PGI compilers requires understanding which binary should be used for which type of code. Specifically, use the pgcc binary for C codes, the pgc++ binary for C++ codes, the pgf77 for Fortran 77 codes, and the pgf90 for Fortran 90 codes. Note that for PGI compilers version 20.1 and greater, the pgf77 binary is no longer provided; please use pgfortran for Fortran codes instead.

    See our compilation guide for a more detailed breakdown of the compilers.

    Building with the PGI Compilers

    The PGI compilers recognize the following command line options (this list is not exhaustive, for more information run man <compiler binary name>). In particular, if you are using a PGI compiler version 19.1 or later and need the PGI-proprietary back-end, then you can use the -Mnollvm flag (see the note at the top of this Usage section).

    COMPILER OPTION PURPOSE
    -c Compile into object code only; do not link
    -DMACRO[=value] Defines preprocessor macro MACRO with optional value (default value is 1)
    -g Enables debugging; disables optimization
    -I/directory/name Add /directory/name to the list of directories to be searched for #include files
    -L/directory/name Adds /directory/name to the list of directories to be searched for library files
    -lname Adds the library libname.a or libname.so to the list of libraries to be linked
    -o outfile Names the resulting executable outfile instead of a.out
    -UMACRO Removes definition of MACRO from preprocessor
    -O0 Disable optimization; default if -g is specified
    -O1 Light optimization; default if -g is not specified
    -O or -O2 Heavy optimization
    -O3 Aggressive optimization; may change numerical results
    -M[no]llvm Explicitly selects for the back-end between LLVM-based and PGI-proprietary code generation; only for versions 19.1 and greater; default is -Mllvm
    -Mipa Inline function expansion for calls to procedures defined in separate files; implies -O2
    -Munroll Loop unrolling; implies -O2
    -Mconcur Automatic parallelization; implies -O2
    -mp Enables translation of OpenMP directives

    Further Reading

    See Also

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    ParMETIS / METIS

    ParMETIS (Parallel Graph Partitioning and Fill-reducing Matrix Ordering) is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes developed in Karypis lab.

    METIS (Serial Graph Partitioning and Fill-reducing Matrix Ordering) is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes developed in Karypis lab.

    Availability and Restrictions

    Versions

    ParMETIS is available on Owens and Pitzer Clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    4.0.3 X* X*
    * Current default version

    METIS is available on Owens, and Pitzer Clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    5.1.0 X* X*
    * Current default version

    You can use module -r spider '.*metis.*'  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    ParMETIS / METIS is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    University of Minnesota, Open source

    Usage

    Usage on Owens

    Set-up

    To load ParMETIS, run the following command: module load parmetis. To use the serial implementation, METIS, run the following command instead: module load metis. You can use module spider metis and module spider parmetis to view available modules. Use module spider metis/version and module spider parmetis/version to check which modules should be loaded before loading ParMETIS / METIS.

    Building With ParMETIS / METIS

    With the ParMETIS library loaded, the following environment variables will be available for use:

    Variable Use
    $PARMETIS_CFLAGS Use during your compilation step for C or C++ programs.
    $PARMETIS_LIBS Use when linking your program to ParMETIS.

    Similarly, when the METIS module is loaded, the following environment variables will be available:

    VARIABLE USE
    $METIS_CFLAGS Use during your compilation step for C programs.
    $METIS_LIBS Use when linking your program to METIS.

     

    For example, to build the code myprog.cc with the METIS library you would use:

    g++ -o myprog myprog.cc $METIS_LIBS
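
    Similarly, a minimal sketch of compiling and linking an MPI program against ParMETIS with the variables above (myprog.c is a hypothetical source file):

    mpicc -c $PARMETIS_CFLAGS myprog.c
    mpicc -o myprog myprog.o $PARMETIS_LIBS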
    

    Batch Usage

    When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. This is desirable for big problems since more resources can be used.

    Non-interactive Batch Job (Serial Run)
    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. You must load the ParMETIS / METIS module in your batch script before executing a program that is built with the ParMETIS / METIS library. Below is an example batch script that executes a program built with ParMETIS:
    #!/bin/bash
    #SBATCH --job-name=myprogJob
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --account=<project-account>
    module load gnu/4.8.5
    module load parmetis
    
    cp foo.dat $TMPDIR
    cd $TMPDIR
    myprog < foo.dat > foo.out
    cp foo.out $SLURM_SUBMIT_DIR
    

    Usage on Pitzer

    Set-up

    To load ParMETIS, run the following command: module load parmetis. To use the serial implementation, METIS, run the following command instead: module load metis.

    Building With ParMETIS / METIS

    With the ParMETIS library loaded, the following environment variables will be available for use:

    VARIABLE USE
    $PARMETIS_CFLAGS Use during your compilation step for C or C++ programs.
    $PARMETIS_LIBS Use when linking your program to ParMETIS.

    Similarly, when the METIS module is loaded, the following environment variables will be available:

    VARIABLE USE
    $METIS_CFLAGS Use during your compilation step for C programs.
    $METIS_LIBS Use when linking your program to METIS.

     

    For example, to build the code myprog.cc with the METIS library you would use:

    g++ -o myprog myprog.cc $METIS_LIBS

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 

    ParaView

    ParaView is an open-source, multi-platform application designed to visualize data sets of size varying from small to very large. ParaView was developed to support distributed computational models for processing large data sets and to create an open, flexible user interface.

    Availability and Restrictions

    Versions

    ParaView is available on Owens and Pitzer Clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    4.4.0 X  
    5.3.0 X  
    5.5.2 X X
    5.8.0 X* X*
    * Current default version

    You can use module spider paraview to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    ParaView is available for use by all OSC users.

    Publisher/Vendor/Repository and License Type

    https://www.paraview.org, Open source

    Usage

    Usage on Owens

    Set-up

    To load the default version of ParaView module, use module load paraview. To select a particular software version, use module load paraview/version. For example, use module load paraview/4.4.0 to load ParaView version 4.4.0. Following a successful loading of the ParaView module, you can access the ParaView program:
    paraview
    

    Usage on Pitzer

    Set-up

    To load the default version of ParaView module, use module load paraview.  Following a successful loading of the ParaView module, you can access the ParaView program:
    paraview

    Using ParaView with OSC OnDemand

    Using ParaView with OSC OnDemand requires VirtualGL. To begin, connect to OSC OnDemand and launch a virtual desktop, either a Virtual Desktop Interface (VDI) or an Interactive HPC desktop. In the desktop, open a terminal and load the ParaView and VirtualGL modules with module load paraview and module load virtualgl. You can then access the ParaView program with:

    vglrun paraview

    Note that using ParaView with OSC OnDemand does not work on all clusters.
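
    Putting these steps together, a minimal sketch of the terminal session inside the virtual desktop:

    module load paraview
    module load virtualgl
    vglrun paraview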

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Perl

    Perl is a family of programming languages.

    Availability and Restrictions

    Versions

    A system version of Perl is available on all clusters. A Perl module is available on the Owens cluster. The following are the Perl versions currently available at OSC:

    Version Owens Pitzer Notes
    5.16.3 X# X#  
    5.26.1 X*   **See note below.
    5.26.3 X X cpanminus available and multi-threading support
    * Current module default version; # system version.
    ** There is always some version of Perl in the environment. If you want a specific version you should load the appropriate module. If you don't have a Perl module loaded, you will get the system version.

    You can use  module spider perl to view available modules for a given cluster. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Perl is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    https://www.perl.org, Open source

    Usage

    Each cluster has a version of Perl that is part of the Operating System (OS). Some perl scripts (usually such files have a .pl extension) may require particular Perl Modules (PMs) (usually such files have a .pm extension). In some cases particular PMs are not part of the OS; in those cases, users should install those PMs; for background and a general recipe see HOWTO: Install your own Perl modules. In other cases a PM may be part of the OS but in an unknown location; in that case an error like this is emitted: Can't locate Shell.pm in @INC; and users can rectify this by locating the PM with the command locate Shell.pm and then adding that path to the environment variable PERL5LIB, e.g. in csh syntax: setenv PERL5LIB "/usr/share/perl5/CPAN:$PERL5LIB"
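
    The equivalent in bash syntax (using the same illustrative path as above):

    export PERL5LIB="/usr/share/perl5/CPAN:$PERL5LIB"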

    Usage on Owens

    Set-up

    To configure your environment for use of a non-system version of Perl, use the command module load perl. This will load the default version.

    Installing Perl Modules

    To install your own Perl modules locally, use CPAN Minus. Instructions for installing modules for system Perl are available here. Note that you do not need to load the cpanminus module if you are using a non-system Perl.
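
    For example, a minimal sketch of installing a module into a local directory with cpanm, assuming cpanm is available with the loaded Perl module as noted above (Some::Module and the ~/perl5 path are illustrative):

    module load perl
    cpanm --local-lib ~/perl5 Some::Module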

    Further Reading

    Supercomputer: 

    Picard

    Picard is a set of command line tools for manipulating high-throughput sequencing (HTS) data and formats such as SAM/BAM/CRAM and VCF.

    Availability and Restrictions

    Versions

    The following versions of Picard are available on OSC clusters:

    Version Owens Pitzer
    2.3.0 X*  
    2.18.17   X*
    * Current default version

    You can use  module spider picard to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Picard is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The Broad Institute, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Picard, run the following command: module load picard. The default version will be loaded. To select a particular Picard version, use module load picard/version. For example, use module load picard/2.3.0 to load Picard 2.3.0.

    Usage

    This software is a Java executable .jar file; thus, it is not possible to add to the PATH environment variable.

    From module load picard, a new environment variable, PICARD, will be set. Thus, users can use the software by running the following command:  java -jar $PICARD {other options}.
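
    For example, a minimal sketch using the MarkDuplicates tool (the input and output file names are hypothetical):

    module load picard
    java -jar $PICARD MarkDuplicates I=input.bam O=marked.bam M=dup_metrics.txt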

    Usage on Pitzer

    Set-up

    To configure your environment for use of Picard, run the following command: module load picard. The default version will be loaded. 

    Usage

    This software is a Java executable .jar file; thus, it is not possible to add to the PATH environment variable.

    From module load picard, a new environment variable, PICARD, will be set. Thus, users can use the software by running the following command:  java -jar $PICARD {other options}.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    PnetCDF

    PnetCDF is a library providing high-performance parallel I/O while still maintaining file-format compatibility with  Unidata's NetCDF, specifically the formats of CDF-1 and CDF-2. Although NetCDF supports parallel I/O starting from version 4, the files must be in HDF5 format. PnetCDF is currently the only choice for carrying out parallel I/O on files that are in classic formats (CDF-1 and 2). In addition, PnetCDF supports the CDF-5 file format, an extension of CDF-2, that supports more data types and allows users to define large dimensions, attributes, and variables (>2B elements).

    Availability and Restrictions

    Versions

    The following versions of PnetCDF are available at OSC:

    Version Owens Pitzer
    1.7.0  X*  
    1.8.1 X  
    1.10.0 X  
    1.12.1   X*
    * Current default version

    You can use module spider pnetcdf to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    PnetCDF is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Northwestern University and Argonne National Lab., Open source

    Usage

    Usage on Owens

    Set-up

    To initialize your environment prior to using PnetCDF, run the following command:

    module load pnetcdf
    

    Building With PnetCDF

    With the PnetCDF module loaded, the following environment variables will be available for use:

    VARIABLE USE
    $PNETCDF_CFLAGS Use during your compilation step for C or C++ programs.
    $PNETCDF_FFLAGS Use during your compilation step for Fortran programs.
    $PNETCDF_LIBS Use when linking your program to PnetCDF.
    $PNETCDF Path to the PnetCDF installation directory

    For example, to build the code myprog.c with the pnetcdf library you would use:

    mpicc -c $PNETCDF_CFLAGS myprog.c
    mpicc -o myprog myprog.o $PNETCDF_LIBS
    

    Batch Usage

    #!/bin/bash
    #SBATCH --job-name=AppNameJob
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --account <project-account>
    
    # load PnetCDF so the runtime libraries are available to the program
    module load pnetcdf
    
    srun ./myprog

    Further Reading

    Supercomputer: 
    Service: 

    PyTorch

    PyTorch is an open source machine learning framework with GPU acceleration and support for deep neural networks, built on automatic differentiation and the Torch tensor library.

    If you installed PyTorch-nightly on Linux via pip between December 25, 2022 and December 30, 2022, please uninstall it and torchtriton immediately, and use the latest nightly binaries (newer than Dec 30th 2022). See this post from PyTorch for detailed information.

    OSC does not provide general access to PyTorch.  However, we are available to assist with the configuration of local individual/research-group installations on all our clusters.  If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    https://pytorch.org, Open source.

    Installing PyTorch Locally

    Here is an example installation that was used in February 2022 to install a GPU enabled version compatible with the CUDA drivers on the clusters at that time:

    Load the correct python and cuda modules:

    module load miniconda3/4.10.3-py37  cuda/11.8.0
    module list
    
    Create a python environment to install pytorch into:
    conda create -n pytorch
    Activate the conda environment:
    source activate pytorch
    Install the specific version of pytorch:
    pip3 install -t ~/local/pytorch torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
    

    PyTorch is now installed into your $HOME/local directory using the local install directory hierarchy described here and can be tested via:

    module load miniconda3/4.10.3-py37 cuda/11.8.0 ; module list ; source activate pytorch
    python <<EOF
    import torch
        
    x = torch.rand(5, 3)
    print("torch.rand(5, 3) =", x)
        
    print( "Is cuda available =", torch.cuda.is_available() )
    exit
    EOF

    If testing for a GPU, you will need to submit the above script as a batch job (make sure to request a GPU for the job; see Job Scripts for more info on requesting GPUs).
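
    A minimal sketch of such a batch job, assuming the pytorch conda environment created above (the resource requests are placeholders):

    #!/bin/bash
    #SBATCH --job-name=pytorch-gpu-test
    #SBATCH --nodes=1 --ntasks-per-node=4
    #SBATCH --gpus-per-node=1
    #SBATCH --time=00:10:00
    #SBATCH --account=<project-account>
    
    module load miniconda3/4.10.3-py37 cuda/11.8.0
    source activate pytorch
    
    # Print whether PyTorch can see the GPU allocated to this job
    python -c "import torch; print('Is cuda available =', torch.cuda.is_available())"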

    Please refer here if you want a different version of PyTorch.

    Batch Usage

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations for Owens, and Scheduling Policies and Limits for more info. In particular, PyTorch should be run on a GPU-enabled compute node.

    AN EXAMPLE BATCH SCRIPT TEMPLATE

    Below is an example batch script (job.sh) for using PyTorch (Slurm syntax).

    Contents of job.sh

    #!/bin/bash
    #SBATCH --job-name=pytorch
    #SBATCH --nodes=1 --ntasks-per-node=28 --gpus-per-node=1 --gpu_cmode=shared
    #SBATCH --time=30:00
    #SBATCH --account=yourprojectID
    
    cd $SLURM_SUBMIT_DIR
    
    module load miniconda3
    
    source activate your-local-python-environment-name
    
    python your-pytorch-script.py
    

    In order to run it via the batch system, submit the job.sh  file with the following command:

    sbatch job.sh
    

    GPU Usage

    • GPU Usage: PyTorch can be run on a GPU for significant performance improvements. See HOWTO: Use GPU with Tensorflow and PyTorch
    • Horovod: If you are using PyTorch with a GPU you may want to also consider using Horovod. Horovod will take single-GPU training scripts and scale them to train across many GPUs in parallel.

     

    Further Reading

    PyTorch Homepage

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Python

    Python is a high-level, multi-paradigm programming language that is both easy to learn and useful in a wide variety of applications.  Python has a large standard library as well as a large number of third-party extensions, most of which are completely free and open source. 

    Availability and Restrictions

    Versions

    Python is available on Pitzer and Owens Clusters. The versions currently available at OSC are:

    Version Owens Pitzer Ascend Notes
    2.7 X      
    3.5  X      
    3.6 X      
    2.7-conda5.2 X X   Anaconda 5.2 distribution with Python 2.7 (conda 4.5.9 on Owens, conda 4.5.10 on Pitzer)**
    3.6-conda5.2 X* X*   Anaconda 5.2 distribution with Python 3.6 (conda 4.5.9 on Owens, conda 4.5.11 on Pitzer)**
    3.7-2019.10 X X   Anaconda 2019.10 distribution with Python 3.7 (conda 4.7.12)**
    3.9-2022.05 X X   Anaconda 2022.05 distribution with Python 3.9 (conda 4.12.0)**
    3.9     X*  
    * Current default version
    ** The suffixes '-condaX.X' and '-20XX.XX' indicate the version of the Anaconda distribution that has been installed. These distributions encompass conda as well as various other packages. For example, python/2.7-conda5.2 has been installed with Anaconda version 5.2 but uses conda version 4.5.

    Some versions are installed as part of the integrated Anaconda distribution.

    You can use module spider python to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Python is available for use by all OSC users.

    Publisher/Vendor/Repository and License Type

    Python Software Foundation, Open source

    Usage

    Terminal

    Set-up

    To load the default version of the Python module, use module load python. To select a particular software version, use module load python/version. For example, use module load python/3.5 to load Python version 3.5. After the module is loaded, you can run the interpreter by using the command python. To unload the Python 3.5 module, use the command module unload python/3.5 or simply module unload python.

    Installed Modules

    We have installed a number of Python packages and tuned them for optimal performance on our systems.  When using the Anaconda distributions of python you can run conda list to view the installed packages.
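
    For example (the module name is taken from the table above; the package list will vary by version):

    module load python/3.9-2022.05
    conda list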

    NOTE:
    • Due to architecture differences between our supercomputers, we recommend NOT installing your own packages in  ~/.local. Instead, you should install them in some other directory and set $PYTHONPATH in your default environment. For more information about installing your own Python modules, please see our HOWTO.
    Environments

    See the HOWTO section for more information on how to create and use Python environments.

    Batch

    When you log into owens.osc.edu or pitzer.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.

    Here is an example batch job script

    #!/bin/bash
    #SBATCH --account <your_project_id>
    #SBATCH --job-name Python_ExampleJob
    #SBATCH --nodes=1 
    #SBATCH --time=00:01:00
    
    module load python/3.9-2022.05
    
    cp example.py $TMPDIR
    cd $TMPDIR
    
    python example.py
    
    cp -p * $SLURM_SUBMIT_DIR
        

    Utilizing Python Environments Within Batch Job:

    Important: When utilizing a Python environment, make sure to deactivate the environment before submitting the script, or include source deactivate in the batch script before activating the environment.
    Here is an example batch job script involving a conda environment:
    #!/bin/bash
    #SBATCH --account <your_project_id>
    #SBATCH --job-name Python_ExampleJob
    #SBATCH --nodes=1
    #SBATCH --time=00:01:00
    
    # run the following to ensure the local environment does not affect the batch job in unexpected ways
    
    source deactivate # deactivate copy of local python environment if job submitted from within environment
    module reset      # reset any loaded modules
    
    module load python/3.9-2022.05 # load python
    export PYTHONNOUSERSITE=True  #to avoid local python packages
    
    source activate MY_ENV  # activate conda environment 
    
    
    # Rest of script below
    
    cp example.py $TMPDIR
    
    cd $TMPDIR
    
    python example.py
    
    cp -p * $SLURM_SUBMIT_DIR

    Jupyter

    Launching Jupyter App

    Log on to https://ondemand.osc.edu/ with your OSC credentials. Choose Jupyter under the Interactive Apps option.

    Provide job submission parameters then click Launch.


    The next page shows the status of your job either as Queued or Starting or Running. Your job may sit in a queue for a few minutes depending on cluster load and resources requested.


    When the job is ready, please click on Connect to Jupyter. This will now launch a Jupyter App.


    Jupyter App Usage 

    With the app open, you will be able to access your home directory on the left and all your available kernels will appear on the right. Any custom kernels created using HOWTO: create virtual environment with jupyter will also appear in this selection.


    With a file open you can easily switch between different kernels by clicking the kernel name in the top right.


    HOW-TOs

    Manage your Python packages

    We highly recommend creating a local environment to manage Python packages for your production and research tasks. Please refer to the following how-to pages for more details:

    Install packages for deep/machine learning

    Advanced topics

     

    Known Issues

    Incorrect MPI launcher and compiler wrappers with Conda environments

    Updated: March 2020
    Versions Affected: Python 2.7, 3.6 & Conda 5.2
    Users may encounter under-performing MPI jobs or failures when compiling MPI applications if they are using the system-provided Conda. We found that the pre-installed mpich2 package in some Conda environments overrides the default MPI path. The affected Conda packages are python/2.7-conda5.2 and python/3.6-conda5.2. If you experience these issues, please re-load the MPI module, e.g. module load mvapich2, after setting up your Conda environment.

    Further reading

    Extensive documentation of the Python programming language and software downloads can be found at the Official Python Website.  

    See Also

    Supercomputer: 
    Service: 
    Fields of Science: 

    Q-Chem

    Q-Chem is a general purpose ab initio electronic structure program. Its latest version emphasizes Self-Consistent Field, especially Density Functional Theory, post Hartree-Fock, and innovative algorithms for fast performance and reduced scaling calculations. Geometry optimizations, vibrational frequencies, thermodynamic properties, and solution modeling are available. It performs reasonably well within its single reference paradigm on open shell and excited state systems. The Q-Chem Home Page has additional information.

    Availability and Restrictions

    Versions

    Q-Chem is available on the OSC clusters. These are the versions currently available:

    Version Owens Pitzer Notes
    6.1.0 X* X*  
    * Current default version
    ** Starting from version 5.2, the flag '-mpi' is required for running an MPI job, e.g. qchem -mpi -np 2. Without the flag, OpenMP is the default parallelization.
    On October 12, 2023, we removed all previous versions of Q-Chem, including 4.x, 5.x, and 6.0.x. Therefore, only qchem/6.1.0 is available. This is because the academic license held by OSC permits only the use of the latest available version. We recommend updating your job scripts if you currently use older versions of Q-Chem: you can use either "module load qchem" or "module load qchem/6.1.0" in your script. Please be aware that, moving forward, when a new version becomes available and is installed at OSC, the previous version will be automatically removed. You can use module avail qchem to view available modules for a given machine. Feel free to contact OSC Help if you have any questions.

    Access

    Q-chem is available to academic OSC users only. Please review the Q-Chem license agreement carefully before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Q-Chem, Inc., Commercial

    Usage

    For MPI jobs that request multiple nodes, the qchem script must be run from a globally accessible working directory, e.g., a project or home directory.

    Starting with 5.1, QCSCRATCH is automatically set to $TMPDIR, which is removed when the job completes. This saves scratch space and improves job performance. If you need to save Q-Chem scratch files from a job and use them later, set QCSCRATCH to a globally accessible working directory and QCLOCALSCR to $TMPDIR.
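
    For example, a minimal sketch of a batch script that runs Q-Chem with the '-mpi' flag described above (input.in and output.out are hypothetical file names, and the resource requests are placeholders):

    #!/bin/bash
    #SBATCH --job-name=qchem-test
    #SBATCH --nodes=1 --ntasks-per-node=2
    #SBATCH --time=01:00:00
    #SBATCH --account=<project-account>
    
    module load qchem
    # Run with 2 MPI processes; drop '-mpi' to use the default OpenMP parallelization
    qchem -mpi -np 2 input.in output.out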

    Usage on Owens

    Set-up on Owens

    Q-Chem usage is controlled via modules. Load one of the Q-Chem modulefiles at the command line, in your shell initialization script, or in your batch scripts. To load the default version of Q-Chem module, use module load qchem. To select a particular software version, use module load qchem/version. For example, use module load qchem/6.1.0 to load Q-Chem version 6.1.0 on Owens.

    Examples

    • The root of the Q-Chem directory tree is /usr/local/qchem/ 
    • Example Q-Chem input files are in the samples subdirectory

    Batch Usage on Owens

    When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. This is desirable for big problems since more resources can be used.

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -N 1 -n 1 -t 00:20:00
    

    which requests one node (-N 1) and one core (-n 1), for a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.

    Usage on Pitzer

    Set-up on Pitzer

    Q-Chem usage is controlled via modules. Load one of the Q-Chem modulefiles at the command line, in your shell initialization script, or in your batch scripts. To load the default version of Q-Chem module, use module load qchem.

    Examples

    • The root of the Q-Chem directory tree is /apps/qchem/ 
    • Example Q-Chem input files are in the samples subdirectory

    Batch Usage on Pitzer

    When you log into pitzer.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node. This is desirable for big problems since more resources can be used.

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    sinteractive -A <project-account> -N 1 -n 1 -t 00:20:00

    which requests one node (-N 1) and one core (-n 1), for a walltime of 20 minutes (-t 00:20:00). You may adjust the numbers per your need.

    Further Reading

    Supercomputer: 
    Service: 

    QGIS

    QGIS is a user friendly Open Source Geographic Information System (GIS) licensed under the GNU General Public License. QGIS is an official project of the Open Source Geospatial Foundation (OSGeo). It runs on Linux, Unix, Mac OSX, Windows and Android and supports numerous vector, raster, and database formats and functionalities.

    Availability and Restrictions

    Versions

    The following versions of QGIS are available on OSC clusters:

    Version Owens Pitzer Note
    3.4.12 X X  
    3.6.14 X X    
    3.22.1 X* X*  
    3.22.8 X X SAGA 7.9.1 available
    * Current default version

    Access

    QGIS is available to all OSC users via OnDemand QGIS app. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    GNU General Public License.

    Further Reading

    Supercomputer: 
    Service: 

    Quantum Espresso

    Quantum ESPRESSO (QE) is a program package for ab-initio molecular dynamics (MD) simulations and electronic structure calculations.  It is based on density-functional theory, plane waves, and pseudopotentials.

    Availability and Restrictions

    Versions

    The following versions are available on OSC systems:

    Version Owens Pitzer Note
    5.2.1 X    
    6.1 X    
    6.2.1 X    
    6.3 X X  
    6.5 X* X*  
    6.7 X X thermo_pw 1.5 available 
    * Current default version

    You can use  module spider espresso to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Quantum ESPRESSO is open source and available to all OSC users.  We recommend that Owens be used. If you have any questions, please contact OSC Help.

     

    Publisher/Vendor/Repository and License Type

    http://www.quantum-espresso.org, Open source

    Usage

    Set-up

    You can configure your environment for the usage of Quantum ESPRESSO by running the following command:

    module load espresso
    

    For  QE 6.2.1 and previous versions on Owens, you need to load au2016 by module load modules/au2016 before you load espresso.

    In the case of multiple compiled versions load the appropriate compiler first, e.g., on Owens to select the most recently compiled QE 6.1 version use the following commands:

    module load intel/17.0.2
    module load espresso/6.1
    

    Batch Usage

    Sample batch scripts and input files are available here:

    ~srb/workshops/compchem/espresso/
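
    For reference, a minimal sketch of a batch script that runs the plane-wave code pw.x (the resource requests and the pw.in/pw.out file names are placeholders; see the sample scripts above for complete examples):

    #!/bin/bash
    #SBATCH --job-name=qe-test
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --time=01:00:00
    #SBATCH --account=<project-account>
    
    module load intel/17.0.2
    module load espresso/6.1
    
    # Run the SCF calculation in parallel with MPI
    srun pw.x -input pw.in > pw.out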

    Further Reading

    See Also

    Supercomputer: 
    Service: 

    R and Rstudio

    R is a language and environment for statistical computing and graphics. It is an integrated suite of software facilities for data manipulation, calculation, and graphical display. It includes

    • an effective data handling and storage facility,
    • a suite of operators for calculations on arrays, in particular matrices,
    • a large, coherent, integrated collection of intermediate tools for data analysis,
    • graphical facilities for data analysis and display either on-screen or on hardcopy, and
    • a well-developed, simple and effective programming language which includes conditionals, loops, user-defined recursive functions and input, and output facilities

    More information can be found here.

    Availability and Restrictions

    Versions

    The following versions of R are available on OSC systems: 

    Version Owens Pitzer Ascend
    3.3.2 X    
    3.4.0 X    
    3.4.2 X    
    3.5.0# X*    
    3.5.1 X X*  
    3.5.2   X  
    3.6.0 or 3.6.0-gnu7.3 X X  
    3.6.1 or 3.6.1-gnu9.1 X    
    3.6.3 or 3.6.3-gnu9.1 X X  
    4.0.2 or 4.0.2-gnu9.1 X X  
    4.1.0 or 4.1.0-gnu9.1** X X  
    4.2.1 or 4.2.1-gnu11.2 X X X*
    4.3.0 or 4.3.0-gnu11.2 X X X

     

    * Current default version. # R/3.5.0 is available for both intel/16 and intel/18, but the R packages available under them may differ. R/3.6.0 and later versions are compiled with gnu and mkl. Loading R/3.6.X modules requires dependencies to be preloaded, whereas R/3.6.X-gnuY modules will automatically load the required dependencies.
    ** The user state directory (session data)  is stored at ~/.local/share/rstudio for the latest RStudio that we have deployed with R/4.1.0. It is located at ~/.rstudio for older versions.  Users would need to delete session data from ~/.local/share/rstudio for R/4.1.0 and ~/.rstudio for older versions to clear workspace history.

    Known Issue

    There is a known issue with loading modules in RStudio's environment after changing versions or clusters.

    If you have issues using modules in the R console, try these remedies:

    • restarting the terminal
    • restarting the R console
    • logging out of the RStudio session and logging back in
    • removing your ~/.local/share/rstudio directory

    You can use module avail R to view available modules and module spider R/version to show how to load the module for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    R is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    R Foundation, Open source

    Usage

    R can be launched in two different ways: through Rstudio on OSC OnDemand and through the terminal.

    Rstudio

    In order to access Rstudio and OSC R workshop materials, please visit here.

    Terminal Access

    In order to configure your environment for R, run the following command:

    module load R/version
    #for example,
    module load R/3.6.3-gnu9.1
    

    R/3.6.0 and later versions use the gnu compiler and Intel MKL libraries for performance improvements. Loading R/3.6.X modules requires dependencies to be preloaded, whereas R/3.6.X-gnuY modules will automatically load the required dependencies.

    Using R

    Once your environment is configured, R can be started simply by entering the following command:

    R

    For a listing of command line options, run:

    R --help

    Running R interactively on a login node for extended computations is not recommended and may violate OSC usage policy. Users can either request compute nodes to run R interactively or run R in batch.

    Running R interactively on terminal:

    Request a compute node (or multiple nodes if running parallel R) as follows:

    sinteractive -A <project-account> -N 1 -n 28 -t 01:00:00 

    When the compute node is ready, launch R by loading modules

    module load R/3.6.3-gnu9.1
    R

    Batch Usage

    Reference the example batch script below. This script requests one full node for 1 hour of wall time.

    #!/bin/bash
    #SBATCH --job-name R_ExampleJob
    #SBATCH --nodes=1 --ntasks-per-node=48
    #SBATCH --time=01:00:00
    #SBATCH --account <your_project_id>
        
    module load R/3.6.3-gnu9.1
        
    cp in.dat test.R $TMPDIR
    cd $TMPDIR
        
    R CMD BATCH test.R test.Rout
        
    cp test.Rout $SLURM_SUBMIT_DIR

    HOWTO: Install Local R Packages

    R comes with a single library  $R_HOME/library which contains the standard and recommended packages. This is usually in a system location. On Owens, it is  /usr/local/R/gnu/9.1/3.6.3/lib64/R  for R/3.6.3. OSC also installs popular R packages into the site located at /usr/local/R/gnu/9.1/3.6.3/site/pkgs for R/3.6.3 on Owens. 

    Users can check the library path as follows after launching an R session;

    > .libPaths()
    [1] "/users/PZS0680/soottikkal/R/x86_64-pc-linux-gnu-library/3.6"
    [2] "/usr/local/R/gnu/9.1/3.6.3/site/pkgs"
    [3] "/usr/local/R/gnu/9.1/3.6.3/lib64/R/library"
    

    Users can check the list of available packages as follows;

    >installed.packages()

    To install local R packages, use the install.packages() command. For example,

    >install.packages("lattice")

    The first time you perform a local installation, it will give a warning as follows:

    Installing package into ‘/usr/local/R/gnu/9.1/3.6.3/site/pkgs’
    (as ‘lib’ is unspecified)
    Warning in install.packages("lattice") :
    'lib = "/usr/local/R/gnu/9.1/3.6.3/site/pkgs"' is not writable
    Would you like to use a personal library instead? (yes/No/cancel)

    Answer yes, and it will create the directory and install the package there.

    If you are using an R version older than 3.6 and you see errors similar to

    /opt/intel/18.0.3/compilers_and_libraries_2018.3.222/linux/compiler/include/complex(310): error #308: member "std::complex::_M_value" (declared at line 1346 of "/apps/gnu/7.3.0/include/c++/7.3.0/complex") is inaccessible
    return __x / __y._M_value; 

    then create a Makevars file in your project path and add the following command to it:

    CXXFLAGS = -diag-disable 308

    Set R_MAKEVARS_USER to the custom Makevars file created under your project path as follows:

    export R_MAKEVARS_USER="/your_project_path/Makevars" 

    Installing Packages from GitHub

    Users can install R packages directly from GitHub using the devtools package as follows:

    >install.packages("devtools")
    >devtools::install_github("author/package")

    Installing Packages from Bioconductor

    Users can install R packages directly from Bioconductor using BiocManager.

    >install.packages("BiocManager")
    >BiocManager::install(c("GenomicRanges", "Organism.dplyr"))
        

    R packages with external dependencies 

    When installing R packages with external dependencies, users may need to import appropriate libraries into R. Sometimes using a gnu version of R can alleviate problems, e.g., try R/4.3.0-gnu11.2 if R/4.3.0 fails. One of the frequently requested R packages is sf, which needs the geos, gdal, and PROJ libraries. We have a few versions of those packages installed, and they can be loaded as modules. Another relatively common external dependency is gsl; use, e.g., module spider gsl to find the available versions of such dependencies.

    Here is an example of how to install R package sf.

    module load geos/3.9.1 proj/8.1.0 gdal/3.3.1
    module load R/4.0.2-gnu9.1
    R
    >install.packages("sf")

    Now you can install other packages that depend on sf normally. Please note that if you get an error indicating the sqlite version is outdated, you can load its module along with geos, proj and gdal modules: module load sqlite/3.26.0

    This is an example of the stars package installation, which has a dependency of sf package.

    >install.packages("stars")
    >library(stars) 

     

    When modules of external libraries are not available, users can install them and link the libraries into the R environment. Here is an example of how to install the sf package on Owens without modules.

    Please note that the library paths will change to /apps/ on Pitzer instead of /usr/local/ as on Owens.
    Please note that if you get an error indicating the sqlite version is outdated, you can load its module before proceeding with the installation: module load sqlite/3.26.0
    >old_ld_path <- Sys.getenv("LD_LIBRARY_PATH")
    >Sys.setenv(LD_LIBRARY_PATH = paste(old_ld_path, "/usr/local/gdal/3.3.1/lib", "/usr/local/proj/8.1.0/lib","/usr/local/geos/3.9.1/",sep=":"))
    
    >Sys.setenv("PKG_CONFIG_PATH"="/usr/local/proj/8.1.0/lib/pkgconfig")
    >Sys.setenv("GDAL_DATA"="/usr/local/gdal/3.3.1/share/gdal")
    
    >install.packages("sf", configure.args=c("--with-gdal-config=/usr/local/gdal/3.3.1/bin/gdal-config","--with-proj-include=/usr/local/proj/8.1.0/include","--with-proj-lib=/usr/local/proj/8.1.0/lib","--with-geos-config=/usr/local/geos/3.9.1/bin/geos-config"),INSTALL_opts="--no-test-load")
    
    >dyn.load("/usr/local/gdal/3.3.1/lib/libgdal.so")
    >dyn.load("/usr/local/geos/3.9.1/lib/libgeos_c.so", local=FALSE)
    >library(sf)

    Please note that every time before loading the sf package, you have to execute the dyn.load calls for both libraries listed above. In addition, the first time you install an external package you should answer yes to using and creating a personal library, e.g.:

    'lib = "/usr/local/R/gnu/9.1/4.0.2/site/pkgs"' is not writable. Would you like to use a personal library instead? (yes/No/cancel) yes.  Would you like to create a personal library '~/R/x86_64-pc-linux-gnu-library/4.0' to install packages into?  (yes/No/cancel) yes. 

    You can install other packages that depend on sf as follows. This is an example of terra package installation.

    >install.packages("terra", configure.args=c("--with-gdal-config=/usr/local/gdal/3.3.1/bin/gdal-config","--with-proj-include=/usr/local/proj/8.1.0/include","--with-proj-lib=/usr/local/proj/8.1.0/lib","--with-geos-config=/usr/local/geos/3.9.1/bin/geos-config"),INSTALL_opts="--no-test-load")
    >library(terra)
    


    Import modules in R

    Alternatively, you can load modules in R for those external dependencies if they are available on the system:

    > source(file.path(Sys.getenv("LMOD_PKG"), "init/R"))
    > module("load", "geos")
    

    You can check whether a module for an external dependency is available:

    > module("avail", "geos")
    

    renv: Package Manager

    If you are using R for multiple projects, OSC recommends renv, an R dependency manager, for R package management. Please see more information here.

    The renv package helps you create reproducible environments for your R projects. Use renv to make your R projects more:

    • Isolated: Each project gets its own library of R packages, so you can feel free to upgrade and change package versions in one project without worrying about breaking your other projects.

    • Portable: Because renv captures the state of your R packages within a lockfile, you can more easily share and collaborate on projects with others, and ensure that everyone is working from a common base.

    • Reproducible: Use renv::snapshot() to save the state of your R library to the lockfile renv.lock. You can later use renv::restore() to restore your R library exactly as specified in the lockfile.

    Users can install renv package as follows;

    >install.packages("renv")

    The core essence of the renv workflow is fairly simple:

    1. After launching R, go to your project directory using R command setwd and initiate renv:

      setwd("your/project/path")
      renv::init()

      This function forks the state of your default R libraries into a project-local library. A project-local .Rprofile is created (or amended), which is then used by new R sessions to automatically initialize renv and ensure the project-local library is used. 

      Work in your project as usual, installing and upgrading R packages as required as your project evolves.

    2. Use renv::snapshot() to save the state of your project library. The project state will be serialized into a file called renv.lock under your project path.

    3. Use renv::restore() to restore your project library from the state of your previously-created lockfile renv.lock.

    In short: use renv::init() to initialize your project library, and use renv::snapshot() / renv::restore() to save and load the state of your library.

    After your project has been initialized, you can work within the project as before, but without fear that installing or upgrading packages could affect other projects on your system.

    Global Cache

    One of renv’s primary features is the use of a global package cache, which is shared across all projects using renv. When using renv, the packages from various projects are installed into the global cache. The individual project library is instead formed as a directory of symlinks into the renv global package cache. Hence, while each renv project is isolated from other projects on your system, they can still re-use the same installed packages as required. By default, the global cache of renv is located at ~/.local/share/renv. Users can change the global cache location using the RENV_PATHS_CACHE variable. Please see more information here.

    Please note that renv does not load packages from the site location (add-on packages installed by OSC) into the R session. Users will have access to the base R packages only when using renv. All other packages required for the project should be installed by the user.

    Version Control with renv

    If you would like to version control your project, you can utilize git versioning of renv.lock file. First, initiate git for your project directory on a terminal

    git init

    Continue working on your R project by launching R, installing packages, and saving snapshots using the renv::snapshot() command. Please note that renv::snapshot() will only save packages that are used in the current project. To capture all packages within the active R libraries in the lockfile, please see the type option.

    >renv::snapshot(type="simple")
    

    If you’re using a version control system with your project, then as you call renv::snapshot() and later commit new lockfiles to your repository, you may find it necessary later to recover older versions of your lockfiles. renv provides the functions renv::history() to list previous revisions of your lockfile, and renv::revert() to recover these older lockfiles.

    If you are using the renv package for the first time, it is recommended that you check R startup files in your $HOME such as .Rprofile and .Renviron and remove any project-specific settings from these files. Please also make sure you do not have any project-specific settings in ~/.R/Makevars.

    A Simple Example

    First, you need to load the module for R and fire up R session

    module load R/3.6.3-gnu9.1
    R

    Then set the working directory and initiate renv

    setwd("your/project/path")
    renv::init()

    Let's install a package called lattice and save the snapshot to renv.lock:

    renv::install("lattice")
    renv::snapshot(type="simple")

    The lattice package will be installed in the global cache of renv, and a symlink will be saved in the renv directory under the project path.

    Restore a Project

    Use renv::restore() to restore a project's dependencies from a lockfile, as previously generated by snapshot(). Let's remove the lattice package.

    renv::remove("lattice")

    Now let's restore the project from the previously saved snapshot so that the lattice package is restored.

    renv::restore()
    library(lattice)

    Collaborating with renv

    When using renv, the packages used in your project will be recorded into a lockfile, renv.lock. Because renv.lock records the exact versions of R packages used within a project, if you share that file with your collaborators, they will be able to use renv::restore() to install exactly the same R packages as recorded in the lockfile. Please find more information here.

    Parallel R

    R provides a number of methods for parallel processing of the code. Multiple cores and nodes available on OSC clusters can be effectively deployed to run many computations in R faster through parallelism.

    Consider this example, where we use a function that will generate values sampled from a normal distribution and sum the vector of those results; every call to the function is a separate simulation.

        myProc <- function(size=1000000) {
          # Load a large vector
          vec <- rnorm(size)
          # Now sum the vec values
          return(sum(vec))
        }

    Serial execution with loop

    Let’s first create a serial version of R code to run myProc() 100x on Owens

        tick <- proc.time()
        for(i in 1:100) {
          myProc()
        }
        tock <- proc.time() - tick
        tock
        ##    user  system elapsed
        ##   6.437   0.199   6.637

    Here, we execute each trial sequentially, utilizing only one of our 28 processors on this machine. In order to apply parallelism, we need to create multiple tasks that can be dispatched to different cores. Using the apply() family of R functions, we can create multiple tasks. We can rewrite the above code to use lapply(), which applies a function to each of the members of a list (in this case the trials we want to run):

        tick <- proc.time()
        result <- lapply(1:100, function(i) myProc())
        tock <-proc.time() - tick
        tock
        ##    user  system elapsed
        ##   6.346   0.152   6.498

    parallel package

    The parallel library can be used to dispatch tasks to different cores. The parallel::mclapply function can distribute the tasks to multiple processors.

        library(parallel)
        cores <- as.integer(system("nproc", intern=TRUE))  # convert the shell output to an integer core count
        tick <- proc.time()
        result <- mclapply(1:100, function(i) myProc(), mc.cores=cores)
        tock <- proc.time() - tick
        tock
        ##    user  system elapsed
        ##   8.653   0.457   0.382

    foreach package

    The foreach package provides a  looping construct for executing R code repeatedly. It uses the sequential %do% operator to indicate an expression to run.

        library(foreach)
        tick <- proc.time()
        result <-foreach(i=1:100) %do% {
           myProc()
        }
        tock <- proc.time() - tick
        tock
        ##    user  system elapsed
        ##   6.420   0.018   6.439

    doParallel package

    foreach supports a parallelizable operator %dopar% from the doParallel package. This allows each iteration through the loop to use different cores.

        library(doParallel, quiet = TRUE)
        library(foreach)
        cl <- makeCluster(28)
        registerDoParallel(cl)
        
        tick <- proc.time()
        result <- foreach(i=1:100, .combine=c) %dopar% {
            myProc()
        }
        tock <- proc.time() - tick
        tock
        invisible(stopCluster(cl))
        detachDoParallel()
        
        ##    user  system elapsed
        ##   0.085   0.013   0.446
        

    Rmpi package

    The Rmpi package allows you to parallelize R code across multiple nodes. Rmpi provides the interface necessary to use MPI for parallel computing in R. This allows each iteration through the loop to use different cores on different nodes. Rmpi jobs cannot be run with RStudio at OSC currently; instead, users can submit Rmpi jobs through the terminal app. R uses openmpi as the MPI interface, therefore users need to load the openmpi module before installing or using Rmpi. Rmpi is installed at a central location for R versions prior to 4.2.1. If it is not available, users can install it as follows:

    Rmpi Installation

       # Get source code of desired version of RMpi
    wget https://cran.r-project.org/src/contrib/Rmpi_0.6-9.2.tar.gz
    
    # Load modules
    ml openmpi/1.10.7 R/4.2.1-gnu11.2
    
    # Install RMpi
    R CMD INSTALL --configure-vars="CPPFLAGS=-I$MPI_HOME/include LDFLAGS='-L$MPI_HOME/lib'" --configure-args="--with-Rmpi-include=$MPI_HOME/include --with-Rmpi-libpath=$MPI_HOME/lib --with-Rmpi-type=OPENMPI" Rmpi_0.6-9.2.tar.gz
    
    # Test loading
    library(Rmpi)
    

       

    Please make sure that $MPI_HOME is defined after loading the openmpi module. Newer versions of the openmpi module have $OPENMPI_HOME instead of $MPI_HOME, so you would need to replace $MPI_HOME with $OPENMPI_HOME for those versions of openmpi.

    The example code above can be rewritten to utilize multiple nodes with Rmpi as follows:

        library(Rmpi)
        library(snow)
        workers <- as.numeric(Sys.getenv(c("PBS_NP")))-1
        cl <- makeCluster(workers, type="MPI") # MPI tasks to use
        clusterExport(cl, list('myProc'))
        tick <- proc.time()
        result <- clusterApply(cl, 1:100, function(i) myProc())
        write.table(result, file = "foo.csv", sep = ",")
        tock <- proc.time() - tick
        tock

    The batch script for job submission is as follows:

        #!/bin/bash
        #SBATCH --time=10:00
        #SBATCH --nodes=2 --ntasks-per-node=28
        #SBATCH --account=<project-account>
        
        module load R/3.6.3-gnu9.1 openmpi/1.10.7
        
        # parallel R: submit job with one MPI master
        mpirun -np 1 R --slave < Rmpi.R

    pbdMPI package

    pbdMPI is an improved version of the Rmpi package that provides an efficient interface to MPI by utilizing S4 classes and methods, with a focus on the Single Program/Multiple Data ('SPMD') parallel programming style, which is intended for batch parallel execution.

    Installation of pbdMPI

    Users can download the latest version of pbdMPI from CRAN (https://cran.r-project.org/web/packages/pbdMPI/index.html) and install it as follows:

    wget https://cran.r-project.org/src/contrib/pbdMPI_0.4-6.tar.gz
    ml R/4.3.0-gnu11.2
    ml openmpi/4.1.4-hpcx
    R CMD INSTALL pbdMPI_0.4-6.tar.gz
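    
    # Once installed, a pbdMPI (SPMD-style) script is typically launched with one R
    # process per MPI rank; my_pbdmpi_script.R is a hypothetical script name.
    mpirun -np 4 Rscript my_pbdmpi_script.R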
    

    Examples

    Here are a few resources that demonstrate how to use pbdMPI:

      https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=BD40B7B615DF79...

      http://hpcf-files.umbc.edu/research/papers/pbdRtara2013.pdf

    R Batchtools

    The R package batchtools provides a parallel implementation of Map for high-performance computing systems managed by the Slurm scheduler on OSC systems. Please find more info at https://github.com/mllg/batchtools.

    Users need two files: slurm.tmpl and .batchtools.conf.R.

    slurm.tmpl is provided below. Please change "your_project_id" to your project account.

        #!/bin/bash -l
        ## Job Resource Interface Definition
        ## ntasks [integer(1)]:       Number of required tasks,
        ##                            Set larger than 1 if you want to further parallelize
        ##                            with MPI within your job.
        ## ncpus [integer(1)]:        Number of required cpus per task,
        ##                            Set larger than 1 if you want to further parallelize
        ##                            with multicore/parallel within each task.
        ## walltime [integer(1)]:     Walltime for this job, in seconds.
        ##                            Must be at least 60 seconds.
        ## memory   [integer(1)]:     Memory in megabytes for each cpu.
        ##                            Must be at least 100 (when I tried lower values my
        ##                            jobs did not start at all).
        ## Default resources can be set in your .batchtools.conf.R by defining the variable
        ## 'default.resources' as a named list.
        
        <%
        # relative paths are not handled well by Slurm
        log.file = fs::path_expand(log.file)
        -%>
        
        #SBATCH --job-name=<%= job.name %>
        #SBATCH --output=<%= log.file %>
        #SBATCH --error=<%= log.file %>
        #SBATCH --time=<%= ceiling(resources$walltime / 60) %>
        #SBATCH --ntasks=1
        #SBATCH --cpus-per-task=<%= resources$ncpus %>
        #SBATCH --mem-per-cpu=<%= resources$memory %>
        #SBATCH --account=your_project_id
        <%= if (!is.null(resources$partition)) sprintf(paste0("#SBATCH --partition='", resources$partition, "'")) %>
        <%= if (array.jobs) sprintf("#SBATCH --array=1-%i", nrow(jobs)) else "" %>
        
        
        ## Initialize work environment like
        ## source /etc/profile
        ## module add ...
        
        module add  R/4.0.2-gnu9.1
        
        ## Export value of DEBUGME environemnt var to slave
        export DEBUGME=<%= Sys.getenv("DEBUGME") %>
        <%= sprintf("export OMP_NUM_THREADS=%i", resources$omp.threads) -%>
        <%= sprintf("export OPENBLAS_NUM_THREADS=%i", resources$blas.threads) -%>
        <%= sprintf("export MKL_NUM_THREADS=%i", resources$blas.threads) -%>
        
        
        ## Run R:
        ## we merge R output with stdout from SLURM, which gets then logged via --output option
        
        Rscript -e 'batchtools::doJobCollection("<%= uri %>")'

    .batchtools.conf.R is provided below.

        cluster.functions = makeClusterFunctionsSlurm(template="path/to/slurm.tmpl")

    A test example is provided below, assuming the current working directory has both the slurm.tmpl and .batchtools.conf.R files.

        ml R/4.0.2-gnu9.1
        R
        
        >install.packages("batchtools")
        >library(batchtools)
        >myFct <- function(x) {
        result <- cbind(iris[x, 1:4,],
        Node=system("hostname", intern=TRUE),
        Rversion=paste(R.Version()[6:7], collapse="."))}
        
        >reg <- makeRegistry(file.dir="myregdir", conf.file=".batchtools.conf.R")
        >Njobs <- 1:4 # Define number of jobs (here 4)
        >ids <- batchMap(fun=myFct, x=Njobs)
        >done <- submitJobs(ids, reg=reg, resources=list( walltime=60, ntasks=1, ncpus=1, memory=1024))
        >waitForJobs()
        >getStatus() # Summarize job
        
        

    Profiling R code

    Profiling R code helps to optimize the code by identifying bottlenecks and improve its performance. There are a number of tools that can be used to profile R code.

    Grafana:

    OSC jobs can be monitored for CPU and memory usage using grafana. If your job is in running status, you can get grafana metrics as follows. After logging in to OSC OnDemand, select Jobs from the top tabs, then select Active Jobs, and then the job that you are interested in profiling. You will see grafana metrics at the bottom of the page, and you can click on detailed metrics to access more information about your job in grafana.

    [Screenshot: Grafana metrics for an active job]

    Rprof:

    R’s built-in Rprof function can be used to profile R expressions, and the summaryRprof function to summarize the result. More information can be found here.

    Here is an example of profiling R code with Rprof for a simple analysis of the faithful dataset.

    Rprof("Rprof-out.prof",memory.profiling=TRUE, line.profiling=TRUE)
    data(faithful)
    summary(faithful)
    plot(faithful)
    Rprof(NULL)    

    To analyze the profiled data, run summaryRprof on Rprof-out.prof:

    summaryRprof("Rprof-out.prof")

    You can read more about summaryRprof here.

    Profvis:

    Profvis provides an interactive graphical interface for visualizing data from Rprof.

    library(profvis)
    profvis({
        data(faithful)
        summary(faithful)
        plot(faithful)
    },prof_output="profvis-out.prof")

    If you are running the R code in RStudio, it will automatically open the visualization for the profiled data. More info can be found here.

    Using RStudio for classrooms

    OSC provides an isolated and custom R environment for each classroom project that requires RStudio. More information can be found here.

    Further Reading

    Troubleshooting issues  

    1. If you encounter difficulties launching the RStudio app on OnDemand, review your ~/.bashrc file for any conda/python configurations. Consider commenting out these configurations and attempting to launch the app again.

    2. If your R session is taking too long to initialize, it might be due to issues from a previous session. To resolve this, consider restoring R to a fresh session by removing the previous state stored at

    ~/.local/share/rstudio (~/.rstudio for <R/4.1)

    mv ~/.local/share/rstudio ~/.local/share/rstudio.backup

     

    Further Reading

    See Also

    Supercomputer: 
    Service: 
    Fields of Science: 

    RELION

    RELION (REgularised LIkelihood OptimisatioN) is a stand-alone computer program for the refinement of 3D reconstructions or 2D class averages in electron cryo-microscopy. 

    We have identified some issues with the RELION installations. For more details, please refer to the "Known Issues" section below.

    To address the issues, we have decided to deprecate affected versions, including: relion/3.1, relion/3.1-gpu, relion/3.1.3, relion/4.0.0, and relion/4.0b on both Owens and Pitzer. Additionally, we will deprecate other versions that are rarely used to improve maintenance. These versions include relion2/2.0 and relion/3.0.4.

    The deprecation was carried out on January 23, 2024. To migrate your jobs, please refer to the "Versions" table below for available versions. If you require any assistance, please contact OSC Help.

    Availability and Restrictions

    Versions

    RELION is available on the Owens, Pitzer, and Ascend clusters. The versions currently available at OSC are:

    Version Owens Pitzer Ascend Note
    3.1-cuda10.1   X   Built with CUDA 10.1
    4.0-cuda10.1   X   RELION 4.0 beta2; built with CUDA 10.1
    4.0.1 X X X Built with CUDA 10.2 (11.8 for Ascend)
    5.0b   X   Built with CUDA 11.8
    * Current Default Version

    You can use module spider relion  to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Available third-party packages

    Cluster RELION CTFFIND MotionCor2 GCTF ResMap Unblur & Summovie
    Pitzer 4.0.1, 5.0b 4.1.14 1.4.4 1.18 1.1.4 1.0.2
    Ascend 4.0.1 4.1.14 1.4.5 1.18 1.1.4 1.0.2

    Access

    RELION is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    MRC Laboratory of Molecular Biology, Open source

    Usage

    Usage on Owens

    Set-up

    To set up the environment for RELION on the Owens cluster, use the command:

    module load relion/version
    

    where version is chosen from the available versions (omitting the version will load the default version).

    Usage on Pitzer

    Set-up

    To set up the environment for RELION on the Pitzer cluster, use the command:

    module load relion/version
    

    where version is chosen from the available versions (omitting the version will load the default version).

    Known issues

    Hybrid MPI+OpenMP jobs hang on multiple nodes

    Update: January 2024
    Version: All Intel+MVAPICH2 versions

    Hybrid MPI+OpenMP jobs utilizing programs built with the Intel compiler and MVAPICH2 may experience hangs when running on multiple nodes. This issue is attributed to a known problem in MVAPICH2 built with the Intel compiler stack.

    Resolution

    All versions of Intel+MVAPICH2 have been removed to address this issue.

    Poor performance with hybrid MPI+OpenMP jobs and more than 4 MPI tasks on multiple nodes

    Update: January 2024
    Version: Prior to 5

    RELION versions prior to 5 may exhibit poor performance in hybrid MPI+OpenMP jobs when the number of MPI tasks exceeds 4 on multiple nodes.

    Resolution

    If possible, limit the number of MPI tasks to 4 or fewer for optimal performance; a resource-request sketch is shown below. Consider using RELION version 5 or later, as newer versions may include optimizations and improvements that address this performance issue.
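
    As a concrete illustration of this guidance, below is a minimal Slurm sketch that keeps a hybrid RELION job to 4 MPI tasks. The node/thread counts, module version, and account are example values, and the actual relion_refine_mpi arguments should come from the command line that the RELION GUI generates for your job.

    #!/bin/bash
    #SBATCH --job-name=relion_refine
    #SBATCH --nodes=1
    #SBATCH --ntasks=4
    #SBATCH --cpus-per-task=7
    #SBATCH --time=8:00:00
    #SBATCH --account=<project-account>
    
    module load relion/4.0.1
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    
    # Paste the relion_refine_mpi command generated by the RELION GUI here and
    # launch it with srun, e.g.:
    #   srun relion_refine_mpi <arguments from the GUI> --j $SLURM_CPUS_PER_TASK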

    Further Reading

    Supercomputer: 

    RNA-SeQC

    RNA-SeQC is a java program which computes a series of quality control metrics for RNA-seq data. The input can be one or more BAM files. The output consists of HTML reports and tab delimited files of metrics data. This program can be valuable for comparing sequencing quality across different samples or experiments to evaluate different experimental parameters. It can also be run on individual samples as a means of quality control before continuing with downstream analysis.

    Availability and Restrictions

    Versions

    The following versions of RNA-SeQC are available on OSC clusters:

    Version Owens
    1.1.8 X*
    * Current default version

    You can use module spider rna-seqc to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    RNA-SeQC is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The Broad Institute, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of RNA-SeQC, run the following command:  module load rna-seqc. The default version will be loaded. To select a particular RNA-SeQC version, use module load rna-seqc/version. For example, use module load rna-seqc/1.1.8 to load RNA-SeQC 1.1.8.

    Usage

    This software is distributed as a Java executable .jar file; thus, it cannot simply be added to the PATH environment variable.

    From module load rna-seqc, a new environment variable, RNA_SEQC, will be set. Thus, users can use the software by running the following command: java -jar $RNA_SEQC {other options}.
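
    For illustration, a hedged example invocation is shown below. The -s sample string, BAM, reference FASTA, and GTF paths are placeholders for your own data; see the RNA-SeQC documentation for the full set of options.

    module load rna-seqc
    # Generate a QC report for one sample (all file names are placeholders)
    java -jar $RNA_SEQC -s "Sample1|sample1.bam|demo" \
         -r reference.fasta -t genes.gtf -o rnaseqc_report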

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Rosetta

    Rosetta is a software suite that includes algorithms for computational modeling and analysis of protein structures. It has enabled notable scientific advances in computational biology, including de novo protein design, enzyme design, ligand docking, and structure prediction of biological macromolecules and macromolecular complexes.

     

    Availability and Restrictions

    Versions

    The Rosetta suite is available on Owens and Pitzer. The versions currently available at OSC are:

     

    Version Owens Pitzer
    3.10 X X
    3.12 X* X*
    * Current default version

    You can use  module spider rosetta to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users 

    Rosetta is available to academic OSC users. Please review the license agreement carefully before use. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Rosetta, Non-Commercial

    Usage

    Usage on Owens and Pitzer

    To set up your environment for rosetta load one of its module files:

    module load rosetta/3.12

    Here is an example batch script that uses the Rosetta AbinitioRelax application:

    #!/bin/bash
    #SBATCH --job-name="rosetta_abinitio_relax_job"
    #SBATCH --ntasks=1
    #SBATCH --time=0:10:0
    #SBATCH --account=PAS1234
    
    scontrol show job $SLURM_JOB_ID
    export
    
    module reset
    module load rosetta/3.12
    module list
    
    echo $TMPDIR
    cd $TMPDIR
    mkdir input_files
    
    sbcast -p $ROSETTA3/demos/tutorials/denovo_structure_prediction/Denovo_structure_prediction.md $TMPDIR/Denovo_structure_prediction.md
    sbcast -p $ROSETTA3/demos/tutorials/denovo_structure_prediction/folding_funnels.png $TMPDIR/folding_funnels.png
    
    cd $ROSETTA3/demos/tutorials/denovo_structure_prediction/input_files/
    for FILE in *
    do
            sbcast -p $FILE $TMPDIR/input_files/$FILE
    done 
    cd $TMPDIR
    AbinitioRelax.linuxiccrelease @input_files/options
    
    ls -l
    sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}/sgather
    

     

    Here is an example batch script that runs a docking protocol with the MPI-enabled rosetta_scripts application:

    #!/bin/bash
    #SBATCH --job-name="rosetta_scripts_mpi_docking_job"
    #SBATCH --nodes=2
    #SBATCH --time=0:10:0
    #SBATCH --account=PAS1234 
    
    scontrol show job $SLURM_JOB_ID
    export
     
    module reset
    module load rosetta/3.12
    module list
    
    sbcast -p ~support/share/reframe/source/rosetta/6shs_PIB.pdb $TMPDIR/6shs_PIB.pdb
    sbcast -p ~support/share/reframe/source/rosetta/pib-abeta.xml $TMPDIR/pib-abeta.xml
    sbcast -p ~support/share/reframe/source/rosetta/pib.params $TMPDIR/pib.params
    
    cd $TMPDIR
    srun rosetta_scripts.mpi -s 6shs_PIB.pdb -nstruct 100 -extra_res_fa pib.params -parser:protocol pib-abeta.xml -add_orbitals True -out:prefix t_ -out:pdb True
    
    sgather -pr $TMPDIR ${SLURM_SUBMIT_DIR}/sgather

    Further Reading

     
    Supercomputer: 
    Service: 

    SAMtools

    SAM format is a generic format for storing large nucleotide sequence alignments. SAMtools provides various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.

    Availability and Restrictions

    The following versions of SAMtools are available on OSC clusters:

    Version Owens Pitzer
    1.3.1 X  
    1.6 X  
    1.8   X
    1.9 X  
    1.10 X* X*
    1.16.1 X X
    * Current default version

    You can use  module spider samtools to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    SAMtools is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Genome Research Ltd., Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of SAMtools, run the following command:    module load samtools  . The default version will be loaded. To select a particular SAMtools version, use    module load samtools/version  . For example, use   module load samtools/1.3.1   to load SAMtools 1.3.1.
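
    As a quick, hedged illustration, a typical sort/index/summary workflow might look like the following; my_sample.bam is a placeholder for your own alignment file.

    module load samtools
    samtools sort -@ 4 -o my_sample.sorted.bam my_sample.bam   # coordinate-sort with 4 threads
    samtools index my_sample.sorted.bam                        # create the .bai index
    samtools flagstat my_sample.sorted.bam                     # print alignment statistics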

    Usage on Pitzer

    Set-up

    To configure your environment for use of SAMtools, run the following command:    module load samtools  . The default version will be loaded. 

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    SIESTA

    SIESTA is both a method and its computer program implementation, to perform efficient electronic structure calculations and ab initio molecular dynamics simulations of molecules and solids. More information can be found here.

    Availability and Restrictions

    Versions

    SIESTA is available on the Owens and Pitzer clusters. A serial and a parallel build were created in order to meet users' computational needs.

    Version Owens Pitzer
    4.0 X  
    4.0.2 X* X*
    * Current default version

    You can use module spider siesta to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    SIESTA newer than version 4.0 is under GPL license. Therefore, any users can access SIESTA on Owens. If you have any questions, please contact OSC Help for further information.

    Publisher/Vendor/Repository and License Type

    https://departments.icmab.es/leem/siesta/, Open source

    Usage

    Batch Usage

    When you log into owens.osc.edu, you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your SIESTA job to the batch system for execution.

    Assume that you have a test case in your work directory (where you submit your job, represented by $SLURM_SUBMIT_DIR), with the input file 32_h2o.fdf. A batch script can be created and submitted for a serial or parallel run. The following are the sample batch scripts for running serial and parallel SIESTA jobs.  Sample batch scripts and input files are also available here:

    ~srb/workshops/compchem/siesta/
    

    Sample Batch Script for Serial Jobs

    #!/bin/bash
    #SBATCH --time=0:30:00
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --job-name=siesta
    #SBATCH --account <project-account>
    #
    # Set up the package environment
    module load siesta
    #
    # Execute the serial solver (nodes=1, ntasks-per-node<=28)
    siesta <32_h2o.fdf> output
    exit
    NOTE: Owens nodes have 28 cores, so --ntasks-per-node should be no more than 28.

    Sample Batch Script for Parallel Jobs

    #!/bin/bash
    #SBATCH --time=0:30:00
    #SBATCH --nodes=2 --ntasks-per-node=28
    #SBATCH --job-name=siesta
    #SBATCH --account <project-account> 
    #
    # Set up the package environment
    module swap intel/12.1.4.319 intel/13.1.3.192
    module load siesta_par
    #
    # Execute the parallel solver (nodes>1, ppn=28)
    srun siesta <32_h2o.fdf> output
    exit
    NOTE: Owens nodes have 28 cores, so use --ntasks-per-node=28 for parallel runs on Owens.

    Usage on Pitzer

    Below is a sample batch script for running SIESTA on Pitzer:

    #!/bin/bash
    #SBATCH --time=0:30:00 
    #SBATCH --nodes=1 --ntasks-per-node=48
    #SBATCH --job-name=siesta
    #SBATCH --account <project-account> 
    #
    # Set up the package environment
    module load siesta
    #
    # Execute the serial solver (nodes=1, ppn<=48)
    siesta <32_h2o.fdf> output
    exit

    Further Reading

    Online documentation is available at the SIESTA homepage.

    Citations

    This is required for the versions older than 4.0.

    1. “Self-consistent order-N density-functional calculations for very large systems”, P. Ordejón, E. Artacho and J. M. Soler, Phys. Rev. B (Rapid Comm.) 53, R10441-10443 (1996).

    2. “The SIESTA method for ab initio order-N materials simulation”, J. M. Soler, E. Artacho,J. D. Gale, A. García, J. Junquera, P. Ordejón, and D. Sánchez-Portal, J. Phys.: Condens. Matt. 14, 2745-2779 (2002).

    Supercomputer: 
    Service: 

    SRA Toolkit

    The Sequence Read Archive (SRA) stores raw sequence data from "next-generation" sequencing technologies including 454, IonTorrent, Illumina, SOLiD, Helicos and Complete Genomics. In addition to raw sequence data, SRA now stores alignment information in the form of read placements on a reference sequence. The SRA Toolkit provides tools to operate directly on SRA runs.

    Availability and Restrictions

    The following versions of SRA Toolkit are available on OSC clusters:

    Version Owens Pitzer Note
    2.6.3 X   These versions no longer support  downloading SRA data** but still can be used to process local data.
    2.9.0 X  
    2.9.1   X
    2.9.6 X* X*
    2.10.7 X X  
    2.11.2 X X  
    3.0.2 X X  
    * Current default version
    ** NCBI now uses cloud-style object stores. To access SRA cloud data, use version 2.10 or later and provide your AWS or GCP access credentials (recommended) to vdb-config. For more information, see https://github.com/ncbi/sra-tools/wiki/04.-Cloud-Credentials.

    You can use  module spider sratoolkit to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    SRA Toolkit is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    National Center for Biotechnology Information, Freeware

    Usage

    Usage on Pitzer and Owens

    Set-up

    To configure your environment for use of SRA Toolkit, run the following command: module load sratoolkit. The default version will be loaded. To select a particular SRA Toolkit version, use module load sratoolkit/version. For example, use module load sratoolkit/2.11.2 to load SRA Toolkit 2.11.2

    Download SRA Data

    NCBI now uses cloud-style object stores. To access SRA cloud data, use version 2.10 or later and provide your AWS or GCP access credentials (recommended) to vdb-config. For more information, see https://github.com/ncbi/sra-tools/wiki/04.-Cloud-Credentials.

    Set up the credentials (recommended)

    Once you have obtained an AWS or GCP credential file, you can set the credentials by following these steps:

    module load sratoolkit/2.11.2
    vdb-config --report-cloud-identity yes 
    
    # For GCP credentials
    vdb-config --set-gcp-credentials /path/to/gcp/credential/file
    
    # For AWS credentials
    vdb-config --set-aws-credentials /path/to/aws/credential/file
    
    Each version of the toolkit comes with its own set of configuration options. To modify the defaults, run vdb-config -i to access the interactive configuration. For additional information, please visit the following link: https://github.com/ncbi/sra-tools/wiki/03.-Quick-Toolkit-Configuration.

    You can now download SRA data using prefetch 

    prefetch SRR390728
    

    The default download path is located in your home directory at ~/ncbi. For instance, if you're looking for the SRA file SRR390728.sra, you can find it at ~/ncbi/sra, and the resource files can be found at ~/ncbi/refseq. You can use srapath to verify if the SRA accession is accessible in the download path

    $ srapath SRR390728
    /users/PAS1234/johndoe/ncbi/sra/sra/SRR390728.sra
    

    You can now run other SRA tools, such as fastq-dump, on computing nodes. Here is an example job script:

    #!/bin/bash
    #SBATCH --job-name use_fastq_dump
    #SBATCH --time=0:10:0
    #SBATCH --ntasks=1
    
    module load sratoolkit/2.11.2
    module list
    fastq-dump -X 5 -Z SRR390728

    Unfortunately, the home directory file system is not optimized for heavy workloads. If the SRA file is particularly large, you can change the default download path for SRA data to our scratch file system using one of the following two approaches. Both approaches use the /fs/scratch/PAS1234/johndoe/ncbi directory as an example.

    Change the prefetch directory using vdb-config

    module load sratoolkit/2.11.2
    vdb-config -s /repository/user/main/public/root=/fs/scratch/PAS1234/johndoe/ncbi
    prefetch SRR390728
    srapath SRR390728
    

    You should find the SRR390728 accession at /fs/scratch/PAS1234/johndoe/ncbi/sra/SRR390728.sra

    Download to the current directory (available for version 2.10 or later)

    module load sratoolkit/2.11.2
    vdb-config --prefetch-to-cwd
    mkdir -p /fs/scratch/PAS1234/johndoe/ncbi
    cd /fs/scratch/PAS1234/johndoe/ncbi
    prefetch SRR390728
    srapath SRR390728
    

    You should find the SRR390728 accession at /fs/scratch/PAS1234/johndoe/ncbi/SRR390728/SRR390728.sra

    Known Issues

    Error when downloading SRA data

    NCBI now utilizes cloud-style object stores. To access SRA cloud data, please use version 2.10 or later and provide your AWS or GCP access credentials to vdb-config. For more information, please visit https://github.com/ncbi/sra-tools/wiki/04.-Cloud-Credentials. However, you can continue to use older versions to process SRA local data.

     

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    STAR

    STAR: Spliced Transcripts Alignment to a Reference.

    Availability and Restrictions

    Versions

    The following versions of STAR are available on OSC clusters:

    Version Owens Pitzer
    2.5.2a X* X*
    2.6.0a X  
    2.7.9a X X
    * Current default version

    You can use module spider star to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    STAR is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Alexander Dobin, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of STAR, run the following command: module load star. The default version will be loaded. To select a particular STAR version, use module load star/version. For example, use module load star/2.5.2a to load STAR 2.5.2a.
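
    For illustration, a minimal two-step STAR workflow (index generation, then alignment) is sketched below; genome.fa, genes.gtf, and the FASTQ files are placeholders for your own data, and thread counts should match your job's resource request.

    module load star
    # Step 1: build a genome index (placeholder inputs)
    mkdir -p star_index
    STAR --runMode genomeGenerate --genomeDir star_index \
         --genomeFastaFiles genome.fa --sjdbGTFfile genes.gtf --runThreadN 8
    # Step 2: align paired-end reads against the index
    STAR --genomeDir star_index --readFilesIn sample_1.fastq sample_2.fastq \
         --runThreadN 8 --outSAMtype BAM SortedByCoordinate --outFileNamePrefix sample_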

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    STAR-CCM+

    STAR-CCM+ provides the world’s most comprehensive engineering physics simulation inside a single integrated package. Much more than a CFD code, STAR‑CCM+ provides an engineering process for solving problems involving flow (of fluids and solids), heat transfer and stress. STAR‑CCM+ is unrivalled in its ability to tackle problems involving multi‑physics and complex geometries.  Support is provided by CD-adapco. CD-adapco usually releases new version of STAR-CCM+ every four months.

    Availability and Restrictions

    Versions

    STAR-CCM+ is available on the Owens Cluster. The versions currently available at OSC are:

    Version Owens
    11.02.010 X
    11.06.011 X
    12.04.010 X
    12.06.010 X
    13.02.011 X
    13.04.011 X
    14.02.010 X
    14.04.013 X
    15.02.007 X*
    15.06.008 X
    16.02.008 X
    17.02.007 X
    18.02.010 X
    18.04.008 X
    18.06.006 X
    * Current default version

    We have the STAR-CCM+ Academic Pack, which includes STAR-CCM+, STAR-innovate, CAD Exchange, STAR-NX, STAR-CAT5, STAR-Inventor, STAR-ProE, JTOpen Reader, EHP, Admixturs, Vsim, CAT, STAR-ICE, Battery Design Studio, Battery Simulation Module, SPEED, SPEED/Enabling PC-FEA, SPEED/Optimate, DARS, STAR-CD, STAR-CD/Reactive Flow Models, STAR-CD/Motion, esiece, and pro-STAR.

    You can use module spider starccm  to view available modules for a given machine. The default versions are in double precision. Please check with module spider starccm  to see if there is a mixed precision version available. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users

    Academic users can use STAR-CCM+ on OSC machines if the user or user's institution has proper STAR-CCM+ license. Currently, users from Ohio State University, University of Cincinnati, University of Akron, and University of Toledo can access the OSC's license.

    Use of STAR-CCM+ for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction. 

    Currently, OSC has an 80-seat license (ccmpsuite, which allows up to 80 concurrent users), with 4,000 HPC licenses (DOEtoken) for academic users. 

    Access for Commercial Users

    Contact OSC Help for getting access to STAR-CCM+ if you are a commercial user.

    Publisher/Vendor/Repository and License Type

    Siemens, Commercial

    Usage

    Usage on Owens

    Set-up on Owens

    We recommend running STAR-CCM+ only on the compute nodes. Thus, all STAR-CCM+ jobs should be submitted via the batch scheduling system, either as interactive or non-interactive batch jobs. To load the default version of the STAR-CCM+ module on Owens, use  module load starccm . To select a particular software version, use   module load starccm/version . For example, use  module load starccm/11.02.010  to load STAR-CCM+ version 11.02.010 on Owens.

    Batch Usage on Owens

    When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your STAR-CCM+ analysis to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.  Batch jobs run on the compute nodes of the system and not on the login node. It is desirable for big problems since more resources can be used. STAR-CCM+ can be run on OSC clusters in either interactive mode or in non-interactive batch mode.

    Interactive Batch Session

    Interactive mode is similar to running STAR-CCM+ on a desktop machine in that the graphical user interface (GUI) will be sent from OSC and displayed on the local machine. To run STAR-CCM+ interactively, it is suggested to request the necessary compute resources from the login node, with X11 forwarding. The intention is that users can run STAR-CCM+ interactively for the purpose of building their model, preparing the input file (.sim file), and checking results. Once developed, this input file can then be run in non-interactive batch mode. For example, the following line requests one node with 28 cores ( -N 1 -n 28 ), for a walltime of one hour ( -t 1:00:00 ), with one STAR-CCM+ base license token ( -L starccm@osc:1 ) on Owens:

    sinteractive -N 1 -n 28 -t 1:00:00 -L starccm@osc:1

    This job will queue until resources become available. Once the job is started, you're automatically logged in on the compute node; and you can launch STAR-CCM+ GUI with the following commands:

    module load starccm
    starccm+ -mesa
    
    Non-interactive Batch Job (Serial Run using 1 Base Token)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice.

    Below is the example batch script ( job.txt ) for a serial run with an input file ( starccm.sim ) on Owens:

    #!/bin/bash
    #SBATCH --job-name=starccm_test
    #SBATCH --time=1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=1
    #SBATCH -L starccm@osc:1
    
    cd $TMPDIR  
    cp $SLURM_SUBMIT_DIR/starccm.sim .  
    module load starccm  
    starccm+ -batch starccm.sim >&output.txt  
    cp output.txt $SLURM_SUBMIT_DIR
    

    To run this job on OSC batch system, the above script is to be submitted with the command:

    sbatch job.txt
    
    Non-interactive Batch Job (Parallel Run using HPC Tokens)

    To take advantage of the powerful compute resources at OSC, you may choose to run distributed STAR-CCM+ for large problems. Multiple nodes and cores can be requested to accelerate the solution time. The following shows an example script if you need 2 nodes with 28 cores per node on Owens using the inputfile named   starccm.sim   :

    #!/bin/bash
    #SBATCH --job-name=starccm_test
    #SBATCH --time=3:00:00
    #SBATCH --nodes=2 --ntasks-per-node=28
    #SBATCH -L starccm@osc:1,starccmpar@osc:55
    
    cp starccm.sim $TMPDIR
    cd $TMPDIR
    module load starccm 
    
    srun hostname | sort -n > ${SLURM_JOB_ID}.nodelist
    
    starccm+ -np 56 -batch -machinefile ${SLURM_JOB_ID}.nodelist -mpi openmpi starccm.sim >&output.txt 
    cp output.txt $SLURM_SUBMIT_DIR
    

    In addition to requesting the STAR-CCM+ base license token ( -L starccm@osc:1 ), you need to request copies of the  starccmpar  license, i.e., HPC tokens ( -L starccm@osc:1,starccmpar@osc:[n] ), where [n] is equal to the number of cores minus 1.

    We recommend using OpenMPI for your parallel jobs. In particular, version 17.02.007 does not work with IntelMPI.

    Known Issues

    Update: 03/21/2022 
    Version: 15.02.007, 15.02.007-mixed

    STAR-CCM+ 15.02.007 and 15.02.007-mixed with IntelMPI fail on multiple-node jobs after the downtime on Mar 22, 2022. Please use OpenMPI instead. 

    starccm+ -np $SLURM_NTASKS -batch -machinefile ${SLURM_JOB_ID}.nodelist -mpi openmpi {your-input-file}
    
    Update: 05/24/2022 
    Version: 17.02.007 
    Issue: large parallel jobs fails randomly

    Large parallel jobs with STAR-CCM+ 17.02.007 may fail with the OpenMPI provided by the STAR-CCM+ installation. Please use the OpenMPI installed by OSC instead, as follows:

    ...
    module load starccm/17.02.007
    module load openmpi/4.0.3-hpcx
    export OPENMPI_DIR=/usr/local/openmpi/intel/19.0/4.0.3-hpcx
    srun hostname | sort -n > ${SLURM_JOB_ID}.nodelist
    ...
    starccm+ -np $SLURM_NTASKS -batch -machinefile ${SLURM_JOB_ID}.nodelist -mpi openmpi {your-input-file}...
    ...
    
    Update: 04/24/2023 
    Version: 18.02.010 
    Issue: IntelMPI not supported on glibc 2.17. Use OpenMPI instead.

    See Also

    Supercomputer: 
    Service: 

    Run STAR-CCM+ to STAR-CCM+ Coupling

    This page discusses how to run a STAR-CCM+ to STAR-CCM+ coupling simulation in a batch job at OSC. The following example demonstrates the process using STAR-CCM+ version 11.02.010 on Owens. Depending on the version of STAR-CCM+ and the cluster you work on, there might be some differences from the example. Feel free to contact OSC Help if you have any questions. 

    Prepare Lagging Simulation

    • Launch the STAR-CCM+ GUI following the instructions on this page
    • Load the simulation that lags and prepare the lagging simulation following the STAR-CCM+ User Guide
      • Activate a co-simulation model
      • Set "Concurrency mode -> Method" to Lag
      • Other setups
    • Save the lagging simulation and name it for example as lag.sim 

    Prepare Leading Simulation

    • Load the simulation that leads and prepare the leading simulation following the STAR-CCM+ User Guide
      • Activate a co-simulation model
      • Set "Concurrency mode -> Method" to Lead
    • Go to the "Connect Method" node by selecting "Co-Simulations -> <name of co-simulation> -> Conditions". Click "Edit" of "Connect Method". In "Connect Method" node, select "Launch Application and Connect" under method. Under "Launch Application and Connect", put the following information as "Launch Command":

     /usr/local/starccm/11.02.010/STAR-CCM+11.02.010-R8/star/bin/starccm+ -load -server -rsh /usr/local/bin/pbsrsh lag.sim

    [Screenshot: the Connect Method settings]

    • Save the leading simulation and name it for example as lead.sim

    Prepare Job Script

    In the job script, use the following command to run the co-simulation:

    starccm+ -np N,M -rsh /usr/local/bin/pbsrsh -batch -machinefile $PBS_NODEFILE lead.sim 

    where N is the number of cores for the leading simulation and M is the number of cores for the lagging simulation; N plus M should equal the total number of cores you request in the job.

    Once the job is completed, the output results of the leading simulation will be returned, while the lagging simulation runs on the background server and the final results won't be saved. 

     

     

    Supercomputer: 
    Service: 

    STAR-Fusion

    STAR-Fusion is a component of the Trinity Cancer Transcriptome Analysis Toolkit (CTAT). STAR-Fusion uses the STAR aligner to identify candidate fusion transcripts supported by Illumina reads. STAR-Fusion further processes the output generated by the STAR aligner to map junction reads and spanning reads to a reference annotation set.

    Availability and Restrictions

    Versions

    The following versions of STAR-Fusion are available on OSC clusters:

    Version Owens
    0.7.0 X*
    1.4.0 X
    * Current default version

    You can use module spider star-fusion to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    STAR-Fusion is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Broad Institute, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of STAR-Fusion, run the following command:  module load star-fusion. The default version will be loaded. To select a particular STAR-Fusion version, use module load star-fusion/version. For example, use module load star-fusion/0.7.0 to load STAR-Fusion 0.7.0.
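
    As a hedged sketch, a typical fusion-detection run is shown below. The FASTQ files and the CTAT genome library directory are placeholders, and option names can differ slightly between versions, so check STAR-Fusion --help for the version you load.

    module load star-fusion
    STAR-Fusion --left_fq sample_1.fastq.gz --right_fq sample_2.fastq.gz \
                --genome_lib_dir /path/to/ctat_genome_lib_build_dir \
                --CPU 8 --output_dir star_fusion_out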

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Salmon

    Salmon is a tool for quantifying the expression of transcripts using RNA-seq data.

    Availability and Restrictions

    Versions

    Salmon is available on the Owens Cluster. The versions currently available at OSC are:

    Version Owens Pitzer
    0.8.2 X*  
    1.0.0 X  
    1.2.1 X X*
    1.4.0   X
    1.10.0   X
    * Current default version

    You can use module spider salmon to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Salmon is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Patro, R. et al., Freeware

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Salmon, use the command module load salmon. This will load the default version.
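
    For illustration, a minimal index-then-quantify run is sketched below; transcripts.fa and the FASTQ files are placeholders for your own data.

    module load salmon
    salmon index -t transcripts.fa -i salmon_index           # build the transcriptome index
    salmon quant -i salmon_index -l A \
           -1 sample_1.fastq.gz -2 sample_2.fastq.gz \
           -p 8 -o salmon_quant                              # quantify paired-end reads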

    Further Reading

    Supercomputer: 

    ScaLAPACK

    ScaLAPACK is a library of high-performance linear algebra routines for clusters supporting MPI. It contains routines for solving systems of linear equations, least squares problems, and eigenvalue problems.

    This page documents usage of the ScaLAPACK library installed by OSC from source. An optimized implementation of ScaLAPACK is included in MKL; see the software documentation page for Intel Math Kernel Library for usage information.

    Availability and Restrictions

    Versions

    The versions currently available at OSC are:

    Version Owens Pitzer Ascend
    2.0.2 X X  
    2.1.0 X* X*  
    2.2.0     X*
    * Current default version

    You can use module spider scalapack to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    ScaLAPACK is available to all OSC users. If you need high performance, we recommend using MKL instead of the standalone ScaLAPACK module. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Univ. of Tennessee; Univ. of California, Berkeley; Univ. of Colorado Denver; and NAG Ltd./ Open source

    Usage

    Usage on Owens

    Set-up

    Initializing the environment for the ScaLAPACK libraries depends on the system and compiler you are using. To use the ScaLAPACK libraries in your compilation, run the following command: module load scalapack. To load a particular version, use module load scalapack/version. For example, use  module load scalapack/2.0.2 to load ScaLAPACK version 2.0.2. You can use module spider scalapack to view available modules.

    Building with ScaLAPACK

    Once loaded, the ScaLAPACK libraries can be linked in with your compilation. To do this, use the following environment variables.  You must also link with MKL.  With the Intel compiler, just add -mkl to the end of the link line.  With other compilers, load the mkl module and add $MKL_LIBS to the end of the link line.

    Variable Use
    $SCALAPACK_LIBS Used to link ScaLAPACK into either Fortran or C
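
    For example, linking a hypothetical MPI Fortran program my_solver.f90 might look like the following; mpif90 resolves to the wrapper for whichever compiler and MPI modules you have loaded.

    # Intel compiler: MKL is added with -mkl
    module load scalapack
    mpif90 my_solver.f90 -o my_solver $SCALAPACK_LIBS -mkl
    
    # GNU or other compilers: load mkl and link $MKL_LIBS explicitly
    module load mkl
    mpif90 my_solver.f90 -o my_solver $SCALAPACK_LIBS $MKL_LIBS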

    Usage on Pitzer

    Set-up

    Initializing the environment for the ScaLAPACK libraries depends on the system and compiler you are using. To use the ScaLAPACK libraries in your compilation, run the following command: module load scalapack. To load a particular version, use module load scalapack/version. For example, use  module load scalapack/2.0.2 to load ScaLAPACK version 2.0.2. You can use module spider scalapack to view available modules.

    Building with ScaLAPACK

    Once loaded, the ScaLAPACK libraries can be linked in with your compilation. To do this, use the following environment variables.  You must also link with MKL.  With the Intel compiler, just add -mkl to the end of the link line.  With other compilers, load the mkl module and add $MKL_LIBS to the end of the link line.

    VARIABLE USE
    $SCALAPACK_LIBS Used to link ScaLAPACK into either Fortran or C

    Usage on Ascend

    Set-up

    Initializing the environment for the ScaLAPACK libraries depends on the system and compiler you are using. To use the ScaLAPACK libraries in your compilation, run the following command: module load scalapack. To load a particular version, use module load scalapack/version. For example, use  module load scalapack/2.2.0 to load ScaLAPACK version 2.2.0. You can use module spider scalapack to view available modules.

    Building with ScaLAPACK

    Once loaded, the ScaLAPACK libraries can be linked in with your compilation. To do this, use the following environment variables.  You must also link with MKL.  With the Intel compiler, just add -mkl to the end of the link line.  With other compilers, load the mkl module and add $MKL_LIBS to the end of the link line.

    VARIABLE USE
    $SCALAPACK_LIBS Used to link ScaLAPACK into either Fortran or C

    Further Reading

    Supercomputer: 
    Service: 

    Schrodinger

    The Schrodinger molecular modeling software suite includes a number of popular programs focused on drug design and materials science but of general applicability, for example Glide, Jaguar, and MacroModel.  Maestro is the graphical user interface for the suite.  It allows the user to construct and graphically manipulate both simple and complex chemical structures, to apply molecular mechanics and dynamics techniques to evaluate the energies and geometries of molecules in vacuo or in solution, and to display and examine graphically the results of the modeling calculations.

    Availability and Restrictions

    Versions

    The Schrodinger suite is available on Owens. The versions currently available at OSC are:

    Version Owens
    15 X
    16 X
    2018.3 X
    2019.3 X
    2020.1 X*
    * Current default version

    You can use  module spider schrodinger to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Schrodinger is available to all academic users.  

    To use Schrodinger you will have to be added to the license server first.  Please contact OSC Help to be added. Please note that if you are a non-OSU user, we need to send your name, contact email, and affiliation information to Schrodinger in order to grant access. Currently, we have licenses for the following features:

    CANVAS_ELEMENTS
    CANVAS_MAIN
    CANVAS_SHARED
    COMBIGLIDE_MAIN
    EPIK_MAIN
    FFLD_OPLS_MAIN
    GLIDE_MAIN
    GLIDE_XP_DESC
    IMPACT_MAIN
    KNIME_MAIN
    LIGPREP_MAIN
    MAESTRO_MAIN
    MMLIBS
    MMOD_CONFGEN
    MMOD_MACROMODEL
    MMOD_MBAE
    QIKPROP_MAIN
    

    You need to use one of the following software flags in order to use the particular feature of the software without license errors.

    macromodel, glide, ligprep, qikprop, epik
    

    For example, you can add -L glide@osc:1 to your job script if you use Glide. When you use this software flag, your job won't start until the required licenses are available. Please read the batch script examples below.  You can check your license usage via the license usage checking tool.

    Publisher/Vendor/Repository and License Type

    Schrodinger, LLC/ Commercial

    Usage

    Usage on Owens

    To set up your environment for schrodinger load one of its modulefiles:

    module load schrodinger/2019.3
    

    Using schrodinger interactively requires an X11 connection. Typically one will launch the graphical user interface maestro.  This can be done with either software rendering:

    maestro -SGL
    

    or with hardware rendering:

    module load vglrun
    vglrun maestro
    

    Note that hardware rendering requires a node with a GPU as well as the additional vglrun syntax above.  In principle hardware rendering is superior; however, in practice it can be laggier, and thus software rendering can yield a better experience.

    Here is an example batch script that uses schrodinger non-interactively via the batch system:

    #!/bin/bash
    # Example glide single node batch script.
    #SBATCH --job-name=glidebatch
    #SBATCH --time=1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH -L glide@osc:1
    
    module load schrodinger
    cp * $TMPDIR
    cd $TMPDIR
    host=`srun hostname|head -1`
    nproc=`srun hostname|wc -l`
    glide -WAIT -HOST ${host}:${nproc} -NJOBS 40 receptor_glide.in
    ls -l
    cp * $SLURM_SUBMIT_DIR
    

    The glide command passes control to the Schrodinger Job Control utility which processes the two options: The WAIT option forces the glide command to wait until all tasks of the command are completed. This is necessary for the batch jobs to run effectively. The HOST option specifies how tasks are distributed over processors.  In addition, the glide option NJOBS distributes the job into subjobs which can number more than the licenses or processors specified in the batch directives.

    Determining the optimal amount of resources will probably require benchmarking.  See the Schrodinger Knowledge Base for advice, e.g., running glide in parallel and docking a large database.  Note also that OSC imposes a usage limit of 16 concurrent glide licenses per group. So while --ntasks-per-node=28 requests a whole Owens node, which may have significant performance benefits even if all processors are not used, it is not possible to have that many glide licenses.

    Further Reading

    Supercomputer: 
    Service: 

    Scipion

    Scipion is an image processing framework for obtaining 3D models of macromolecular complexes using electron microscopy (3DEM). It integrates several software packages and presents a unified interface for both biologists and developers. Scipion allows you to execute workflows combining different software tools, while taking care of formats and conversions. Additionally, all steps are tracked and can be reproduced later on.

    Availability and Restrictions

    Versions

    The following versions are available on OSC clusters:

    Version Pitzer
    3.0.8 X*
    * Current default version

    You can use module spider scipion to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Scipion is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    All Scipion code and plugins are licensed under the GPLv3 (http://www.gnu.org/licenses/gpl-3.0.html).

    However, Scipion interacts with, and in some cases installs, third-party software with its own license that must be observed.

    It is therefore the user's responsibility to check the license of each piece of software that Scipion installs.

    In most cases, if not all, the software is freely available for academic and industry use, but there are a few exceptions where industry users are not granted free usage. You must check each case.

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of scipion, run the following command: module load scipion. The default version will be loaded. To select a particular scipion version, use module load scipion/version. For example, use module load scipion/3.0.8 to load SCIPION 3.0.8

    Scipion/3.0.8 was built with gnu/9.1.0, openmpi/4.0.3-hpcx, cuda/10.1.168, and hdf5/1.12.0.

    Plugins

    The following plugins are installed

    scipion-em-xmipp
    scipion-em-resmap
    scipion-em-sphire
    scipion-em-localrec
    scipion-em-bsoft
    scipion-em-ccp4
    scipion-em-cryoef
    scipion-em-spider
    scipion-em-imagic
    

     

    Further Reading

    Supercomputer: 
    Service: 

    SnpEff

    SnpEff is a variant annotation and effect prediction tool. It annotates and predicts the effects of variants on genes (such as amino acid changes).

    Availability and Restrictions

    Versions

    The following versions of SnpEff are available on OSC clusters:

    Version Owens
    4.2 X*
    * Current default version

    You can use  module spider snpeff to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    SnpEff is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    http://snpeff.sourceforge.net, Open source

    Usage 

    Usage on Owens

    Set-up

    To configure your environment for use of SnpEff, run the following command: module load snpeff. The default version will be loaded. To select a particular SnpEff version, use module load snpeff/version. For example, use module load snpeff/4.2 to load SnpEff 4.2.

    Usage

    This software consists of Java executable .jar files; thus, it cannot simply be added to the PATH environment variable.

    From module load snpeff, new environment variables, SNPEFF and SNPSIFT, will be set. Thus, users can use the software by running the following command: java -jar $SNPEFF {other options}, or java -jar $SNPSIFT {other options}.
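
    As a hedged example, a typical annotate-then-filter workflow is shown below. GRCh38.86 is an example database name (databases may need to be downloaded first), input.vcf is a placeholder, and the SnpSift filter expression is illustrative.

    module load snpeff
    # Annotate variants against an example database
    java -jar $SNPEFF GRCh38.86 input.vcf > input.annotated.vcf
    # Keep only variants with quality >= 30 using SnpSift
    java -jar $SNPSIFT filter "QUAL >= 30" input.annotated.vcf > input.filtered.vcf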

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Spark

    Apache Spark is an open source cluster-computing framework originally developed in the AMPLab at the University of California, Berkeley, and later donated to the Apache Software Foundation, where it remains today. In contrast to Hadoop's disk-based analytics paradigm, Spark performs multi-stage in-memory analytics. Spark can run programs up to 100x faster than Hadoop's MapReduce in memory, or 10x faster on disk. Spark supports applications written in Python, Java, Scala, and R.

    Availability and Restrictions

    Versions

    The following versions of Spark are available on OSC systems: 

    Version Owens Pitzer Note
    2.0.0 X*   Only support Python 3.5
    2.1.0 X   Only support Python 3.5
    2.3.0 X    
    2.4.0 X X*  
    2.4.5 X X  
    * Current default version

    You can use module spider spark to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Spark is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The Apache Software Foundation, Open source

    Usage

    Set-up

    In order to configure your environment for the usage of Spark, run the following command:

    module load spark

    A particular version of Spark can be loaded as follows

    module load spark/2.3.0
    

    Using Spark

     To run Spark in batch, reference the example batch script below. This script requests 2 nodes with 48 cores each (adjust --ntasks-per-node to match the cluster; Owens nodes have 28 cores) for 1 hour of walltime. The script submits the PySpark script test.py using the pbs-spark-submit command. 

    #!/bin/bash 
    #SBATCH --job-name ExampleJob 
    #SBATCH --nodes=2 --ntasks-per-node=48 
    #SBATCH --time=01:00:00 
    #SBATCH --account=your_project_id
    
    module load spark
    
    cp test.py $TMPDIR
    cd $TMPDIR 
    
    pbs-spark-submit test.py  > test.log
    
    cp * $SLURM_SUBMIT_DIR
    
    

    pbs-spark-submit script is used for submitting Spark jobs. For more options, please run,

    pbs-spark-submit --help
    

    Running Spark interactively in batch

    To run Spark interactively through the batch system on Owens, please run the following command:

     sinteractive -N 2 -n 28 -t 01:00:00 

    When your interactive shell is ready, launch the Spark cluster using the pbs-spark-submit script:

    pbs-spark-submit

    You can then launch pyspark by connecting to the Spark master node as follows:

    pyspark --master spark://nodename.ten.osc.edu:7070

    Launching Jupyter+Spark on OSC OnDemand

    Instructions on how to launch Spark on the OSC OnDemand web interface are available here: https://www.osc.edu/content/launching_jupyter_spark_app

    Custom Spark Property values

    When launching a Spark application on OnDemand, users can provide a path to a custom property file that replaces Spark's default configuration settings. This allows for greater customization and optimization of Spark's behavior based on the specific needs of the application.

    However, it's important to note that before setting the configuration using a custom property file, users should ensure that there are enough resources on the cluster to handle the requested configuration. 

    Example of a custom property file, spark_custom.conf:

    spark.executor.instances 2 
    spark.executor.cores 2 
    spark.executor.memory 60g 
    spark.driver.memory 2g 
    
    

    Users can check the default property values or the values after loading the custom property file as follows

    spark.sparkContext.getConf().getAll()
    

    Further Reading

    See Also

    Supercomputer: 
    Service: 
    Fields of Science: 

    Stata

    Stata is a complete, integrated statistical package that provides everything needed for data analysis, data management, and graphics. The 32-processor MP version is currently available at OSC.

    Availability and Restrictions

    Versions

    The following versions of Stata are available on OSC systems:

    Version Owens
    15 X*
    17 X
    * Current default version

    You can use module spider stata to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Only academic OSC users can use the software. OSC has the license for 5 seats concurrently. Each user can use up to 32 cores. In order to access the software, please contact OSC Help to get validated.

    Publisher/Vendor/Repository and License Type

    StataCorp, LLC, Commercial

    Usage

    Set-up

    To configure your environment on Owens for the usage of Stata, run the following command:

    module load stata

    Using Stata

    Due to licensing restrictions, Stata may ONLY be used via the batch system on Owens. See below for information on how this is done.

    Batch Usage

    OSC has a 5-user license. However, there is no enforcement mechanism built into Stata. In order for us to stay within the 5-user limit, we require you to run in the context of Slurm and to include this option when starting your batch job (the Slurm system will enforce the 5 user limit):

    #SBATCH -L stata@osc:1

    Non-Interactive batch example

    Use the script below as a template for your usage.

    #!/bin/bash
    #SBATCH -t 1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH -L stata@osc:1
    #SBATCH --job-name=stata
    
    module load stata
    
    stata-mp -b do bigjob
    

     

     

    Further Reading

    See Also

    Supercomputer: 
    Service: 

    StringTie

    StringTie assembles aligned RNA-Seq reads into transcripts that represent splice variants in RNA-Seq samples.

    Availability and Restrictions

    Versions

    StringTie is available on the Owens Cluster. The versions currently available at OSC are:

    Version Owens
    1.3.3b X*
    * Current Default Version

    You can use module spider stringtie to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    StringTie is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    https://ccb.jhu.edu/software/stringtie/, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of StringTie, use the command module load stringtie. This will load the default version.
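
    For illustration, a minimal assembly command is shown below; the sorted BAM and reference annotation are placeholders for your own files.

    module load stringtie
    stringtie sample.sorted.bam -G annotation.gtf -o sample.transcripts.gtf -p 8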

    Further Reading

    Supercomputer: 
    Fields of Science: 

    Subread

    The Subread package comprises a suite of software programs for processing next-gen sequencing read data like Subread, Subjunc, featureCounts, and exactSNP.

    Availability and Restrictions

    Versions

    The following versions of Subread are available on OSC clusters:

    Version Owens
    1.5.0-p2 X*
    2.0.6 X
    * Current default version

    You can use  module spider subread to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Subread is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    http://subread.sourceforge.net, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Subread, run the following command: module load subread. The default version will be loaded. To select a particular Subread version, use module load subread/version. For example, use module load subread/1.5.0-p2 to load Subread 1.5.0-p2.
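
    For illustration, counting reads per gene with featureCounts might look like the following; the annotation and BAM files are placeholders.

    module load subread
    featureCounts -T 4 -a annotation.gtf -o gene_counts.txt sample.sorted.bam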

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Subversion

    Apache Subversion is a full-featured version control system. 

    Availability and Restrictions

    Versions

    The following versions of Subversion are available on OSC systems: 

    Version Owens
    1.8.19 X*
    * Current default version

    You can use module spider subversion to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Subversion is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Apache Software Foundation, Open Source, Apache License

    Usage

    Set-up

    The system-installed version, 1.7.14, is available as soon as you log in. To use other versions, e.g. 1.8.19, run the following command:

    module load subversion/1.8.19
    

    Further Reading

     

    Tag: 
    Supercomputer: 
    Service: 
    Fields of Science: 

    SuiteSparse

    SuiteSparse is a suite of sparse matrix algorithms, including UMFPACK (multifrontal LU factorization), CHOLMOD (supernodal Cholesky, with CUDA acceleration), SPQR (multifrontal QR), and many other packages.

    Availability and Restrictions

    Versions

    OSC supports most packages in SuiteSparse, including UMFPACK, CHOLMOD, SPQR, KLU and BTF, Ordering Methods (AMD, CAMD, COLAMD, and CCOLAMD) and CSparse. SuiteSparse modules are available for the Intel, GNU, and Portland Group compilers. The following versions of SuiteSparse are available at OSC.

    Version Owens
    4.5.3 X*
    * Current default version

    You can use module spider suitesparse to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    NOTE: SuiteSparse library on our clusters is built without METIS, which might matter if CHOLMOD package is included in your program.

    Access

    SuiteSparse is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Timothy A. Davis, Patrick R. Amestoy, and Iain S. Duff./ Open source

    Usage

    Usage on Owens

    Set-up on Owens

    To use SuiteSparse, ensure the correct compiler is loaded. Use module spider suitesparse/version to view compatible compilers. MKL is also required before loading the SuiteSparse library; load it with module load mkl. Then the SuiteSparse library is ready to be used with the following command: module load suitesparse

    Building With SuiteSparse

    With the SuiteSparse library loaded, the following environment variables will be available for use:

    Variable Use
    $SUITESPARSE_CFLAGS Include flags for C or C++ programs.
    $SUITESPARSE_LIBS Use when linking your program to SuiteSparse library.

    For example, to build the code my_prog.c with the SuiteSparse library you would use:

    icc -c my_prog.c
    icc -o my_prog my_prog.o $SUITESPARSE_LIBS
    

    Batch Usage on Owens

    When you log into owens.osc.edu you are actually logged into a linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info.  Batch jobs run on the compute nodes of the system and not on the login node. It is desirable for big problems since more resources can be used.

    Non-interactive Batch Job (Serial Run)

    A batch script can be created and submitted for a serial or parallel run. You can create the batch script using any text editor you like in a working directory on the system of your choice. You must load the SuiteSparse module in your batch script before executing a program which is built with the SuiteSparse library. Below is an example batch script that executes a program built with SuiteSparse:

    #!/bin/bash
    #SBATCH --job-name MyProgJob
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --account <project-account>
    
    module load gnu/4.8.4
    module load mkl
    module load suitesparse
    
    cp foo.dat $TMPDIR
    cd $TMPDIR
    my_prog < foo.dat > foo.out
    cp foo.out $SLURM_SUBMIT_DIR
    

    Further Reading

    See Also

    Supercomputer: 
    Service: 

    TAU Commander

    TAU Commander is a user interface for the TAU Performance System, a set of tools for analyzing the performance of parallel programs. 

    Availability and Restrictions

    Versions

    TAU Commander is available on the Owens and Pitzer clusters. The versions currently available at OSC are:

    Version Owens Pitzer
    1.2.1 X  
    1.3.0 X* X*
    * Current default version

    You can use  module spider taucmdr to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    TAU Commander is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    ParaTools, Inc., Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of TAU Commander, use the command module load taucmdr. This will load the default version.

    Creating a project

    The first step to use TAU Commander on your code is to create and configure a project. To create a project, use the command tau initialize. Additional options for compilers, MPI libraries, measurements, etc. are available.

    For instance, to configure for Intel compilers use the command tau initialize --compilers Intel and to configure for MPI use tau initialize --mpi.

    For more details about how to initialize your project use the command tau help initialize.

    After creating the project you should see a dashboard for your project with a target, an application, and 3 default measurements. You can now create additional measurements or modify the application and target. See the TAU Commander user guide for more information about how to configure your project.

    Compiling and Running your Code

    To compile your code to run with TAU Commander, just add tau before the compiler. For instance, if you compile with gcc, compile with tau gcc instead. Similarly, when you run your code, add tau before the run command. So, if you usually run with srun -N 2 -n 4 ./my_prog, run with tau srun -N 2 -n 4 ./my_prog. Each time the program is run with tau prepended, a new trial is created in the project with performance data for that run.
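
    A compact sketch of the whole workflow is shown below; my_prog.c, the mpicc wrapper, and the resource numbers are placeholders for your own code, build command, and job request:

    module load taucmdr
    tau initialize --mpi             # create and configure a project for MPI
    tau mpicc -o my_prog my_prog.c   # prefix your usual compile command with tau
    tau srun -N 2 -n 4 ./my_prog     # prefix your usual run command with tau; each run records a trial
    tau trial show <trial_number>    # view the performance data for a trial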

    Post-processing

    Once you have generated performance data with TAU Commander you have 3 options to view the trial performance data:
    1. view the data in text format (not always available),
    2. view the data in a GUI using an OnDemand VDI (Virtual Desktop Interface) or X11 forwarding enabled,
    3. or export the data to your local machine (requires minimal installation of TAU commander on your local machine).
    To view the data:
    tau trial show trial_number

    To export the data:

    tau trial export trial_number

    Usage on Pitzer

    Set-up

    To configure your environment for use of TAU Commander, use the command module load taucmdr. This will load the default version.

    Creating a project

    The first step to use TAU Commander on your code is to create and configure a project. To create a project, use the command tau initialize. Additional options for compilers, MPI libraries, measurements, etc. are available.

    For instance, to configure for Intel compilers use the command tau initialize --compilers Intel and to configure for MPI use tau initialize --mpi.

    For more details about how to initialize your project use the command tau help initialize.

    After creating the project you should see a dashboard for your project with a target, an application, and 3 default measurements. You can now create additional measurements or modify the application and target. See the TAU Commander user guide for more information about how to configure your project.

    Compiling and Running your Code

    To compile your code to run with TAU Commander, just add tau before the compiler. For instance, if you compile with gcc, compile with tau gcc instead. Similarly, when you run your code, add tau before the run command. So, if you usually run with srun -N 2 -n 4 ./my_prog, run with tau srun -N 2 -n 4 -- ./my_prog. Each time the program is run with tau prepended, a new trial is created in the project with performance data for that run. See man srun or the srun documentation for information on the arguments used above.

    Post-processing

    Once you have generated performance data with TAU Commander you have 3 options to view the trial performance data:
    1. view the data in text format (not always available),
    2. view the data in a GUI using an OnDemand VDI (Virtual Desktop Interface) or X11 forwarding enabled,
    3. or export the data to your local machine (requires minimal installation of TAU commander on your local machine).
    To view the data:
    tau trial show trial_number

    To export the data:

    tau trial export trial_number

    Further Reading

    Supercomputer: 

    TensorFlow

    "TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code."

    Quote from TensorFlow Github documentation

    Availability and Restrictions

    Versions

    The following versions of TensorFlow are available on OSC clusters:

    Version Owens Pitzer Note CUDA version compatibility
    1.3.0 X   python/3.6 8 or later
    1.9.0 X* X* python/3.6-conda5.2 9 or later
    2.0.0 X X python/3.7-2019.10 10.0 or later
     

    TensorFlow is a Python package and therefore requires loading the corresponding python module (see Note). The installed version of TensorFlow may change with updates to Anaconda Python on Owens; you can check the latest version with conda list tensorflow. The available versions of TensorFlow on Owens and Pitzer require CUDA for GPU calculations. You can find and load a compatible cuda module via

    module load python/3.6-conda5.2
    module spider cuda
    module load cuda/9.2.88
    

    If you would like to use a different version of TensorFlow, please follow this installation guide which describes how to install python packages locally. 

    https://www.osc.edu/resources/getting_started/howto/howto_install_tensorflow_locally

    Newer versions of TensorFlow might require a newer version of CUDA. Please refer to https://www.tensorflow.org/install/source#gpu for an up-to-date compatibility chart.

    Feel free to contact OSC Help if you have any issues with installation.

    Access 

    TensorFlow is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    https://www.tensorflow.org, Open source

    Usage

    Usage on Owens

    Setup on Owens

    The TensorFlow package is installed with Anaconda Python.  To configure the Owens cluster for the use of TensorFlow, use the following commands:

    module load python/3.6 cuda/8.0.44
    

    Batch Usage on Owens

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations for Owens, and Scheduling Policies and Limits for more info.  In particular, TensorFlow should be run on a GPU-enabled compute node.

    An Example of Using  TensorFlow with MNIST model and Logistic Regression

    Below is an example batch script (job.txt and logistic_regression_on_mnist.py) for using TensorFlow.

    Contents of job.txt

    #!/bin/bash
    #SBATCH --job-name ExampleJob
    #SBATCH --nodes=1 --ntasks-per-node=28 --gpus-per-node=1
    #SBATCH --time=01:00:00
     
    
    cd $SLURM_SUBMIT_DIR
    
    module load python/3.6 cuda/8.0.44
    python logistic_regression_on_mnist.py

    Contents of logistic_regression_on_mnist.py

    # logistic_regression_on_mnist.py Python script based on:
    # https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/0_Prerequisite/mnist_dataset_intro.ipynb
    # https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/2_BasicModels/logistic_regression.ipynb
    
    import tensorflow as tf
    
    # Import MNIST
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("data/", one_hot=True)
    
    # Parameters
    learning_rate = 0.01
    training_epochs = 25
    batch_size = 100
    display_step = 1
    
    # tf Graph Input
    x = tf.placeholder(tf.float32, [None, 784]) # mnist data image of shape 28*28=784
    y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes
    
    # Set model weights
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    
    # Construct model
    pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
    
    # Minimize error using cross entropy
    cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
    # Gradient Descent
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
    
    # Initializing the variables
    init = tf.global_variables_initializer()
    # Launch the graph
    with tf.Session() as sess:
        sess.run(init)
    
        # Training cycle
        for epoch in range(training_epochs):
            avg_cost = 0.
            total_batch = int(mnist.train.num_examples/batch_size)
            # Loop over all batches
            for i in range(total_batch):
                batch_xs, batch_ys = mnist.train.next_batch(batch_size)
                # Fit training using batch data
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs,
                                                              y: batch_ys})
                # Compute average loss
                avg_cost += c / total_batch
            # Display logs per epoch step
            if (epoch+1) % display_step == 0:
                print ("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
    
        print ("Optimization Finished!")
    
        # Test model
        correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
        # Calculate accuracy for 3000 examples
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        print ("Accuracy:", accuracy.eval({x: mnist.test.images[:3000], y: mnist.test.labels[:3000]}))

    In order to run it via the batch system, submit the job.txt  file with the following command:

    sbatch job.txt
    

     

    Distributed TensorFlow

    TensorFlow can be configured to run in parallel across multiple nodes using the Horovod package from Uber.
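
    As a rough sketch (assuming Horovod has been installed in your Python environment; it is not part of the TensorFlow module itself), a script is adapted for Horovod along these lines:

    # minimal Horovod initialization sketch; see the Horovod documentation for full examples
    import tensorflow as tf
    import horovod.tensorflow as hvd
    
    hvd.init()                                      # one process per GPU, launched with mpiexec/srun
    print("rank", hvd.rank(), "of", hvd.size())     # each rank reports its id
    # wrap your optimizer so gradients are averaged across ranks, e.g.:
    # optimizer = hvd.DistributedOptimizer(optimizer)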

    Further Reading

    TensorFlow homepage

     
    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Texlive

    TeX Live is a straightforward way to get up and running with the TeX document production system. It provides a comprehensive TeX system with binaries for most flavors of Unix, including GNU/Linux, macOS, and also Windows. It includes all the major TeX-related programs, macro packages, and fonts that are free software, including support for many languages around the world.

    Availability and Restrictions

    Versions

    The following versions are available on OSC clusters:

    Version Owens Pitzer
    2018 X* X*
    2021 X X
    * Current default version

    You can use module spider texlive to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Texlive is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Per the TeX Live licensing, copying, and redistribution webpage, all the material in TeX Live may be freely used, copied, modified, and/or redistributed, subject to (in many cases) the sources remaining freely available.

    Please visit this link for full licensing/copyright information.

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Texlive, run the following command: module load texlive. The default version will be loaded. To select a particular Texlive version, use module load texlive/version. For example, use module load texlive/2021 to load Texlive 2021.
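
    For example, a typical session compiles a document with one of the TeX Live engines (mydoc.tex is a placeholder for your own file):

    module load texlive
    pdflatex mydoc.tex    # produces mydoc.pdf in the current directory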

    Usage on Pitzer

    Set-up

    To configure your environment for use of Texlive, run the following command: module load texlive. The default version will be loaded. To select a particular Texlive version, use module load texlive/version. For example, use module load texlive/2021 to load Texlive 2021.

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Tinker

    Tinker is a molecular modeling package. Tinker provides a general set of tools for molecular mechanics and molecular dynamics.

    Availability and Restrictions

    Versions

    Tinker is currently available on Owens and Pitzer. The versions currently available at OSC are:

    Version Owens Pitzer
    8.10.5 X* X*
    * Current default version

    You can use module spider tinker to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Tinker is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Tinker Core Development Consortium

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Tinker, you first need to load the correct compiler. Use module spider tinker to see the compatible compilers. Then load a compatible compiler by running module load compiler/version.

    Then use the command module load tinker. This will load the default version of Tinker. To select a particular version, use module load tinker/version .

     

    For example, execute module load intel/2021.3.0 then module load tinker/8.10.5 to load Tinker version 8.10.5 on Owens.

    Usage on Pitzer

    Set-up

    To configure your environment for use of Tinker, you first need to load the correct compiler. Use module spider tinker to see the compatible compilers. Then load a compatible compiler by running module load compiler/version.

    Then use the command module load tinker. This will load the default version of Tinker. To select a particular version, use module load tinker/version .

     

    For example, execute module load intel/2021.3.0 then module load tinker/8.10.5 to load Tinker version 8.10.5 on Pitzer.

     

    Further Reading

    Supercomputer: 

    TopHat

    TopHat uses Bowtie, a high-throughput short read aligner, to analyze the mapping results for RNA-Seq reads and identify splice junctions.

     

    Please note that TopHat (and Bowtie) cannot run in parallel, that is, on multiple nodes. Submitting multi-node jobs will only waste resources. In addition, you must explicitly include the '-p' option to use multiple threads on a single node.

    Availability and Restrictions

    Versions

    TopHat is available on the Owens Cluster. The versions currently available at OSC are:

    Version Owens
    2.1.1 X*
    * Current default version

    You can use module spider tophat to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    TopHat is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    http://ccb.jhu.edu/software/tophat, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of TopHat, use the command module load tophat. This will load the default version.
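
    A typical invocation looks like the following sketch; the read files and the Bowtie index prefix are placeholders, and a matching Bowtie index must already be available in your environment:

    module load tophat
    # align paired-end reads with 8 threads on a single node
    tophat -p 8 -o tophat_out genome_index reads_1.fastq reads_2.fastq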

    Further Reading

    Supercomputer: 
    Fields of Science: 

    Torch

    "Torch is a deep learning framework with wide support for machine learning algorithms. It's open-source, simple to use, and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C / CUDA implementation. Torch offers popular neural network and optimization libraries that are easy to use, yet provide maximum flexibility to build complex neural network topologies. It also runs up to 70% faster on the latest NVIDIA Pascal™ GPUs, so you can now train networks in hours, instead of days."

    Quote from Torch documentation.

    Availability and Restrictions

    Versions

    The following version of Torch is available on OSC cluster:

    Version Owens
    7 X*
    * Current default version

    You can use module spider torch to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    The current version of Torch on Owens requires cuda/8.0.44 and CUDNN v5 for GPU calculations.

    Access 

    Torch is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Soumith Chintala, Ronan Collobert, Koray Kavukcuoglu, Clement Farabet/ Open source

    Usage

    Usage on Owens

    Setup on Owens

    To configure the Owens cluster for the use of Torch, use the following commands:

    module load torch
    

    Batch Usage on Owens

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations for Owens, and Scheduling Policies and Limits for more info.  In particular, Torch should be run on a GPU-enabled compute node.

    An Example of Using Torch with CIFAR10 Training Data on Owens

    Below is an example batch script (job.txt) for using Torch. Please see the reference https://github.com/szagoruyko/cifar.torch for more details.

    #!/bin/bash
    #SBATCH --job-name=Torch
    #SBATCH --nodes=1 --ntasks-per-node=28 --gpus=1
    #SBATCH --time=00:30:00
    #SBATCH --account <project-account>
    
    # Load the torch module
    module load torch
    # Migrate to job temp directory 
    cd $TMPDIR
    # Clone sample data and scripts
    git clone https://github.com/szagoruyko/cifar.torch.git .
    # Run the image preprocessing (not necessary for subsequent runs, just re-use provider.t7)
    OMP_NUM_THREADS=28 th -i provider.lua <<Input
    provider = Provider()
    provider:normalize()
    torch.save('provider.t7',provider)
    exit
    y
    Input
    # Run the torch training
    th train.lua --backend cudnn
    # Copy results from job temp directory
    cp -a * $SLURM_SUBMIT_DIR
    

    In order to run it via the batch system, submit the job.txt file with the following command:

    sbatch job.txt
    

    Further Reading

    Supercomputer: 
    Service: 
    Technologies: 
    Fields of Science: 

    Transmission3d

    Transmission3d is a 3-dimensional, multi-body gear contact analysis software capable of modeling complex gear systems developed by Ansol (Advanced Numeric Solutions). Multiple gear types, including: Helical, Straight Bevel, Spiral Bevel, Hypoids, Beveloids and Worms can be modeled. Multiple bearing types, as well as complex shafts, carriers and housings can also be modeled with the software. A variety of output data options including tooth bending stress, contact patterns, and displacement are also available.

    Availability and Restrictions

    Versions

    The following versions of Transmission3D are available on OSC systems: 

    Version Owens
    6724 X*
    * Current default version

    Access

    Contact OSC Help and Ansol sales to get access to Transmission3D.

    Publisher/Vendor/Repository and License Type

    Ansol, Commercial

    Further Reading

     

    Supercomputer: 
    Service: 
    Fields of Science: 

    TrimGalore

    TrimGalore is a wrapper tool that automates quality and adapter trimming of FastQ files. It also provides additional functionality for RRBS sequence files.

    Availability and Restrictions

    Versions

    TrimGalore is available on the Owens cluster. The versions currently available at OSC are:

    Version Owens
    0.4.5 X*
    * Current default version

    You can use module spider trimgalore to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    TrimGalore is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The Babraham Institute, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of TrimGalore, use the command module load trimgalore. This will load the default version.

    Further Reading

    Supercomputer: 
    Fields of Science: 

    Trimmomatic

    Trimmomatic performs a variety of useful trimming tasks for Illumina paired-end and single-end data. The selection of trimming steps and their associated parameters are supplied on the command line.

    Availability and Restrictions

    Versions

    The following versions of Trimmomatic are available on OSC clusters:

    Version Owens Pitzer
    0.36 X*  
    0.38   X*
    * Current default version

    You can use  module spider trimmomatic to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Trimmomatic is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    http://www.usadellab.org/cms/?page=trimmomatic, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of Trimmomatic, run the following command: module load trimmomatic. The default version will be loaded. To select a particular Trimmomatic version, use module load trimmomatic/version. For example, use module load trimmomatic/0.36 to load Trimmomatic 0.36.

    Usage

    This software is provided as a Java executable (.jar) file; thus, it cannot simply be added to the PATH environment variable.

    From module load trimmomatic, a new environment variable, TRIMMOMATIC, will be set.

    Thus, users can use the software by running the following command: java -jar $TRIMMOMATIC {other options}.
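
    For instance, a minimal single-end quality-trimming run might look like the following sketch (input.fastq and the trimming parameters are placeholders; see the Trimmomatic manual for the full list of trimming steps):

    module load trimmomatic
    java -jar $TRIMMOMATIC SE -phred33 input.fastq output_trimmed.fastq \
        LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36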

    Usage on Pitzer

    Set-up

    To configure your environment for use of Trimmomatic, run the following command: module load trimmomatic. The default version will be loaded. To select a particular Trimmomatic version, use module load trimmomatic/version. For example, use module load trimmomatic/0.38 to load Trimmomatic 0.38.

    Usage

    This software is provided as a Java executable (.jar) file; thus, it cannot simply be added to the PATH environment variable.

    From module load trimmomatic, a new environment variable, TRIMMOMATIC, will be set.

    Thus, users can use the software by running the following command: java -jar $TRIMMOMATIC {other options}.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Trinity

    Trinity represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data.

    Availability and Restrictions

    The following versions of Trinity are available on OSC clusters:

    Version Owens Pitzer
    2.11.0 X X
    2.15.1   X
    * Current default version

    You can use  module spider trinityrnaseq to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Trinity is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Broad Institute and the Hebrew University of Jerusalem, Open source

    Usage

    Usage on Pitzer and Owens

    Set-up

    To configure your environment for use of Trinity, run the following command: module load trinityrnaseq. The default version will be loaded. To select a particular Trinity version, use module load trinityrnaseq/version. For example, use module load trinityrnaseq/2.11.0 to load Trinity 2.11.0.
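
    A typical paired-end assembly command looks like the sketch below; the file names, CPU count, and memory limit are placeholders to adjust for your data and job request:

    module load trinityrnaseq
    Trinity --seqType fq --left reads_1.fq --right reads_2.fq \
            --CPU 8 --max_memory 20G --output trinity_out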

    Further Reading

     
    Supercomputer: 
    Service: 
    Fields of Science: 

    TurboVNC

    TurboVNC is an implementation of VNC optimized for 3D graphics rendering.  Like other VNC software, TurboVNC can be used to create a virtual desktop on a remote machine, which can be useful for visualizing CPU-intensive graphics produced remotely.

    Availability and Restrictions

    Versions

    The versions currently available at OSC are:

    Version Owens Pitzer Notes
    2.0.91 X    
    2.1.1 X   Must load intel compiler, version 16.0.3 for Owens
    2.1.90 X* X*  
    * Current default version

    NOTE:

    • [1] -- TurboVNC 1.1's version of vncviewer does not work on Oakley.  Use the 1.2 module's version.
    • [2] -- TurboVNC 1.2's version of vncserver does not work on Oakley.  Use the 1.1 module's version.
    • To simplify vncviewer and vncserver incompatibility with prior 1.X versions, a new version of TurboVNC 2.0 is available as of 10/30/2015 on Oakley and Ruby clusters.
    • A version of TurboVNC 2.0.91 was installed in September 2016 to be uniformly available on all OSC clusters.

    You can use  module spider turbovnc to view available modules for a given cluster. Feel free to contact OSC Help  if you need other versions for your work.

    Access

    TurboVNC is available for use by all OSC users.

    Publisher/Vendor/Repository and License Type

    https://www.turbovnc.org, Open source

    Usage

    Usage on Owens

    Setup on Owens

    To load the default version of the TurboVNC module, use module load turbovnc. To select a particular software version, use module load turbovnc/version. For example, use module load turbovnc/2.1.1 to load TurboVNC version 2.1.1 on Owens. 

    Please do not SSH directly to compute nodes and start VNC sessions! This will negatively impact other users (even if you have been assigned a node via the batch scheduler), and we will consider repeated occurrences an abuse of the resources. If you need to use VNC on a compute node, please see our HOWTO for instructions.

    Using TurboVNC

    To start a VNC server on your current host, use the following command:

    vncserver  
    

    After starting the VNC server you should see output similar to the following:  

    New 'X' desktop is hostname:display
    Starting applications specified in /nfs/nn/yourusername/.vnc/xstartup.turbovnc
    Log file is /nfs/nn/yourusername/.vnc/hostname:display.log
    

    Make a note of the hostname and display number ("hostname:display"), because you will need this information later in order to connect to the running VNC server.  

    To establish a standard unencrypted connection to an already running VNC server, X11 forwarding must first be enabled in your SSH connection.  This can usually either be done by changing the preferences or settings in your SSH client software application, or by using the -X or -Y option on your ssh command.     

    Once you are certain that X11 forwarding is enabled, create your VNC desktop using the vncviewer command in a new shell.

    vncviewer
    

    You will be prompted by a dialogue box asking for the VNC server you wish to connect to.  Enter "hostname:display".  

    You may then be prompted for your HPC password.  Once the password has been entered your VNC desktop should appear, where you should see all of your home directory contents. 

    When you are finished with your work on the VNC desktop, you should make sure to close the desktop and kill the VNC server that was originally started.  The VNC server can be killed using the following command in the shell where the VNC server was originally started:

    vncserver -kill :[display]
    

    For a full explanation of each of the previous commands, type man vncserver or man vncviewer at the command line to view the online manual.

    Usage on Pitzer

    Setup on Pitzer

    To load the default version of TurboVNC module, use module load turbovnc

    Please do not SSH directly to compute nodes and start VNC sessions! This will negatively impact other users (even if you have been assigned a node via the batch scheduler), and we will consider repeated occurrences an abuse of the resources. If you need to use VNC on a compute node, please see our HOWTO for instructions.

    Using TurboVNC

    To start a VNC server on your current host, use the following command:

    vncserver  
    

    After starting the VNC server you should see output similar to the following:  

    New 'X' desktop is hostname:display
    Starting applications specified in /nfs/nn/yourusername/.vnc/xstartup.turbovnc
    Log file is /nfs/nn/yourusername/.vnc/hostname:display.log
    

    Make a note of the hostname and display number ("hostname:display"), because you will need this information later in order to connect to the running VNC server.  

    To establish a standard unencrypted connection to an already running VNC server, X11 forwarding must first be enabled in your SSH connection.  This can usually either be done by changing the preferences or settings in your SSH client software application, or by using the -X or -Y option on your ssh command.     

    Once you are certain that X11 forwarding is enabled, create your VNC desktop using the vncviewer command in a new shell.

    vncviewer
    

    You will be prompted by a dialogue box asking for the VNC server you wish to connect to.  Enter "hostname:display".  

    You may then be prompted for your HPC password.  Once the password has been entered your VNC desktop should appear, where you should see all of your home directory contents. 

    When you are finished with your work on the VNC desktop, you should make sure to close the desktop and kill the VNC server that was originally started.  The VNC server can be killed using the following command in the shell where the VNC server was originally started:

    vncserver -kill :[display]
    

    For a full explanation of each of the previous commands, type man vncserver or man vncviewer at the command line to view the online manual.

    Further Reading

    Additional information about TurboVNC can be found at the VirtualGL Project's documentation page.  

    See Also

    Supercomputer: 
    Service: 
    Fields of Science: 

    Turbomole

    TURBOMOLE is an ab initio computational chemistry program that implements various quantum chemistry algorithms. It is focused on efficiency, notably using the resolution of the identity (RI) approximation.

    Availability and Restrictions

    Versions

    These versions are currently available (S means serial executables, O means OpenMP executables, and P means parallel MPI executables):

    Version Owens Pitzer
    7.1 SOP  
    7.2.1 SOP*  
    7.3   SOP*
    * Current default version

    You can use module spider turbomole to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users

    Use of Turbomole for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction. 

    Publisher/Vendor/Repository and License Type

    COSMOlogic, Commercial

    Usage

    Usage on Owens and Pitzer

    Set-up on Owens

    To load the default version of Turbomole module on Owens, use module load turbomole for both serial and parallel programs. To select a particular software version, use module load turbomole/version. For example, use   module load turbomole/7.1 to load Turbomole version 7.1 for both serial and parallel programs on Owens. 

    Using Turbomole on Owens

    To execute a turbomole program:

    module load turbomole
    <turbomole command>
    

    Batch Usage on Owens

    When you log into owens.osc.edu you are actually logged into a Linux box referred to as the login node. To gain access to the multiple processors in the computing environment, you must submit your job to the batch system for execution. Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Batch Limit Rules for more info. Batch jobs run on the compute nodes of the system and not on the login node, which is desirable for big problems since more resources can be used.

    Interactive Batch Session

    For an interactive batch session one can run the following command:

    salloc --nodes=1 --ntasks-per-node=28 --time=00:20:00
    

    which requests one whole node with 28 cores (--nodes=1 --ntasks-per-node=28), for a walltime of 20 minutes (--time=00:20:00). You may adjust the numbers per your need.

    Sample batch scripts and input files are available here:

    ~srb/workshops/compchem/turbomole/
    

    Note for Slurm job script

    Upon Slurm migration, the presets for parallel jobs are not compatible with the Slurm environment on Pitzer. Users must set up the parallel environment explicitly to get the correct TURBOMOLE binaries. 

    To set up a MPI case, add the following to a job script:

    export PARA_ARCH=MPI
    export PATH=$TURBODIR/bin/`sysname`:$PATH
    

    An example script:

    #!/bin/bash
    #SBATCH --job-name="turbomole_mpi_job"
    #SBATCH --nodes=2
    #SBATCH --time=0:10:0
    
    module load intel
    module load turbomole/7.3
    
    export PARA_ARCH=MPI
    export PATH=$TURBODIR/bin/`sysname`:$PATH
    export PARNODES=$SLURM_NTASKS
    
    dscf
    
    

    To set up an SMP (OpenMP) case, add the following to a job script:

    export PARA_ARCH=SMP
    export PATH=$TURBODIR/bin/`sysname`:$PATH

    An example script to run a SMP job on an exclusive node:

    #!/bin/bash
    #SBATCH --job-name="turbomole_smp_job"
    #SBATCH --nodes=1
    #SBATCH --exclusive
    #SBATCH --time=0:10:0
    
    module load intel
    module load turbomole/7.3
    
    export PARA_ARCH=SMP
    export PATH=$TURBODIR/bin/`sysname`:$PATH
    export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE
    
    dscf
    

    Further Reading

     
    Supercomputer: 
    Service: 

    USEARCH

    USEARCH is a sequence analysis tool that offers high-throughput search and clustering algorithms to analyze data.

    Availability and Restrictions

    Versions

    USEARCH is available on the Owens cluster. The versions currently available at OSC are:

    Version Owens
    10.0.240 X*
    * Current Default Version

    You can use module spider usearch to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    USEARCH is available to all academic OSC users.

    Publisher/Vendor/Repository and License Type

    drive5, Commercial

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of USEARCH, use command module load usearch. This will load the default version.

    Using USEARCH

    Due to licensing restrictions, USEARCH may ONLY be used via the batch system on Owens. See below for information on how this is done.

    Batch Usage

    OSC has a 1-user license for USEARCH. However, there is no enforcement mechanism. In order for us to stay within the 1-user limit, we require you to run in the context of SLURM and to include this option when starting your batch job (the SLURM system will enforce the 1 user limit):

    #SBATCH -L usearch@osc:1
    
    Non-interactive Batch Job

    Use the script below as a template for your usage.

    #!/bin/bash
    #SBATCH -t 1:00:00
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH -L usearch@osc:1
    #SBATCH --job-name=usearch
    
    module load usearch
    # sample usearch command
    usearch -cluster_fast data.fa -id 0.9 -centroids output.fa
    

    Further Reading

    Supercomputer: 

    Unblur

    Unblur is used to align the frames of movies recorded on an electron microscope to reduce image blurring due to beam-induced motion. It reads stacks of movies that are stored in MRC/CCP4 format. Unblur generates frame sums that can be used in subsequent image processing steps and optionally applies an exposure-dependent filter to maximize the signal at all resolutions in the frame averages. Movie frame sums can also be calculated using Summovie, which uses the alignment results from a prior run of Unblur.

    Availability & Restrictions

    Versions

    The following version of Unblur is available on OSC systems:

    Version Pitzer
    1.0.2 X*
    * Current default version

     

    You can use module spider unblur to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Unblur is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    This software is subject to Janelia Farm Research Campus Software Copyright 1.1. Full details of this license can be found using this link.

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of Unblur, run the following command: module load unblur. The default version will be loaded.

    To select a particular Unblur version, use module load unblur/version. For example, use module load unblur/1.0.2 to load Unblur 1.0.2.

    Further Reading

     

    Supercomputer: 
    Service: 

    VASP

    The Vienna Ab initio Simulation Package, VASP, is a suite for quantum-mechanical molecular dynamics (MD) simulations and electronic structure calculations.

    Availability and Restrictions

    Access

    Due to licensing considerations, OSC does not provide general access to this software.

    However, we are available to assist with the configuration of individual research-group installations on all our clusters. See the VASP FAQ page for information regarding licensing.

    Usage

    Using VASP

    See the VASP documentation page for tutorial and workshop materials.

    Building and Running VASP

    If you have a VASP license you may build and run VASP on any OSC cluster. The instructions given here are for VASP 5.4.1; newer version 5 releases should be similar, and we have several reports that these instructions also worked for VASP 6.3.2.

    Most VASP users at OSC run VASP with MPI and without multithreading. If you need assistance with a different configuration, please contact oschelp@osc.edu.  Note that we recommend submitting a batch job for testing because running parallel applications from a login node is problematic.

    You can build and run VASP using either IntelMPI or MVAPICH2. Performance is similar for the two MPI families. Instructions are given for both. The IntelMPI build is simpler and more standard. MVAPICH2 is the default MPI installation at OSC; however, VASP had failures with some prior versions, so building with the newest MVAPICH2, in particular 2.3.2 or newer, is recommended.

    Build instructions assume that you have already unpacked the VASP distribution and patched it if necessary and are working in the vasp directory. It also assumes that you have the default module environment loaded at the start.

    Building with IntelMPI

    1. Copy arch/makefile.include.linux_intel and rename it makefile.include.

    2. Edit makefile.include to replace the two lines

    OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
    $(MKLROOT)/interfaces/fftw3xf/libfftw3xf_intel.a

    with one line

    OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
    

    3. Make sure the FCL line is

    FCL = mpiifort -mkl=sequential
    

    4. Load modules and build the code (using the latest IntelMPI may yield the best performance; for VASP 5.4.1 the modules were intel/19.0.5 and intelmpi/2019.3 as of October 2019)

    module load intelmpi
    make
    

    5. Add the modules used for the build, e.g., module load intelmpi, to your job script.

    Building with MVAPICH2

    1. Copy arch/makefile.include.linux_intel and rename it makefile.include.

    2. Edit makefile.include to replace mpiifort with mpif90

    FC         = mpif90
    FCL        = mpif90 -mkl=sequential
    

    3. Replace the BLACS, SCALAPACK, OBJECTS, INCS and LLIBS lines with

    BLACS      =
    SCALAPACK  = $(SCALAPACK_LIBS)
    
    OBJECTS    = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
    INCS       = $(FFTW3_FFLAGS)
    
    LLIBS      = $(SCALAPACK) $(FFTW3_LIBS_MPI) $(LAPACK) $(BLAS)

    4. Load modules and build the code (using the latest MVAPICH2 is recommended; for VASP 5.4.1 the modules were intel/19.0.5 and mvapich2/2.3.2 as of October 2019)

    module load scalapack
    module load fftw3
    make

    5. Add the modules used for the build, e.g., module load scalapack fftw3, to your job script.

    Building for GPUs

    The "GPU Stuff" section in arch/makefile.include.linux_intel_cuda is generic.  It can be updated for OSC clusters using the environment variables defined by a cuda module.  The OSC_CUDA_ARCH environment variables defined by cuda modules on all clusters show the specific CUDA compute capabilities.  Below we have combined them as of February 2023 so that the resulting executable will run on any OSC cluster.  In addition to the instructions above, here are the specific CUDA changes and the commands for building a gpu executable.

    Edits:

    CUDA_ROOT         = $(CUDA_HOME)
    GENCODE_ARCH      = -gencode=arch=compute_35,code=\"sm_35,compute_35\" \
                        -gencode=arch=compute_60,code=\"sm_60,compute_60\" \
                        -gencode=arch=compute_70,code=\"sm_70,compute_70\" \
                        -gencode=arch=compute_80,code=\"sm_80,compute_80\"
    

    Commands:

    module load cuda
    make gpu

    See this VASP Manual page and this NVIDIA page for reference.

    Running VASP generally

    Be sure to load the appropriate modules in your job script based on your build configuration, as indicated above. If you have built with -mkl=sequential you should be able to run VASP as follows:

    mpiexec path_to_vasp/vasp_std

    If you have a problem with too many threads you may need to add this line (or equivalent) near the top of your script:

    export OMP_NUM_THREADS=1
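
    Putting these pieces together, a minimal job-script sketch for an IntelMPI build might look like the following; the module names, core count, account, and the VASP path are placeholders to match your own build and allocation:

    #!/bin/bash
    #SBATCH --job-name=vasp_test
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --time=01:00:00
    #SBATCH --account <project-account>
    
    # load the same modules used for the build
    module load intelmpi
    export OMP_NUM_THREADS=1
    
    cd $SLURM_SUBMIT_DIR
    mpiexec path_to_vasp/vasp_std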

    Running VASP with GPUs

    See this VASP Manual page and this NVIDIA page for feature restrictions, input requirements, and performance tuning examples.  To achieve maximum performance, benchmarking of your particular calculation is essential.  As a point of reference, although GPUs are the scarce resource, some users report that optimal performance is achieved with 3 or 4 MPI ranks per GPU.  This is expected to depend on the method and simulation size.

    If you encounter a CUDA error running a GPU enabled executable, such as:

    CUDA Error in cuda_mem.cu, line 44: all CUDA-capable devices are busy or unavailable
    Failed to register pinned memory!

    then you may need to use the default compute mode which can be done by adding this line (or equivalent) near the top of your script, e.g., for Owens:

    #SBATCH --nodes=1 --ntasks-per-node=28 --gpus-per-node=1 --gpu_cmode=shared
    

     

    Known Issues

    VASP job with Out-of-Memory crashes Owens Compute nodes

    There is a bug in VASP 5.4.1 built with mvapich2/2.2 on Owens such that a VASP job with an out-of-memory issue crashes the Owens compute node(s). We suggest using a newer version of VASP.
     

    Further Reading

    See Also

    Service: 

    VCFtools

    VCFtools is a program package designed for working with VCF files, such as those generated by the 1000 Genomes Project. The aim of VCFtools is to provide easily accessible methods for working with complex genetic variation data in the form of VCF files.

    Availability and Restrictions

    The following versions of VCFtools are available on OSC clusters:

    Version Owens Pitzer
    0.1.14 X X
    0.1.16 X* X*
    * Current default version

    You can use  module spider vcftools to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    VCFtools is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Adam Auton, Petr Danecek, Anthony Marcketta/ Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of VCFtools, run the following command: module load vcftools. The default version will be loaded. To select a particular VCFtools version, use module load vcftools/version . For example, use module load vcftools/0.1.14 to load VCFtools 0.1.14.
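
    For example, to compute per-site allele frequencies from a VCF file (input.vcf and the output prefix are placeholders):

    module load vcftools
    vcftools --vcf input.vcf --freq --out allele_freqs   # writes allele_freqs.frq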

    Usage on Pitzer

    Set-up

    To configure your environment for use of VCFtools, run the following command: module load vcftools. The default version will be loaded.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    VMD

    VMD is a visualization program for the display and analysis of molecular systems.

    Availability and Restrictions

    Versions

    The following versions of VMD are available on OSC clusters:

    Version Owens Pitzer
    1.9.3 X X
    1.9.4 (alpha) X* X*
    * Current default version

    Access

    VMD is for academic purposes only. Please review the license agreement before you use this software.

    Publisher/Vendor/Repository and License Type

    TCBG, Beckman Institute/ Open source

    Usage

    Usage on Owens and Pitzer

    Using VMD with OSC OnDemand

    It is recommended to use VMD with OSC OnDemand. On the OnDemand page launch the VMD GUI from the interactive apps dropdown menu. This will open the VMD Main, OpenGL Display, and terminal windows. End a session through the VMD Main window by selecting File → Quit.

    See VMD Tutorials for basic VMD usage instructions.

    Further Reading 

    Supercomputer: 
    Technologies: 
    Fields of Science: 

    VarScan

    VarScan is a platform-independent software tool developed at the Genome Institute at Washington University to detect variants in NGS data.

    Availability and Restrictions

    Versions

    The following versions of VarScan are available on OSC clusters:

    Version Owens
    2.4.1 X*
    * Current default version

    You can use  module spider varscan to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    VarScan is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    http://varscan.sourceforge.net, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of VarScan, run the following command: module load varscan. The default version will be loaded. To select a particular VarScan version, use module load varscan/version. For example, use module load varscan/2.4.1 to load VarScan 2.4.1.

    Usage

    This software is provided as a Java executable (.jar) file; thus, it cannot simply be added to the PATH environment variable.

    From module load varscan, a new environment variable, VARSCAN, will be set.

    Thus, users can use the software by running the following command:  java -jar $VARSCAN {other options}.
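
    For example, to call SNPs from an existing samtools mpileup file (sample.mpileup and the minimum variant frequency are placeholders):

    module load varscan
    java -jar $VARSCAN mpileup2snp sample.mpileup --min-var-freq 0.05 > sample_snps.txt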

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    VirtualGL

    VirtualGL allows OpenGL applications to run with 3D hardware acceleration.

    Availability & Restrictions

    Versions

    The following versions of VirtualGL are available on OSC clusters:

    Version Owens Pitzer Notes
    2.5.2 X    
    2.6   X  
    2.6.3 X    
    2.6.5 X* X*  
    * Current default version

    Access

    OSC provides VirtualGL to all OSC users.

    Publisher/Vendor/Repository and License Type

    The VirtualGL Project, Open source (wxWindows Library License)

    Usage

    Usage on Owens 

    Set-up

    Configure your environment for use of VirtualGL with  module load virtualgl. This will load the default version.

    Run a OpenGL program

    Users must invoke the vglrun command to run an OpenGL program with VirtualGL in a Virtual Desktop Interface (VDI) app or an Interactive HPC 'vis' type Desktop app, e.g.

    $ module load virtualgl
    $ vglrun glxinfo |grep OpenGL
    OpenGL vendor string: NVIDIA Corporation
    OpenGL renderer string: Tesla V100-PCIE-16GB/PCIe/SSE2
    OpenGL core profile version string: 4.6.0 NVIDIA 450.80.02
    OpenGL core profile shading language version string: 4.60 NVIDIA
    OpenGL core profile context flags: (none)
    OpenGL core profile profile mask: core profile
    OpenGL core profile extensions:
    OpenGL version string: 4.6.0 NVIDIA 450.80.02
    OpenGL shading language version string: 4.60 NVIDIA
    OpenGL context flags: (none)
    OpenGL profile mask: (none)
    OpenGL extensions:
    OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 450.80.02
    OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
    
    

    Usage on Pitzer

    Set-up

    Configure your environment for use of VirtualGL with  module load virtualgl. This will load the default version.

    Run a OpenGL program

    Users must invoke the vglrun command to run an OpenGL program with VirtualGL in a Virtual Desktop Interface (VDI) app or an Interactive HPC 'vis' type Desktop app, e.g.

    $ module load virtualgl
    $ vglrun glxinfo |grep OpenGL
    OpenGL vendor string: NVIDIA Corporation
    OpenGL renderer string: Tesla V100-PCIE-16GB/PCIe/SSE2
    OpenGL core profile version string: 4.6.0 NVIDIA 450.80.02
    OpenGL core profile shading language version string: 4.60 NVIDIA
    OpenGL core profile context flags: (none)
    OpenGL core profile profile mask: core profile
    OpenGL core profile extensions:
    OpenGL version string: 4.6.0 NVIDIA 450.80.02
    OpenGL shading language version string: 4.60 NVIDIA
    OpenGL context flags: (none)
    OpenGL profile mask: (none)
    OpenGL extensions:
    OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 450.80.02
    OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
    

    Further Reading 

    Supercomputer: 

    VisIt

    VisIt is an Open Source, interactive, scalable, visualization, animation and analysis tool for visualizing data defined on two- and three-dimensional structured and unstructured meshes.

     

    Availability and Restrictions

    Versions

    The following versions of VisIt are available on OSC systems: 

    Version Owens Pitzer
    2.11.0 X*  
    2.13.0 X  
    3.14 X X*
    * Current default version

    You can use module spider visit to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    VisIt is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Lawrence Livermore National Laboratory, BSD-3 License

    Usage

    Set-up

    We recommend running VisIt locally and connecting to OSC clusters for data analysis. In this client-server mode, users can visualize data stored on the clusters without downloading it. 

    Install VisIt locally 

    Download and install a binary distribution locally. The supported versions on OSC clusters are listed above. If you are using an unmatched version, there might be a compatibility issue. During the installation, you will be asked to pick a host profile from a list of computing centers. Please select Ohio Supercomputer Center (OSC) network to continue. If you are using any version prior to 3.2.2, the existing OSC profile is outdated and is not compatible with the current batch scheduler. Please refer to the following section to obtain the up-to-date profiles.

    Update host profiles (for version prior to 3.2.2)

    Please download the new OSC profiles for Owens and Pitzer and place them in $HOME/.visit/hosts if you are using macOS or Linux, or in <visit_installation>\hosts if you are using Windows. After relaunching VisIt, you should see new profiles named OSC Owens and OSC Pitzer.

    Further Reading

    Tag: 
    Supercomputer: 
    Service: 
    Fields of Science: 

    WARP3D

    From WARP3D's webpage:

    WARP3D is under continuing development as a research code for the solution of large-scale, 3-D solid models subjected to static and dynamic loads. The capabilities of the code focus on fatigue & fracture analyses primarily in metals. WARP3D runs on laptops-to-supercomputers and can analyze models with several million nodes and elements. 
    

    Availability and Restrictions

    Versions

    The following versions of WARP3D are available on OSC clusters:

    Version Owens Pitzer
    17.7.1 X  
    17.7.4 X  
    17.8.0 X  
    17.8.7 X X
    * Default version depends on the compiler and MPI version loaded

    You can use module spider warp3d to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access 

    WARP3D is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    University of Illinois at Urbana-Champaign, Open source

    Usage

    Usage on Owens

    Setup on Owens

    To configure the Owens cluster for the use of WARP3D, use the following commands:

    module load intel
    module load intelmpi
    module load warp3d
    

    Batch Usage on Owens

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations for Owens, and Scheduling Policies and Limits for more info.

    Running WARP3D

    Below is an example batch script (job.txt) for using WARP3D:

    #!/bin/bash
    #SBATCH --job-name WARP3D
    #SBATCH --nodes=1 --ntasks-per-node=28
    #SBATCH --time=30:00
    #SBATCH --account <project-account>
     
    # Load the modules for WARP3D
    module load intel/18.0.3
    module load intelmpi/2018.0
    module load warp3d
    # Copy files to $TMPDIR and move there to execute the program
    cp $WARP3D_HOME/example_problems_for_READMEs/mt_cohes_*.inp $TMPDIR
    cd $TMPDIR
    # Run the solver using 4 MPI tasks and 6 threads per MPI task 
    $WARP3D_HOME/warp3d_script_linux_hybrid 4 6 < mt_cohes_4_cpu.inp
    # Finally, copy files back to your home directory 
    cp -r * $SLURM_SUBMIT_DIR
    

    In order to run it via the batch system, submit the job.txt file with the following command:

    sbatch job.txt
    

    Usage on Pitzer

    Setup on Pitzer

    To configure the Pitzer cluster for the use of WARP3D, use the following commands:

    module load intel
    module load intelmpi
    module load warp3d
    

    Batch Usage on Pitzer

    Batch jobs can request multiple nodes/cores and compute time up to the limits of the OSC systems. Refer to Queues and Reservations and Scheduling Policies and Limits for more info.

    Running WARP3D

    Below is an example batch script (job.txt) for using WARP3D:

    #!/bin/bash
    #SBATCH --job-name WARP3D 
    #SBATCH --nodes=1 --ntasks-per-node=40 
    #SBATCH --time=30:00
    #SBATCH --account <project-account>
    
    # Load the modules for WARP3D
    module load intel
    module load intelmpi
    module load warp3d
    # Copy files to $TMPDIR and move there to execute the program
    cp $WARP3D_HOME/example_problems_for_READMEs/mt_cohes_*.inp $TMPDIR
    cd $TMPDIR
    # Run the solver using 4 MPI tasks and 6 threads per MPI task 
    $WARP3D_HOME/warp3d_script_linux_hybrid 4 6 < mt_cohes_4_cpu.inp
    # Finally, copy files back to your home directory 
    cp -r * $SLURM_SUBMIT_DIR
    

    In order to run it via the batch system, submit the job.txt file with the following command:

    sbatch job.txt

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    WCStools

    WCStools is a program package designed for working with images and the World Coordinate System. The aim of WCStools is to provide methods for relating pixels in common astronomical images to sky coordinates.

    Availability and Restrictions

    The following versions of WCStools are available on OSC clusters:

    Version Owens Pitzer
    3.9.7 X* X*
    * Current default version

    You can use  module spider wcstools to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    WCStools is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Jessica Mink, Smithsonian Astrophysical Observatory/ Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of WCStools, run the following command: module load wcstools. The default version will be loaded. To select a particular WCStools version, use module load wcstools/version. For example, use module load wcstools/3.9.7 to load WCStools 3.9.7.

    Usage on Pitzer

    Set-up

    To configure your environment for use of WCStools, run the following command: module load wcstools. The default version will be loaded. To select a particular WCStools version, use module load wcstools/version. For example, use module load wcstools/3.9.7 to load WCStools 3.9.7.
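
    After loading the module, the WCStools command-line utilities are on your path. Below is a minimal usage sketch with two of them, xy2sky and gethead; image.fits is a placeholder for your own FITS file.

    module load wcstools
    # convert pixel coordinates (x=100, y=200) in a FITS image to sky coordinates
    xy2sky image.fits 100 200
    # print selected header keywords from the same image
    gethead image.fits NAXIS1 NAXIS2 CRVAL1 CRVAL2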

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    Wine

    Wine is an open-source compatibility layer that allows Windows applications to run on Unix-like operating systems without a copy of Microsoft Windows.

    Availability and Restrictions

    Versions

    Version Owens Pitzer Note
    3.0.2 X    
    4.0.3 X    
    5.1 X*   only supports 64-bit Windows binaries
    6.0 X X*
    * Current default version

    You can use module spider wine to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    Wine is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    The Wine project authors, Open source

    Usage

    Set-up

    In the OnDemand Desktop app, run the following command:

    module load wine/version
    

    Using Wine

    Please note that versions 3.0.2, 4.0.3, and 5.1 are built with --enable-win64, so they cannot run 32-bit Windows binaries. You can run the following command to execute a 64-bit Windows binary:

    wine64 /path/to/windows_64bit_exe
    

    Starting with 6.0, Wine is built with Mono and Gecko. We recommend running wineboot -u to set up these libraries in your Wine prefix.

    Using another directory for C:\

    You can change the default Wine prefix from $HOME/.wine to another directory:

    mkdir -p $TMPDIR/my_wine_tmp
    module load wine/6.0
    export WINEPREFIX=$TMPDIR/my_wine_tmp
    wine wineboot -u
    wine winecfg

    Further Reading

     

     
    Tag: 
    Supercomputer: 
    Service: 
    Fields of Science: 

    XFdtd

    XFdtd is an electromagnetic simulation solver. It provides features for analyzing problems in antenna design and placement, biomedical and SAR applications, EMI/EMC, microwave devices, radar and scattering, automotive radar, and more.

    Availability and Restrictions

    Versions

    The following versions of XFdtd are available on OSC clusters:

    Version Owens Pitzer
    7.8.1.4 X* X*
    7.9.0.6 X X
    7.9.2.2 X X
    7.10.2.3 X X
    * Current default version

    You can use module spider xfdtd to view available modules for a given machine. We have a perpetual license for the currently installed versions but no maintenance license, so our support for XFdtd is limited and does not include version updates.

    Access

    Use of XFdtd for academic purposes requires validation. In order to obtain validation, please contact OSC Help for further instruction. 

    Publisher/Vendor/Repository and License Type

    Remcom Inc., Commercial

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of XFdtd, run the following command: module load xfdtd. The default version will be loaded. To specify a particular version, use the following command: module load xfdtd/version.

    Usage on Pitzer

    Set-up

    To configure your environment for use of XFdtd, run the following command: module load xfdtd. The default version will be loaded. To specify a particular version, use the following command: module load xfdtd/version.

    Further Reading

    Supercomputer: 

    amdblis

    AMDBLIS is a portable, open-source software framework for instantiating high-performance Basic Linear Algebra Subprograms (BLAS), such as dense linear algebra libraries. The framework was designed to isolate essential kernels of computation that, when optimized, immediately enable optimized implementations of most of the commonly used and computationally-intensive operations.

    Availability and Restrictions

    Versions

    amdblis is available on the Ascend Cluster. The versions currently available at OSC are:

    Version Ascend
    3.1 X*

    * Current Default Version

    You can use module spider amdblis to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    amdblis is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    AMD, amdblis uses the 3-clause BSD license; the full license may be found in the LICENSE file included with the amdblis distribution.

    Usage
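
    A minimal usage sketch: compiling a C program against the BLIS BLAS-compatible interface. This assumes the amdblis module adds the BLIS headers and libraries to the compiler search paths; check module show amdblis for the exact environment variables it sets.

    module load amdblis
    # link a C program that calls standard BLAS routines (e.g. dgemm_) against BLIS
    gcc my_blas_code.c -lblis -lm -lpthread -o my_blas_code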

    Supercomputer: 

    aocc

    The AMD Optimizing C/C++ and Fortran Compilers (“AOCC”) are a set of production compilers optimized for software performance when running on AMD host processors using the AMD “Zen” core architecture.  Supported processor families are AMD EPYC™, AMD Ryzen™, and AMD Ryzen™ Threadripper™ processors.  The AOCC compiler environment simplifies and accelerates development and tuning for x86 applications built with C, C++, and Fortran languages.

    Availability and Restrictions

    Versions

    aocc is available on the Pitzer, Owens, and Ascend clusters. The versions currently available at OSC are:

    Version Ascend
    3.2.0 X*

    * Current Default Version

    You can use module spider aocc to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    aocc is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    AMD, please review the license agreement carefully before use.

    Usage
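
    A minimal usage sketch with the AOCC compiler drivers (clang, clang++, and flang). The optimization flags below are only illustrative, and the source files are placeholders; consult the AOCC user guide for flags appropriate to your code and target processor.

    module load aocc
    # compile C and Fortran sources with the AOCC compilers
    clang -O3 hello.c -o hello_c
    flang -O3 hello.f90 -o hello_f90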

    Supercomputer: 

    bam2fastq

    bam2fastq is used to extract raw sequences (with qualities) from BAM files, as an alternative to similar utilities in SAMtools, Picard, and Bamtools.

    Availability and Restrictions

    Versions

    The following versions of bam2fastq are available on OSC clusters:

    Version Owens Pitzer
    1.1.0 X* X*
    * Current default version

    You can use module spider bam2fastq to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    bam2fastq is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Genomic Services Lab at Hudson Alpha, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of bam2fastq, run the following command: module load bam2fastq. The default version will be loaded. To select a particular bam2fastq version, use module load bam2fastq/version. For example, use module load bam2fastq/1.1.0 to load bam2fastq 1.1.0.

    Usage on Pitzer

    Set-up

    To configure your environment for use of bam2fastq, run the following command: module load bam2fastq. The default version will be loaded. To select a particular bam2fastq version, use module load bam2fastq/version. For example, use module load bam2fastq/1.1.0 to load bam2fastq 1.1.0.
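
    A minimal usage sketch is shown below; aln.bam is a placeholder for your own BAM file, and the # in the output name is replaced by the read number for paired-end data. Run the program without arguments to confirm the options available in the installed version.

    module load bam2fastq
    # extract reads (with qualities) from a BAM file into FASTQ files
    bam2fastq -o reads#.fastq aln.bam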

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    bcftools

    bcftools is a set of utilities that manipulate variant calls in the Variant Call Format (VCF) and its binary counterpart BCF.

    Availability and Restrictions

    Versions

    The following versions of bcftools are available on OSC clusters:

    Version Owens Pitzer
    1.3.1 X*  
    1.9   X*
    1.16 X X
    * Current default version

    You can use module spider bcftools to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    bcftools is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Genome Research Ltd., Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of bcftools, run the following command: module load bcftools. The default version will be loaded. To select a particular bcftools version, use module load bcftools/version. For example, use module load bcftools/1.3.1 to load bcftools 1.3.1.

    Usage on Pitzer

    Set-up

    To configure your environment for use of bcftools, run the following command: module load bcftools. The default version will be loaded.
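
    A minimal usage sketch is shown below; calls.vcf.gz is a placeholder for your own variant file.

    module load bcftools
    # peek at the records in a compressed VCF
    bcftools view calls.vcf.gz | head
    # compute summary statistics
    bcftools stats calls.vcf.gz > calls.stats.txt
    # convert the VCF to binary BCF
    bcftools view -O b -o calls.bcf calls.vcf.gz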

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    bedtools

    Collectively, the bedtools utilities are a swiss-army knife of tools for a wide-range of genomics analysis tasks. The most widely-used tools enable genome arithmetic: that is, set theory on the genome. While each individual tool is designed to do a relatively simple task, quite sophisticated analyses can be conducted by combining multiple bedtools operations on the UNIX command line.

    Availability and Restrictions

    Versions

    The following versions of bedtools are available on OSC clusters:

    Version Owens
    2.25.0 X
    2.29.2 X*
    * Current default version

    You can use module spider bedtools to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    bedtools is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Aaron R. Quinlan and Neil Kindlon, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of bedtools, run the following command: module load bedtools. The default version will be loaded. To select a particular bedtools version, use module load bedtools/version. For example, use module load bedtools/2.25.0 to load bedtools 2.25.0.
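
    A minimal usage sketch is shown below; a.bed and b.bed are placeholders for your own interval files.

    module load bedtools
    # report intervals in a.bed that overlap intervals in b.bed
    bedtools intersect -a a.bed -b b.bed > overlaps.bed
    # merge overlapping intervals (merge requires position-sorted input)
    bedtools sort -i overlaps.bed | bedtools merge -i stdin > merged.bed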

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    dcm2nii

    dcm2niix is designed to convert neuroimaging data from the DICOM format to the NIfTI format. The DICOM format is the standard image format generated by modern medical imaging devices. However, DICOM is very complicated and has been interpreted differently by different vendors. The NIfTI format is popular with scientists because it is simple and explicit. However, this simplicity also imposes limitations (e.g. it demands equidistant slices). dcm2niix is also able to generate a BIDS JSON format sidecar which includes relevant information for brain scientists in a vendor-agnostic and human-readable form. The Neuroimaging DICOM and NIfTI Primer provides details.

    Availability and Restrictions

    Versions

    dcm2nii is available on the Pitzer Cluster. The versions currently available at OSC are:

    Version Pitzer
    11_04_2023 X*

    * Current default version

    You can use module spider dcm2nii to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access: Anyone Can Use

    All users can use dcm2nii at OSC. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    This software is open source. The bulk of the code is covered by the BSD license. Some units are either public domain (nifti*.*, miniz.c) or use the MIT license (ujpeg.cpp). See the source GitHub repository for more details.
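
    A minimal usage sketch, assuming the module provides the dcm2niix executable; the DICOM and output directories below are placeholders. Run dcm2niix -h for the full option list.

    module load dcm2nii
    # convert a directory of DICOM files to compressed NIfTI plus a BIDS JSON sidecar
    mkdir -p nifti_out
    dcm2niix -z y -o nifti_out /path/to/dicom_dir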

    Supercomputer: 

    eXpress

    eXpress is a streaming tool for quantifying the abundances of a set of target sequences from sampled subsequences.

    Availability and Restrictions

    Versions

    The following versions of eXpress are available on OSC clusters:

    Version Owens
    1.5.1 X*
    * Current default version

    You can use  module spider express to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    eXpress is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Adam Roberts and Lior Pachter, Open source

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of eXpress, run the following command: module load express. The default version will be loaded. To select a particular eXpress version, use module load express/version. For example, use module load express/1.5.1 to load eXpress 1.5.1.
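
    A minimal usage sketch is shown below; targets.fa (the target transcript sequences) and hits.bam (read alignments to those targets) are placeholders for your own files.

    module load express
    # quantify target abundances from the alignments
    express targets.fa hits.bam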

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    fMRIPrep

    fMRIPrep is a functional magnetic resonance imaging (fMRI) data preprocessing pipeline that is designed to provide an easily accessible, state-of-the-art interface that is robust to variations in scan acquisition protocols and that requires minimal user input, while providing easily interpretable and comprehensive error and output reporting.

    Availability and Restrictions

    Versions

    The following versions of fMRIPrep are available on OSC systems: 

    Version Pitzer
    20.2.0 X*
    * Current default version

    You can use module spider fMRIPrep to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    fMRIPrep is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Developed at Poldrack Lab at Stanford University, for use at the Center for Reproducible Neuroscience (CRN), as well as for open-source software distribution.

    fMRIPrep uses the 3-clause BSD license; the full license may be found in the LICENSE file in the fMRIPrep distribution.

    All trademarks referenced herein are property of their respective holders.

    Copyright (c) 2015-2020, the fMRIPrep developers and the CRN. All rights reserved.

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of fMRIPrep, run the following command: module load fmriprep. The default version will be loaded. To select a particular fMRIPrep version, use module load fmriprep/version. For example, use module load fmriprep/20.2.0 to load fMRIPrep 20.2.0.

    fMRIPrep is installed in a Singularity container. The FMRIPREP_IMG environment variable contains the container image file path, so an example usage would be:

    module load fmriprep
    singularity exec $FMRIPREP_IMG fmriprep --help
    

    For more information about Singularity usage, please read the OSC Singularity page.
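
    Below is a sketch of a participant-level run via the container, following the pattern above. The BIDS directory, output directory, participant label, and FreeSurfer license path are placeholders; see the fMRIPrep documentation for the full set of command-line options.

    module load fmriprep
    singularity exec --cleanenv $FMRIPREP_IMG fmriprep \
        /path/to/bids_dir /path/to/output_dir participant \
        --participant-label 01 \
        --fs-license-file /path/to/freesurfer_license.txt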

    Further Reading

     

    Supercomputer: 
    Service: 
    Fields of Science: 

    ffmpeg

    FFmpeg is a free software project that produces a vast suite of libraries and programs for handling video, audio, and other multimedia files and streams.

    Availability and Restrictions

    Versions

    The following versions of FFmpeg are available on OSC clusters:

    Version Owens Ascend
    2.8.12  X*  
    4.0.2 X  
    4.1.3-static X  
    4.3.2   X
    6.1.1    X*
    * Current default version

    You can use  module spider ffmpeg to view available modules for a given machine. The static version is built by John Van Sickle, providing full FFmpeg features.  The non-static version is built on OSC systems and is useful for code development.  Feel free to contact OSC Help if you need other versions for your work.

    Access for Academic Users

    FFmpeg is available to all OSC users.  

    Publisher/Vendor/Repository and License Type

    https://www.ffmpeg.org/ Open source (academic)

    Usage

    Usage on Owens/Ascend

    Set-up

    To configure your environment for use of FFmpeg, run the following command:  module load ffmpeg. The default version will be loaded. 
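
    A minimal usage sketch is shown below; input.mov is a placeholder for your own media file, and the codecs available depend on the particular build (run ffmpeg -codecs to check).

    module load ffmpeg
    # print stream information for a media file
    ffprobe input.mov
    # convert to another container format using default codecs
    ffmpeg -i input.mov output.mp4
    # extract a single frame as an image
    ffmpeg -i input.mov -frames:v 1 frame.png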
     
    Further Reading
    Supercomputer: 
    Service: 
    Fields of Science: 

    metilene

    metilene is a software tool to annotate differentially methylated regions and differentially methylated CpG sites.

    Availability and Restrictions

    Versions

    The following versions of metilene are available on OSC clusters:

    Version Owens Pitzer
    0.2-7 X* X*
    * Current default version

    You can use module spider metilene to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    metilene is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Frank Jühling, Helene Kretzmer, Stephan H. Bernhart, Christian Otto, Peter F. Stadler & Steve Hoffmann, GNU GPL v2.0 license

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of metilene, run the following command: module load metilene. The default version will be loaded. To select a particular metilene version, use module load metilene/version. For example, use module load metilene/0.2-7 to load metilene 0.2-7.

    Usage on Pitzer

    Set-up

    To configure your environment for use of metilene, run the following command: module load metilene. The default version will be loaded. To select a particular metilene version, use module load metilene/version. For example, use module load metilene/0.2-7 to load metilene 0.2-7.
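
    A minimal usage sketch, assuming an input matrix already prepared in metilene's expected sorted, tab-separated format with group labels g1 and g2; the file name and labels below are placeholders, and you should consult the metilene documentation for the exact input preparation steps and options.

    module load metilene
    # call differentially methylated regions between groups g1 and g2
    metilene -a g1 -b g2 methylation_matrix.txt > dmrs.txt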

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    miRDeep2

    miRDeep2 is a completely overhauled tool which discovers microRNA genes by analyzing sequenced RNAs. The tool reports known and hundreds of novel microRNAs with high accuracy in seven species representing the major animal clades. The low consumption of time and memory combined with user-friendly interactive graphic output makes miRDeep2 accessible for straightforward application in current research.

    Availability and Restrictions

    Versions

    The following versions of miRDeep2 are available on OSC clusters:

    Version Owens
    2.0.0.8 X*
    * Current default version

    You can use module spider mirdeep2 to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    miRDeep2 is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Marc Friedlaender and Sebastian Mackowiak, freeware

    Usage

    Usage on Owens

    Set-up

    To configure your environment for use of miRDeep2, run the following command: module load mirdeep2. The default version will be loaded. To select a particular miRDeep2 version, use module load mirdeep2/version. For example, use module load mirdeep2/2.0.0.8 to load miRDeep2 2.0.0.8.

    Further Reading

    Supercomputer: 
    Service: 
    Fields of Science: 

    nccl

    The NVIDIA Collective Communication Library (NCCL) implements multi-GPU and multi-node communication primitives optimized for NVIDIA GPUs and Networking. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, reduce-scatter as well as point-to-point send and receive that are optimized to achieve high bandwidth and low latency over PCIe and NVLink high-speed interconnects within a node and over NVIDIA Mellanox Network across nodes.

    Availability and Restrictions

    Versions

    nccl is available on the Owens, Pitzer, and Ascend Clusters. The versions currently available at OSC are:

    Version Pitzer Owens Ascend
    2.11.4 X* X*  
    2.11.4-1     X*

    * Current Default Version

    You can use module spider nccl to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    nccl is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    NVIDIA, see NVIDIA's links listed here for licensing.

    SLA
    This document is the Software License Agreement (SLA) for NVIDIA NCCL. The following contains specific license terms and conditions for NVIDIA NCCL. By accepting this agreement, you agree to comply with all the terms and conditions applicable to the specific product(s) included herein.
     
    BSD License
    This document is the Berkeley Software Distribution (BSD) license for NVIDIA NCCL. The following contains specific license terms and conditions for NVIDIA NCCL open sourced. By accepting this agreement, you agree to comply with all the terms and conditions applicable to the specific product(s) included herein.

    Usage
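
    A minimal sketch of compiling a CUDA program that uses NCCL collectives. This assumes the nccl module adds the NCCL headers and libraries to the compiler search paths (check module show nccl) and that a compatible cuda module is loaded; the source file is a placeholder.

    module load cuda nccl
    # compile and link a CUDA source file that calls NCCL (e.g. ncclAllReduce)
    nvcc my_allreduce.cu -lnccl -o my_allreduce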

    Supercomputer: 

    nvhpc

    NVHPC, or NVIDIA HPC SDK, C, C++, and Fortran compilers support GPU acceleration of HPC modeling and simulation applications with standard C++ and Fortran, OpenACC® directives, and CUDA®. GPU-accelerated math libraries maximize performance on common HPC algorithms, and optimized communications libraries enable standards-based multi-GPU and scalable systems programming. Performance profiling and debugging tools simplify porting and optimization of HPC applications, and containerization tools enable easy deployment on-premises or in the cloud. With support for NVIDIA GPUs and Arm, OpenPOWER, or x86-64 CPUs running Linux, the HPC SDK provides the tools you need to build NVIDIA GPU-accelerated HPC applications.

    Availability and Restrictions

    Versions

    nvhpc is available on the Ascend Cluster. The versions currently available at OSC are:

    Version Ascend
    21.9 X*

    * Current Default Version

    You can use module spider nvhpc to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    nvhpc is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    NVIDIA, please review the license agreement carefully before use.

    Usage
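
    A minimal usage sketch with the NVIDIA HPC SDK compiler drivers (nvc, nvc++, nvfortran). The flags and source files below are illustrative: -acc enables OpenACC directives and -Minfo=accel reports what the compiler offloaded.

    module load nvhpc
    # build an OpenACC C program with GPU offload enabled
    nvc -acc -Minfo=accel saxpy.c -o saxpy
    # build a CUDA Fortran program
    nvfortran -cuda saxpy.cuf -o saxpy_f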

    Supercomputer: 

    oneAPI

    oneAPI is an open, cross-industry, standards-based, unified, multiarchitecture, multi-vendor programming model that delivers a common developer experience across accelerator architectures – for faster application performance, more productivity, and greater innovation. The oneAPI initiative encourages collaboration on the oneAPI specification and compatible oneAPI implementations across the ecosystem.

    Availability and Restrictions

    Versions

    oneAPI is available on Owens, Pitzer and Ascend. The versions currently available at OSC are:

    Version Owens Pitzer Ascend
    2021.4.0     X*
    2022.0.2     X
    2022.1.2 X    
    2023.1.0     X
    2023.2.0 X X X
    2024.0.2 X* X* X

    * Current Default Version

    You can use module spider oneapi to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    oneAPI is available to all OSC users. If you have any questions, please contact OSC Help.

    Publisher/Vendor/Repository and License Type

    Intel, see Intel's End User License Agreement page for information on the Licensing.

    Usage
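
    A minimal usage sketch with the oneAPI LLVM-based compiler drivers (icx, icpx, ifx). The source files are placeholders, and the SYCL example is only illustrative; offload targets depend on the hardware and runtime available on the node.

    module load oneapi
    # compile C and Fortran sources
    icx -O2 hello.c -o hello_c
    ifx -O2 hello.f90 -o hello_f90
    # compile a SYCL C++ program
    icpx -fsycl vector_add.cpp -o vector_add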

    Tag: 
    Supercomputer: 

    parallel-command-processor

    There are many instances where it is necessary to run the same serial program many times with slightly different input. Parametric runs such as these either end up running in a sequential fashion in a single batch job, or a batch job is submitted for each parameter that is varied (or somewhere in between). One alternative is to allocate a number of nodes/processors to running a large number of serial processes for some period of time. The command parallel-command-processor allows the execution of a large number of independent serial processes in parallel.

    parallel-command-processor works as follows: in a parallel job with N processors allocated, the PCP manager process reads the first N-1 commands in the command stream and distributes them to the other N-1 processors. As processes complete, the PCP manager reads the next command in the stream and sends it to an idle processor core. Once the PCP manager runs out of commands to run, it waits for the remaining running processes to complete before shutting itself down.

    Availability and Restrictions

    Parallel-Command-Processor is available for all OSC users.

    Publisher/Vendor/Repository and License Type

    Ohio Supercomputer Center, Open source

    Usage

    Here is an interactive batch session that demonstrates the use of parallel-command-processor with a config file, pconf. pconf contains several simple commands, one per line. The output of each command is redirected to an individual file.

    -bash-3.2$ sinteractive -A <project-account> -N 2 -n 8  
    -bash-3.2$ cp pconf $TMPDIR
    -bash-3.2$ cd $TMPDIR
    -bash-3.2$ cat pconf
    ls / > 1 
    ls $TMPDIR > 2 
    ls $HOME > 3 
    ls /usr/local/ > 4 
    ls /tmp > 5 
    ls /usr/src > 6 
    ls /usr/local/src > 7
    ls /usr/local/etc > 8 
    hostname > 9 
    uname -a > 10 
    df > 11
    -bash-3.2$ module load pcp
    -bash-3.2$ srun parallel-command-processor pconf
    -bash-3.2$ pwd
    /tmp/pbstmp.1371894 
    -bash-3.2$ srun --ntasks=2 ls -l $TMPDIR 
    854 total 16 
    -rw------- 1 yzhang G-3040 1082 Feb 18 16:26 11
    -rw------- 1 yzhang G-3040 1770 Feb 18 16:26 4 
    -rw------- 1 yzhang G-3040 67 Feb 18 16:26 5
    -rw------- 1 yzhang G-3040 32 Feb 18 16:26 6 
    -rw------- 1 yzhang G-3040 0 Feb 18 16:26 7 
    855 total 28
    -rw------- 1 yzhang G-3040 199 Feb 18 16:26 1
    -rw------- 1 yzhang G-3040 111 Feb 18 16:26 10
    -rw------- 1 yzhang G-3040 12 Feb 18 16:26 2
    -rw------- 1 yzhang G-3040 87 Feb 18 16:26 3 
    -rw------- 1 yzhang G-3040 38 Feb 18 16:26 8
    -rw------- 1 yzhang G-3040 20 Feb 18 16:26 9
    -rw------- 1 yzhang G-3040 163 Feb 18 16:25 pconf 
    -bash-3.2$ exit

    As the command "srun --ntasks=2 ls -l $TMPDIR" shows, the output files are distributed across the two nodes. In a batch file, pbsdcp/sgather can be used to distribute-copy the files to $TMPDIR on all nodes of the job and gather the output files once execution has completed. This step is important due to the load that executing many processes in parallel can place on the user home directories.

    Here is a slightly more complex example showing the usage of parallel-command-processor and pbsdcp/sgather:

    #!/bin/bash
    #SBATCH  --nodes=13 --ntasks-per-node=4 
    #SBATCH --time=1:00:00 
    #SBATCH -A <project-account> 
    
    
    date
    
    module load biosoftw 
    module load blast
    
    set -x
    
    pbsdcp -s query/query.fsa.* $TMPDIR 
    pbsdcp -s db/rice.* $TMPDIR 
    cd $TMPDIR
    
    for i in $(seq 1 49)
    
    do 
          cmd="blastall -p blastn -d rice -i query.fsa.$i -o out.$i" 
          echo ${cmd} >> runblast 
    done
    
    module load pcp
    srun parallel-command-processor runblast
    
    mkdir $SLURM_SUBMIT_DIR/output 
    sgather -r $TMPDIR $SLURM_SUBMIT_DIR/output
    
    date

    Further Reading

    The parallel-command-processor command is documented as a man page: man parallel-command-processor.

    Supercomputer: 
    Service: 

    xcpEngine

    The XCP imaging pipeline (XCP system) is a free, open-source software package for processing of multimodal neuroimages. The XCP system uses a modular design to deploy analytic routines from leading MRI analysis platforms, including FSL, AFNI, and ANTs.

    Availability and Restrictions

    Versions

    xcpEngine is available on the Pitzer cluster. These are the versions currently available:

    Version Pitzer Notes
    1.2.3 X*  
    * Current default version

    You can use module spider xcpengine to view available modules for a given machine. Feel free to contact OSC Help if you need other versions for your work.

    Access

    xcpEngine is available to all OSC users.

    Publisher/Vendor/Repository and License Type

    xcpEngine is free and open source.

    © Copyright 2019, Rastko Ciric, Adon F. G. Rosen, Guray Erus, Matthew Cieslak, Azeez Adebimpe, Philip A. Cook, Danielle S. Bassett, Christos Davatzikos, Daniel H. Wolf, Theodore D. Satterthwaite

    Usage

    Usage on Pitzer

    Set-up

    To configure your environment for use of xcpEngine, run the following command:  module load xcpengine. The default version will be loaded. To select a particular version, use  module load xcpengine/version. For example, use  module load xcpengine/1.2.3 to load xcpEngine 1.2.3.

    xcpEngine is installed in a Singularity container. The XCPENGINE_IMG environment variable contains the container image file path, so an example usage would be:

    module load xcpengine
    singularity run $XCPENGINE_IMG -h
    

    For more information about Singularity usage, please read the OSC Singularity page; you may also find useful information about using xcpEngine with the container here.
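
    Below is a sketch of a full pipeline run via the container; the design file, cohort file, and output directory are placeholders (see the xcpEngine documentation for how to construct them).

    module load xcpengine
    singularity run --cleanenv $XCPENGINE_IMG \
        -d /path/to/design.dsn \
        -c /path/to/cohort.csv \
        -o /path/to/output_dir \
        -t 1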

    Further Reading

     
    Supercomputer: 
    Service: 
    Fields of Science: