  • Owens Information Transition

    The Owens cluster will be decommissioned on February 3, 2025. Some pages may still reference Owens after it is decommissioned, and we are in the process of gradually updating the content. Thank you for your patience during this transition.

Owens

Linaro MAP

Linaro MAP is a full-scale profiler for HPC programs. We recommend using Linaro MAP after reviewing reports from Linaro Performance Reports. MAP supports pthreads, OpenMP, and MPI software on CPU-, GPU-, and MIC-based architectures.

OpenMPI

MPI is a standard library for parallel processing using a distributed-memory model. The Ruby, Owens, and Pitzer clusters at OSC can use the OpenMPI implementation of the Message Passing Interface (MPI).
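
As a minimal illustration (a generic sketch, not an OSC-specific recipe), the following C program reports each process's rank and the communicator size. The mpicc wrapper and mpiexec launcher mentioned in the comments are the usual OpenMPI tools; module names and launch commands on a given cluster may differ.

```c
/* hello_mpi.c - minimal MPI example (illustrative sketch).
 * Typical OpenMPI build/run commands (adjust for your environment):
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpiexec -n 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```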

PAPI

PAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events.

This software will be of interest only to HPC experts.
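
A minimal sketch of the PAPI low-level API is shown below. It assumes a PAPI installation where the PAPI_TOT_CYC and PAPI_TOT_INS preset events are available; actual counter availability depends on the processor, and the link flags are typical rather than definitive.

```c
/* papi_counts.c - minimal PAPI low-level API sketch.
 * Typical build command (adjust include/library paths as needed):
 *   gcc papi_counts.c -o papi_counts -lpapi
 */
#include <papi.h>
#include <stdio.h>

int main(void)
{
    int eventset = PAPI_NULL;
    long long counts[2];
    double sum = 0.0;

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI initialization failed\n");
        return 1;
    }
    PAPI_create_eventset(&eventset);
    PAPI_add_event(eventset, PAPI_TOT_CYC);   /* total cycles       */
    PAPI_add_event(eventset, PAPI_TOT_INS);   /* total instructions */

    PAPI_start(eventset);
    for (int i = 0; i < 1000000; i++)         /* work being measured */
        sum += i * 0.5;
    PAPI_stop(eventset, counts);

    printf("cycles=%lld instructions=%lld (sum=%g)\n",
           counts[0], counts[1], sum);
    return 0;
}
```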

ParMETIS / METIS

ParMETIS (Parallel Graph Partitioning and Fill-reducing Matrix Ordering) is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs and meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large-scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes developed in the Karypis lab.

METIS (Serial Graph Partitioning and Fill-reducing Matrix Ordering) is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill-reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes developed in the Karypis lab.
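
For illustration, here is a minimal serial example using the METIS 5.x C API. The graph (a 6-vertex chain), the part count, and the link flags are assumptions made for this sketch, not OSC-specific settings.

```c
/* metis_demo.c - minimal serial METIS partitioning sketch (METIS 5.x API).
 * Typical build command (adjust include/library paths as needed):
 *   gcc metis_demo.c -o metis_demo -lmetis
 */
#include <metis.h>
#include <stdio.h>

int main(void)
{
    /* A 6-vertex chain 0-1-2-3-4-5 in CSR (adjacency) form. */
    idx_t nvtxs  = 6;
    idx_t ncon   = 1;                       /* one balance constraint     */
    idx_t xadj[]   = {0, 1, 3, 5, 7, 9, 10};
    idx_t adjncy[] = {1, 0, 2, 1, 3, 2, 4, 3, 5, 4};
    idx_t nparts = 2;                       /* split into two parts       */
    idx_t objval;                           /* resulting edge cut         */
    idx_t part[6];                          /* partition of each vertex   */

    int status = METIS_PartGraphKway(&nvtxs, &ncon, xadj, adjncy,
                                     NULL, NULL, NULL,   /* default weights */
                                     &nparts, NULL, NULL, NULL,
                                     &objval, part);
    if (status != METIS_OK) {
        fprintf(stderr, "METIS_PartGraphKway failed\n");
        return 1;
    }
    printf("edge cut = %d\n", (int)objval);
    for (int i = 0; i < 6; i++)
        printf("vertex %d -> part %d\n", i, (int)part[i]);
    return 0;
}
```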

Firewall and Proxy Settings

Connections to OSC

In order for users to access OSC resources through the web, your firewall rules should allow connections to the following publicly facing IP ranges. Otherwise, users may be blocked or denied access to our services.

  • 192.148.248.0/24
  • 192.148.247.0/24
  • 192.157.5.0/25

The following TCP ports should be opened:

  • 80 (HTTP)
  • 443 (HTTPS)
  • 22 (SSH)

The following domain should be allowed:

SuiteSparse

SuiteSparse is a suite of sparse matrix algorithms, including UMFPACK (multifrontal LU factorization), CHOLMOD (supernodal Cholesky, with CUDA acceleration), SPQR (multifrontal QR), and many other packages.
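
As a brief illustration of the UMFPACK interface, the following sketch factors and solves the small 5x5 system used in the UMFPACK documentation. The link flags are typical rather than definitive and vary by installation.

```c
/* umfpack_demo.c - minimal SuiteSparse/UMFPACK sketch: solve Ax = b for a
 * small sparse matrix stored in compressed-column form.
 * Typical build command (adjust include/library paths as needed):
 *   gcc umfpack_demo.c -o umfpack_demo -lumfpack
 */
#include <umfpack.h>
#include <stdio.h>

int main(void)
{
    int n = 5;
    int    Ap[] = {0, 2, 5, 9, 10, 12};                 /* column pointers */
    int    Ai[] = {0, 1, 0, 2, 4, 1, 2, 3, 4, 2, 1, 4}; /* row indices     */
    double Ax[] = {2., 3., 3., -1., 4., 4., -3., 1., 2., 2., 6., 1.};
    double b[]  = {8., 45., -3., 3., 19.};
    double x[5];
    void *Symbolic, *Numeric;

    umfpack_di_symbolic(n, n, Ap, Ai, Ax, &Symbolic, NULL, NULL);   /* analyze */
    umfpack_di_numeric(Ap, Ai, Ax, Symbolic, &Numeric, NULL, NULL); /* factor  */
    umfpack_di_free_symbolic(&Symbolic);
    umfpack_di_solve(UMFPACK_A, Ap, Ai, Ax, x, b, Numeric, NULL, NULL);
    umfpack_di_free_numeric(&Numeric);

    for (int i = 0; i < n; i++)
        printf("x[%d] = %g\n", i, x[i]);   /* expected solution: 1 2 3 4 5 */
    return 0;
}
```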

Software Refresh

OSC installs new software versions on its systems in a timely manner and periodically performs a coordinated software refresh, updating the default versions to more recent releases and removing some versions that are significantly out of date. While we encourage everyone to use up-to-date software, the old defaults remain available until the next software refresh in case some users prefer them. The software refresh usually takes place during a scheduled downtime, and we send notifications to all users ahead of time so that any questions, suggestions, or concerns can be raised.

parallel-command-processor

There are many instances where it is necessary to run the same serial program many times with slightly different input. Parametric runs such as these either end up running sequentially within a single batch job, or a batch job is submitted for each parameter that is varied (or somewhere in between). One alternative is to allocate a number of nodes/processors to running a large number of serial processes for some period of time. The command parallel-command-processor allows the execution of a large number of independent serial processes in parallel.

GNU Compilers

Fortran, C, and C++ compilers produced by the GNU Project.
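
For illustration, here is a trivial C program with the typical gcc, g++, and gfortran invocations noted in the comments; exact module names and recommended optimization flags on OSC systems may differ.

```c
/* hello.c - trivial example program for the GNU compilers.
 * Typical invocations (illustrative, adjust flags as needed):
 *   gcc hello.c -o hello          (C)
 *   g++ hello.cpp -o hello        (C++)
 *   gfortran hello.f90 -o hello   (Fortran)
 */
#include <stdio.h>

int main(void)
{
    printf("Hello from GCC\n");
    return 0;
}
```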

Availability and Restrictions

Versions

The GNU Compiler Collection (GCC) is available on all of our clusters. These are the versions currently available:
