TIP: Remember to check the menu to the right of the page for related pages with more information about Owens' specifics.

OSC's Owens cluster, installed in 2016, is a Dell-built, Intel® Xeon® processor-based supercomputer.

Detailed system specifications:

  • 824 Dell Nodes
    • Dense Compute
      • 648 compute nodes (Dell PowerEdge C6320 two-socket servers with Intel Xeon E5-2680 v4 (Broadwell, 14 cores, 2.40GHz) processors, 128GB memory)
    • GPU Compute
      • 160 ‘GPU ready’ compute nodes (Dell PowerEdge R730 two-socket servers with Intel Xeon E5-2680 v4 (Broadwell, 14 cores, 2.40GHz) processors, 128GB memory)
      • NVIDIA Tesla P100 (Pascal) GPUs: 5.3TF peak (double precision), 16GB memory
    • Analytics
      • 16 huge-memory nodes (Dell PowerEdge R930 four-socket servers with Intel Xeon E5-4830 v3 (Haswell, 12 cores, 2.10GHz) processors, 1,536GB memory, 12 x 2TB drives)
  • 23,392 total cores
    • 28 cores/node and 128GB of memory/node
  • Mellanox EDR (100Gbps) InfiniBand networking
  • Theoretical system peak performance
    • ~750 teraflops (CPU only)
  • 4 login nodes
    • Intel Xeon E5-2680 (Broadwell) CPUs
    • 28 cores/node and 256GB of memory/node

How to Connect

  • SSH Method

To log in to Owens at OSC, ssh to the following hostname:

owens.osc.edu
You can either use an ssh client application or execute ssh on the command line in a terminal window as follows:

ssh <username>@owens.osc.edu

You may see a warning message that includes an SSH key fingerprint. Verify that the fingerprint in the message matches one of the SSH key fingerprints listed here, then type yes.

From there, you are connected to an Owens login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests. We use control groups on the login nodes to keep them stable; please use batch jobs for any compute-intensive or memory-intensive work. See the following sections for details.

  • OnDemand Method

You can also log in to Owens at OSC with our OnDemand tool. The first step is to log in to OnDemand. Once logged in, you can access Owens by clicking on "Clusters" and then selecting ">_Owens Shell Access".

Instructions on how to connect to OnDemand can be found on the OnDemand documentation page.

File Systems

Owens accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the old clusters. Full details of the storage environment are available in our storage environment guide.

Software Environment

The module system is used to manage the software environment on Owens. Use module load <package> to add a software package to your environment. Use module list to see which modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider. By default, you will have the batch scheduling software modules, the Intel compiler, and an appropriate version of mvapich2 loaded. A typical sequence is sketched below.
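For example, a short session might look like the following. The package name used here (hdf5) is only illustrative; check module avail or module spider for what is actually installed on Owens.

module list              # show the modules loaded by default
module avail             # list modules available to load
module spider hdf5       # search all modules, including hidden ones (illustrative package name)
module load hdf5         # add the package to your environment (illustrative package name)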

You can keep up to date on the software packages that have been made available on Owens by viewing the Software by System page and selecting the Owens system.

Compiling Code to Use Advanced Vector Extensions (AVX2)

The Haswell and Broadwell processors that make up Owens support the Advanced Vector Extensions (AVX2) instruction set, but you must set the correct compiler flags to take advantage of it. AVX2 has the potential to speed up your code by a factor of 4 or more, depending on the compiler and options you would otherwise use.

In our experience, the Intel and PGI compilers do a much better job than the gnu compilers at optimizing HPC code.

With the Intel compilers, use -xHost and -O2 or higher. With the gnu compilers, use -march=native and -O3. The PGI compilers by default use the highest available instruction set, so no additional flags are necessary.
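For example, compiling a hypothetical source file hello.c with each compiler family:

icc -xHost -O2 -o hello hello.c          # Intel: target the host's full instruction set
gcc -march=native -O3 -o hello hello.c   # gnu: target the build machine's architecture
pgcc -o hello hello.c                    # PGI: highest available instruction set by default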

This advice assumes that you are building and running your code on Owens. Because these options target the build machine's instruction set, the resulting executables will not be portable to older processors. Of course, any highly optimized builds, such as those employing the options above, should be thoroughly validated for correctness.

See the Owens Programming Environment page for details.

Batch Specifics

Refer to the documentation for our batch environment to understand how to use the batch system on OSC hardware. Some specifics you will need to know to create well-formed batch scripts:

  • Most compute nodes on Owens have 28 cores/processors per node. Huge-memory (analytics) nodes have 48 cores/processors per node.
  • Jobs on Owens may request partial nodes; a sample full-node request is sketched below.
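As an illustration only, here is a minimal batch script requesting one full compute node. This sketch assumes OSC's Slurm scheduler; the job name, walltime, project code, and executable are placeholders, so consult the batch environment documentation for the options OSC actually supports.

#!/bin/bash
#SBATCH --job-name=example_job    # placeholder job name
#SBATCH --nodes=1                 # one Owens compute node
#SBATCH --ntasks-per-node=28      # all 28 cores on that node
#SBATCH --time=00:30:00           # placeholder walltime
#SBATCH --account=PAS0000         # placeholder OSC project code

module load intel                 # Intel compilers are loaded by default; shown for completeness
./my_program                      # placeholder executable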

Using OSC Resources

For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.


Owens Known Issues (unresolved)

  • A partial-node MPI job failed to start using Intel MPI mpiexec (Owens, Pitzer, Software): A partial-node MPI job may fail to start using mpiexec from intelmpi/2019.3 and intelmpi/2019.7 with error messages like [mpiexec@o0439.ten.osc.... Read more
  • Cannot use mpiexec/mpirun from OpenMPI in an interactive session (Owens, Pitzer, Software): We found that mpiexec/mpirun from OpenMPI cannot be used in an interactive session (launched by sinteractive) after upgrading Pitzer and... Read more
  • Singularity: reached your pull rate limit (Owens, Pitzer, Software): You might encounter an error while pulling a large Docker image: "ERROR: toomanyrequests: Too Many Requests. You have reached your pull rate limit. You may..." Read more
  • cuda-gdb segmentation fault on startup (Owens, Pitzer, Software): The CUDA debugger, cuda-gdb, can raise a segmentation fault immediately upon execution. A workaround before executing cuda-gdb is to unload the xalt module, e.g.: module unload... Read more
  • Jobs report 'excessive memory usage' message (Owens, Pitzer): ... Read more
  • Error 'libim_client.so: undefined reference to uuid@' with MVAPICH2 in Conda environment (Owens, Pitzer, Software): Users may encounter an error like 'libim_client.so: undefined reference to `uuid_unparse@UUID_1.0' while compiling MPI applications with mvapich2 in some Conda environments. We found pre-installed... Read more
  • Incorrect MPI launcher and compiler wrappers with Conda environments python/2.7-conda5.2 and python/3.6-conda5.2 (Owens, Pitzer, Ruby, Software): Users may encounter under-performing MPI jobs or failures when compiling MPI applications if using Conda from the system. We found a pre-installed mpich2 package in some Conda environments... Read more
  • Large MPI job startup hang with mvapich2/2.3 and mvapich2/2.3.1 (Owens, Pitzer, Software): We have found that large MPI jobs may hang at startup with mvapich2/2.3 and mvapich/2.3.1 (on any compiler dependency) due to a known bug that has been fixed in release 2... Read more
  • Gaussian g16b01 G4 problem (Owens, Pitzer, Software): Gaussian-4 (G4) theory calculations in Gaussian 16 Rev. B.01 can produce erratic results. A workaround is to use Gaussian 16 Rev. A.03 or Rev. C.01, e.g.: module load gaussian/... Read more
  • Error when downloading SRA data on computing nodes (Owens, Pitzer, Software): NCBI blocks any connection from computing nodes because they are behind firewalls. Thus OSC users cannot use SRA tools to download data "on-the-fly" at runtime on computing nodes, e.g. 'fastq-dump... Read more


Owens Changelog