
Migrating jobs from other clusters

This page includes a summary of differences to keep in mind when migrating jobs from other clusters to Ascend. 

Guidance for Pitzer Users

Hardware Specifications

| Node type | Ascend (per node) | Pitzer (per node) |
| --- | --- | --- |
| Regular compute node | n/a | 40 cores and 192 GB of RAM, or 48 cores and 192 GB of RAM |
| Huge memory node | n/a | 48 cores and 768 GB of RAM (12 nodes in this class), or 80 cores and 3.0 TB of RAM (4 nodes in this class) |
| GPU node | 88 cores and 921 GB of RAM, 4 GPUs per node (24 nodes in this class) | 40 cores and 192 GB of RAM, 2 GPUs per node (32 nodes in this class), or 48 cores and 192 GB of RAM, 2 GPUs per node (42 nodes in this class) |
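
Because an Ascend GPU node offers 4 GPUs and 88 cores, job scripts tuned for Pitzer's 2-GPU, 40- or 48-core nodes usually need their resource requests adjusted. A minimal sketch of a full-node Ascend GPU job follows; the account, job name, and executable are placeholders:

```
#!/bin/bash
#SBATCH --account=PAS1234      # placeholder project ID
#SBATCH --job-name=gpu_test
#SBATCH --time=1:00:00
#SBATCH --nodes=1
#SBATCH --gpus-per-node=4      # Ascend GPU nodes have 4 GPUs (Pitzer's have 2)
#SBATCH --ntasks-per-node=88   # Ascend GPU nodes have 88 cores (vs 40/48 on Pitzer)

srun ./my_gpu_app              # placeholder executable
```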

Guidance for Owens Users

Hardware Specifications

| Node type | Ascend (per node) | Owens (per node) |
| --- | --- | --- |
| Regular compute node | n/a | 28 cores and 125 GB of RAM |
| Huge memory node | n/a | 48 cores and 1.5 TB of RAM (16 nodes in this class) |
| GPU node | 88 cores and 921 GB of RAM, 4 GPUs per node (24 nodes in this class) | 28 cores and 125 GB of RAM, 1 GPU per node (160 nodes in this class) |
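
The same caveat applies when coming from Owens: a script that assumes 28 cores or 1 GPU per node will underuse an Ascend node. To confirm the per-node CPU, memory, and GPU counts on any cluster, you can query Slurm directly; `sinfo` with standard format options is one way:

```
# Partition, node count, CPUs per node, memory per node (MB), and GPUs (GRES)
sinfo -o "%P %D %c %m %G"
```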

File Systems

Ascend accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory, project space, and scratch space as on the other clusters.
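
As an illustration, the usual OSC locations resolve identically on Ascend; the project ID below is a placeholder for your own:

```
echo $HOME              # home directory, shared across clusters
ls /fs/ess/PAS1234      # project space (replace PAS1234 with your project ID)
ls /fs/scratch/PAS1234  # scratch space (replace PAS1234 with your project ID)
```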

Software Environment

Ascend uses the same module system as other OSC clusters.

Use `module load <package>` to add a software package to your environment. Use `module list` to see which modules are currently loaded, and `module avail` to see which modules are available to load. To search for modules that may not be visible due to dependencies or conflicts, use `module spider <package>`.
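
For example, a typical session might look like the following; the `cuda` package is illustrative:

```
module spider cuda   # search for CUDA modules, including hidden ones
module load cuda     # add the default CUDA module to the environment
module list          # confirm which modules are loaded
module avail         # browse the modules available to load
```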

You can keep up to date on the software packages that have been made available on Ascend by viewing the Software by System page and selecting the Ascend system.

Programming Environment

C, C++, and Fortran are supported on the Ascend cluster. The Intel, oneAPI, GNU, nvhpc, and aocc compiler suites are available, and the Intel development toolchain is loaded by default. To switch to a different compiler, use `module swap`. Ascend also uses the MVAPICH2 implementation of the Message Passing Interface (MPI).

See the Ascend Programming Environment page for details. 
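
As a sketch, switching from the default Intel toolchain to GNU and rebuilding an MPI program might look like this; the source file name is a placeholder:

```
module swap intel gnu        # replace the default Intel compilers with GNU
module load mvapich2         # MPI implementation used on Ascend
mpicc -O2 -o hello hello.c   # compile an MPI C program (hello.c is a placeholder)
```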
