OSC's Ascend cluster was installed in fall 2022. It is a Dell-built cluster with AMD EPYC™ CPUs and NVIDIA A100 80GB GPUs, devoted entirely to intensive GPU processing.
Detailed system specifications:
- 24 Dell PowerEdge XE8545 nodes, each with:
- 2 AMD EPYC 7643 (Milan) processors (2.3 GHz, each with 44 usable cores)
- 4 NVIDIA A100 GPUs with 80GB memory each, supercharged by NVIDIA NVLink
- 921GB usable RAM
- 12.8TB NVMe internal storage
- 2,112 total usable cores
- 88 cores/node & 921GB of memory/node
- Mellanox/NVIDIA 200 Gbps HDR InfiniBand
- Theoretical system peak performance: 1.95 petaflops
- 2 login nodes
- IP addresses: 192.148.247.[180-181]
How to Connect
To log in to Ascend at OSC, ssh to the Ascend login hostname. You can either use an ssh client application or execute ssh on the command line in a terminal window.
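As a sketch, assuming the login hostname follows OSC's standard cluster naming (ascend.osc.edu; substitute your own OSC username):

```shell
# Connect to an Ascend login node over SSH
ssh username@ascend.osc.edu
```

On first connection you will be shown the host's SSH key fingerprint, as described below.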
You may see a warning message including SSH key fingerprint. Verify that the fingerprint in the message matches one of the SSH key fingerprints listed here, then type yes.
From there, you are connected to the Ascend login node and have access to the compilers and other software development tools. You can run programs interactively or through batch requests. We use control groups on login nodes to keep the login nodes stable. Please use batch jobs for any compute-intensive or memory-intensive work. See the following sections for details.
You can also login to Ascend at OSC with our OnDemand tool. The first step is to log into OnDemand. Then once logged in you can access Ascend by clicking on "Clusters", and then selecting ">_Ascend Shell Access".
Instructions on how to connect to OnDemand can be found at the OnDemand documentation page.
Ascend accesses the same OSC mass storage environment as our other clusters. Therefore, users have the same home directory as on the other clusters. Full details of the storage environment are available in our storage environment guide.
The module system on Ascend is the same as on the Owens and Pitzer systems. Use module load <package> to add a software package to your environment. Use module list to see what modules are currently loaded and module avail to see the modules that are available to load. To search for modules that may not be visible due to dependencies or conflicts, use module spider.
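The module commands above can be sketched as a typical session; the package name (cuda) is an illustrative example, not a list of what is installed on Ascend:

```shell
# See which modules are currently loaded in your environment
module list

# See which modules are available to load right now
module avail

# Search all modules, including those hidden by dependencies/conflicts
module spider cuda

# Add a package (here, an assumed CUDA module) to your environment
module load cuda
```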
You can keep up to date with the software packages that have been made available on Ascend by viewing the Software by System page and selecting the Ascend system.
Refer to this Slurm migration page to understand how to use Slurm on the Ascend cluster.
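As a minimal sketch of a Slurm batch job on Ascend, the script below requests one of a node's four A100 GPUs; the project account code and the loaded module are placeholders, not values taken from this page:

```shell
#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --account=PAS0000        # placeholder: your OSC project code
#SBATCH --time=00:30:00          # 30-minute walltime limit
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=1        # request 1 of the 4 A100 GPUs on a node

module load cuda                 # assumed module name; check module avail
nvidia-smi                       # report the GPU allocated to this job
```

Submit the script with sbatch, e.g. sbatch gpu_test.sh, and monitor it with squeue -u $USER.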
Using OSC Resources
For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.