
Technical Specifications

The following are the technical specifications for Cardinal.

Number of Nodes

378 nodes

Number of CPU Sockets

756 (2 sockets/node for all nodes)

Number of CPU Cores

39,312

Cores Per Node

104 cores/node for all nodes (96 usable)
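
As a quick sanity check, the aggregate socket and core counts above follow directly from the per-node figures. A minimal Python sketch of that arithmetic (not an OSC tool; all numbers are copied from this page):

    # Cross-check Cardinal's aggregate socket and core counts from the per-node figures.
    nodes = 378
    sockets_per_node = 2
    cores_per_socket = 52   # Xeon Max 9470 / Xeon Platinum 8470

    print(nodes * sockets_per_node)                     # 756 sockets
    print(nodes * sockets_per_node * cores_per_socket)  # 39,312 cores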

Local Disk Space Per Node
  • 1.6 TB for compute nodes
  • 12.8 TB for GPU and large memory nodes
  • 3.2 TB for login nodes
Compute, Large Memory & Login Node CPU Specifications
Intel Xeon CPU Max 9470 (Sapphire Rapids) with HBM2e
  • 2.0 GHz
  • 52 cores per processor (48 usable)
GPU Node CPU Specifications
Intel Xeon Platinum 8470 (Sapphire Rapids)
  • 2.0 GHz
  • 52 cores per processor
Server Specifications
  • 326 Dell PowerEdge C6620 (compute nodes)
  • 32 Dell PowerEdge XE9640 (GPU nodes)
  • 20 Dell PowerEdge R660 (large memory & login nodes)
Accelerator Specifications

NVIDIA H100 (Hopper) GPUs, each with 96 GB HBM2e memory and NVIDIA NVLink

Number of Accelerator Nodes

32 quad-GPU nodes (4 GPUs per node; 128 GPUs total)

Total Memory

~281 TB (44 TB HBM, 237 TB DDR5)

Memory Per Node
  • 128 GB HBM / 512 GB DDR5 (compute nodes)
  • 1 TB DDR5 (GPU nodes)
  • 128 GB HBM / 2 TB DDR5 (large memory nodes)
  • 128 GB HBM / 1 TB DDR5 (login nodes)
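
As a rough cross-check of the Total Memory figure above, the per-node sizes can be multiplied by the server counts. The split of the 20 R660 servers between large memory and login nodes is not stated on this page, so the 16/4 split below is an assumption; with it, the totals land within rounding of the quoted 44 TB HBM and 237 TB DDR5:

    # Rough cross-check of Total Memory; TB here means 1000 GB.
    # ASSUMPTION: the 20 R660 servers split as 16 large memory + 4 login nodes
    # (the split is not stated on this page).
    compute, gpu, largemem, login = 326, 32, 16, 4

    hbm_gb = (compute + largemem + login) * 128   # GPU nodes list no HBM tier
    ddr5_gb = compute * 512 + gpu * 1024 + largemem * 2048 + login * 1024

    print(hbm_gb / 1000, ddr5_gb / 1000)  # ~44.3 TB HBM, ~236.5 TB DDR5
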
Memory Per Core
  • 1.2 GB HBM / 4.9 GB DDR5 (compute nodes)
  • 9.8 GB DDR5 (GPU nodes)
  • 1.2 GB HBM / 19.7 GB DDR5 (large memory nodes)
  • 1.2 GB HBM / 9.8 GB DDR5 (login nodes)
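
The per-core figures are simply the per-node sizes divided by the 104 cores per node; a minimal sketch of that arithmetic:

    # Memory Per Core = Memory Per Node / 104 cores per node.
    cores_per_node = 104
    for label, gb in [("compute HBM", 128), ("compute DDR5", 512),
                      ("GPU node DDR5", 1024), ("large memory DDR5", 2048)]:
        print(f"{label}: {gb / cores_per_node:.1f} GB/core")
    # -> 1.2, 4.9, 9.8, 19.7 GB/core
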
Interconnect
  • NDR200 InfiniBand (200 Gbps) (compute, large memory, login nodes)
  • 4× NDR400 InfiniBand (4 × 400 Gbps) with GPUDirect, allowing non-blocking communication among up to 10 nodes (GPU nodes)