Detailed system specifications:
378 Dell nodes, 39,312 total cores, 128 GPUs
Dense Compute: 326 Dell PowerEdge C6620 two-socket servers, each with:
2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors
128 GB HBM2e and 512 GB DDR5 memory
1.6 TB NVMe local storage
NDR200 InfiniBand
GPU Compute: 32 Dell PowerEdge XE9640 two-socket servers, each with:
2 Intel Xeon Platinum 8470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors
1 TB DDR5 memory
4 NVIDIA H100 (Hopper) GPUs each with 94 GB HBM2e memory and NVIDIA NVLink
12.8 TB NVMe local storage
4 NDR400 InfiniBand HCAs supporting GPUDirect
Analytics: 16 Dell PowerEdge R660 two-socket servers, each with:
2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors
128 GB HBM2e and 2 TB DDR5 memory
12.8 TB NVMe local storage
NDR200 InfiniBand
Login nodes: 4 Dell PowerEdge R660 two-socket servers, each with:
2 Intel Xeon CPU Max 9470 (Sapphire Rapids, 52 cores [48 usable], 2.0 GHz) processors
128 GB HBM2e and 1 TB DDR5 memory
3.2 TB NVMe local storage
NDR200 InfiniBand
IP address: TBD
~10.5 PF theoretical system peak performance (see the verification sketch after this list)
~8 PF (GPU)
~2.5 PF (CPU)
9 physical racks, plus 2 Coolant Distribution Units (CDUs) providing direct-to-chip liquid cooling for all nodes
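As a quick sanity check, the headline node, core, GPU, and peak-performance figures can be reproduced from the per-tier specs above. Below is a minimal Python sketch; the 32 FP64 FLOPs/cycle/core rate (two AVX-512 FMA units) and the ~62.5 TFLOPS FP64 per GPU (inferred from the quoted ~8 PF across 128 GPUs, close to the H100's FP64 Tensor Core peak) are assumptions, not figures from this page.

```python
# Cross-check of the aggregate figures above from the per-tier specs.
# Assumptions (not stated on this page): 32 FP64 FLOPs/cycle/core on
# the CPUs (two AVX-512 FMA units), and ~62.5 TFLOPS FP64 per GPU,
# inferred from the quoted ~8 PF across 128 GPUs.

tiers = {
    # name: (node count, GPUs per node)
    "dense compute": (326, 0),
    "gpu compute":   (32, 4),
    "analytics":     (16, 0),
    "login":         (4, 0),
}
CORES_PER_NODE = 104      # 2 sockets x 52 cores
CPU_GHZ = 2.0
FLOPS_PER_CYCLE = 32      # assumed: 2x AVX-512 FMA, FP64
GPU_TFLOPS_FP64 = 62.5    # assumed from ~8 PF / 128 GPUs

nodes = sum(n for n, _ in tiers.values())
gpus = sum(n * g for n, g in tiers.values())
cores = nodes * CORES_PER_NODE

cpu_pf = cores * CPU_GHZ * 1e9 * FLOPS_PER_CYCLE / 1e15
gpu_pf = gpus * GPU_TFLOPS_FP64 / 1e3

print(f"nodes={nodes}, cores={cores:,}, GPUs={gpus}")
print(f"CPU ~{cpu_pf:.1f} PF + GPU ~{gpu_pf:.1f} PF "
      f"= ~{cpu_pf + gpu_pf:.1f} PF peak")
# nodes=378, cores=39,312, GPUs=128
# CPU ~2.5 PF + GPU ~8.0 PF = ~10.5 PF peak
```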
The following are technical specifications for Cardinal.
Number of nodes: 378
Number of CPU sockets: 756 (2 sockets/node for all nodes)
Number of CPU cores: 39,312
Cores per node: 104 (96 usable), for all nodes
GPUs: NVIDIA H100 (Hopper), each with 94 GB HBM2e memory and NVIDIA NVLink
GPU nodes: 32 quad-GPU nodes (4 GPUs per node)
Total memory: ~281 TB (44 TB HBM, 237 TB DDR5)
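The total-memory row can be cross-checked the same way against the per-tier node specs. A minimal sketch, assuming the quoted HBM figure counts CPU-attached HBM2e only (44 TB + 237 TB already sums to ~281 TB without the GPUs' HBM) and that GB sums are rounded at 1 TB = 1000 GB; both are assumptions made for this check.

```python
# Cross-check of the "~281 TB (44 TB HBM, 237 TB DDR5)" row from the
# per-tier node specs above. Assumed: the quoted totals count the
# CPU-attached HBM2e only (GPU HBM excluded) and round GB sums at
# 1 TB = 1000 GB.

tiers = [
    # (node count, CPU HBM GB/node, DDR5 GB/node)
    (326, 128, 512),    # dense compute
    (32,  0,   1024),   # GPU compute (Platinum 8470 has no CPU HBM)
    (16,  128, 2048),   # analytics
    (4,   128, 1024),   # login
]

hbm_gb = sum(n * h for n, h, _ in tiers)
ddr5_gb = sum(n * d for n, _, d in tiers)

print(f"HBM  ~{hbm_gb / 1000:.0f} TB")               # ~44 TB
print(f"DDR5 ~{ddr5_gb / 1000:.0f} TB")              # ~237 TB
print(f"total ~{(hbm_gb + ddr5_gb) / 1000:.0f} TB")  # ~281 TB
```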