NVIDIA Grace Hopper Superchip CG1 model 12V 144GB HBM3e - 900-2G530-0070-000
Cores: 72 | Threads: 72 | L3 cache: 117 MB | TDP: 450W-1000W
Product code | 204.172329 |
---|---|
Part number | 900-2G530-0070-000 |
Supermicro Part No. | GPU-NVGH480-144-12V |
Manufacturer | NVIDIA |
Availability | Not in stock |
Warranty | 24 months |
Weight | 0.2 kg |
The price includes all statutory fees.
Detailed information
NVIDIA GH200 Grace Hopper Superchip
The breakthrough accelerated CPU for large-scale AI and high-performance computing (HPC) applications.
- 72-core NVIDIA Grace CPU
- NVIDIA H100 Tensor Core GPU
- Up to 480GB of LPDDR5X memory with error-correction code (ECC)
- Supports 96GB of HBM3 or 144GB of HBM3e
- Up to 624GB of fast-access memory
- NVLink-C2C: 900GB/s of coherent memory
Power and Efficiency With the Grace CPU
The NVIDIA Grace CPU delivers 2X the performance per watt of conventional x86-64 platforms and is the world’s fastest Arm® data center CPU. The Grace CPU was designed for high single-threaded performance, high memory bandwidth, and outstanding data-movement capabilities. It combines 72 Neoverse V2 Armv9 cores with up to 480GB of server-class LPDDR5X memory with ECC, striking an optimal balance of bandwidth, energy efficiency, capacity, and cost. Compared to an eight-channel DDR5 design, the Grace CPU LPDDR5X memory subsystem provides up to 53 percent more bandwidth at one-eighth the power per gigabyte per second.
The Power of Coherent Memory
NVLink-C2C memory coherency increases developer productivity, performance, and the amount of GPU-accessible memory. CPU and GPU threads can concurrently and transparently access both CPU and GPU resident memory, allowing developers to focus on algorithms instead of explicit memory management. Memory coherency lets developers only transfer the data they need and not migrate entire pages to and from the GPU. It also provides lightweight synchronization primitives across GPU and CPU threads by enabling native atomics from both the CPU and GPU. Fourth-generation NVLink allows accessing peer memory with direct loads, stores, and atomic operations, so accelerated applications can solve larger problems more easily than ever.
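As a rough illustration of what this programming model allows, below is a minimal, hypothetical CUDA sketch. It assumes a GH200 system where the GPU can dereference system-allocated (plain malloc) memory over NVLink-C2C and a recent CUDA toolkit; the kernel name and sizes are illustrative only. A kernel atomically accumulates into a buffer that lives in Grace CPU memory, with no explicit copies or managed allocations.

```cuda
// Hypothetical sketch: direct GPU access to CPU-resident (malloc'd) memory,
// relying on the hardware coherency described above. Assumes a GH200 node.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void accumulate(const double *in, double *sum, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Native atomic on memory that resides in Grace (CPU) LPDDR5X.
        atomicAdd(sum, in[i]);
    }
}

int main() {
    const int n = 1 << 20;
    // Ordinary CPU allocations; no cudaMalloc/cudaMemcpy or
    // cudaMallocManaged is used here.
    double *in  = static_cast<double *>(malloc(n * sizeof(double)));
    double *sum = static_cast<double *>(malloc(sizeof(double)));
    for (int i = 0; i < n; ++i) in[i] = 1.0;
    *sum = 0.0;

    accumulate<<<(n + 255) / 256, 256>>>(in, sum, n);
    cudaDeviceSynchronize();

    printf("sum = %.0f\n", *sum);  // CPU reads the GPU-updated value in place
    free(in);
    free(sum);
    return 0;
}
```

On a platform without coherent CPU-GPU memory, the same buffers would have to be staged with cudaMemcpy or allocated with cudaMallocManaged; here only the data the kernel actually touches moves over NVLink-C2C.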
Class-Leading Performance for HPC and AI Workloads
The GH200 Grace Hopper Superchip is the first true heterogeneous accelerated platform for HPC workloads. It accelerates any application with the strengths of both GPUs and CPUs while providing the simplest and most productive heterogeneous programming model to date, enabling scientists and engineers to focus on solving the world’s most important problems. For AI inference workloads, GH200 Grace Hopper Superchips combine with NVIDIA networking technologies to provide the best TCO for scale-out solutions, letting customers take on larger datasets, more complex models, and new workloads using up to 624GB of fast-access memory. For AI training, up to 256 NVLink-connected GPUs can access up to 144TB of memory at high bandwidth for large language model (LLM) or recommender system training.
Grace Hopper HPC Performance
Parameters
HBM size (GB) | 144 |
---|---|
Socket | NVIDIA SoC |
Product line | Grace |
Generation | Hopper |
Core count | 72 |
CPU frequency (GHz) | 3.10 |
Turbo frequency (GHz) | 3.10 |
L3 cache (MB) | 117 |
TDP (W) | 450-1000 |
Processor series | Hopper |