NVIDIA® H200 clusters: available October 2024
NVIDIA® GB200: available soon

Partners who trust our services:
  • Freepik
  • Black Forest
  • ManifestAI
  • Nex
  • Sony
  • Harvard University
  • NEC
  • Korea University
  • MIT
  • Findable
GB200

Unleashing the next wave of AI and computing power

Supercharge your machine learning capacity

The GB200 Grace-Blackwell superchip pairs two Blackwell B200 GPUs with one Grace CPU, up from the previous generation's configuration of a single Hopper GPU and one Grace CPU. The components are interconnected via NVLink, forming a unified memory domain. Each GPU carries 192 GB of HBM3e memory, while the CPU is connected to 512 GB of LPDDR5 memory through 16 memory channels, for a total of 896 GB of unified memory accessible to all three devices.
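
As a quick sanity check on those figures, here is a minimal Python sketch that simply adds up the capacities quoted above; the constants mirror this page rather than values queried from hardware.

```python
# Back-of-the-envelope view of the unified memory inside one GB200
# Grace-Blackwell superchip, using the capacities quoted above.
HBM3E_PER_B200_GB = 192     # HBM3e per Blackwell B200 GPU
B200_PER_SUPERCHIP = 2      # two B200 GPUs per superchip
LPDDR5_PER_GRACE_GB = 512   # LPDDR5 attached to the Grace CPU (16 channels)

gpu_memory_gb = HBM3E_PER_B200_GB * B200_PER_SUPERCHIP      # 384 GB of HBM3e
unified_memory_gb = gpu_memory_gb + LPDDR5_PER_GRACE_GB     # 896 GB total

print(f"HBM3e per superchip:          {gpu_memory_gb} GB")
print(f"Unified memory per superchip: {unified_memory_gb} GB")
```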

The GB200 can scale up to 512 GPUs within a single NVLink domain. The primary building block for this configuration is the NVL72, which integrates 72 GPUs into one domain, so larger deployments are composed of whole NVL72 units.
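
As an illustration of how deployments compose out of NVL72 building blocks, the following hypothetical sizing helper (not part of any DataCrunch tooling) rounds a target GPU count up to whole NVL72 units.

```python
# Hypothetical sizing helper: how many NVL72 building blocks are needed
# for a target GPU count, given 72 GPUs per NVL72 as stated above.
import math

GPUS_PER_NVL72 = 72

def nvl72_units_needed(target_gpus: int) -> int:
    """Round up to whole NVL72 units; real deployments are sized per rack."""
    return math.ceil(target_gpus / GPUS_PER_NVL72)

for target in (72, 200, 512):
    units = nvl72_units_needed(target)
    print(f"{target:>4} GPUs -> {units} x NVL72 "
          f"({units * GPUS_PER_NVL72} GPUs provisioned)")
```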

Unlike an InfiniBand-based cluster, the GB200 uses NVLink for GPU interconnectivity. This provides significantly higher throughput and lower latency for GPU-to-GPU communication, optimizing performance for demanding computational tasks.
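
One practical way to see which GPU pairs in a node are connected over NVLink rather than PCIe or the network is the standard nvidia-smi topology report; the short Python wrapper below is just a convenience sketch.

```python
# Print the GPU interconnect matrix reported by nvidia-smi. Entries such
# as "NV#" indicate NVLink paths between GPU pairs; "PIX"/"SYS" indicate
# PCIe or system-level paths instead.
import subprocess

def print_gpu_topology() -> None:
    result = subprocess.run(
        ["nvidia-smi", "topo", "-m"],  # standard topology report
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)

if __name__ == "__main__":
    print_gpu_topology()
```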

Why GB200?

Up to 30x faster LLM inference than the previous Hopper generation

The GB200 NVL72: a new era in AI supercomputing

The GB200 NVL72 combines 36 Grace-Blackwell superchips into a single cluster of 72 GPUs, fully connected via NVLink. This is a significant improvement over previous generations: all 72 GPUs communicate directly over NVLink instead of InfiniBand.

In earlier clusters, such as those built around H100/H200 GPUs, the configuration was limited to 8 GPUs per 2 CPUs. A cluster with 64 H100 GPUs, for instance, required 8 HGX H100 servers with 8 GPUs each, ConnectX-7 InfiniBand adapters, an InfiniBand switch, and the necessary cabling. Users typically relied on NCCL and/or MPI to manage compute workloads over the InfiniBand network, with each H100 GPU able to reach any other H100's memory at 400 Gb/s.
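
As a generic illustration of that NCCL workflow (plain PyTorch, not specific to any particular cluster), the sketch below runs an all-reduce across the GPUs visible to one job; NCCL picks the fastest transport available, whether NVLink inside a node or InfiniBand between nodes.

```python
# Minimal NCCL all-reduce sketch (generic PyTorch, not GB200-specific).
# Launch with: torchrun --nproc-per-node=<num_gpus> this_script.py
import os
import torch
import torch.distributed as dist

def main() -> None:
    dist.init_process_group(backend="nccl")      # NCCL backend for GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor; the all-reduce sums them, travelling
    # over NVLink within a node and over InfiniBand between nodes.
    x = torch.ones(1024, 1024, device="cuda") * dist.get_rank()
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print("all-reduce done, element value:", x[0, 0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```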

With the introduction of the GH200 and now the GB200, GPU communication has advanced to NVLink over much larger domains. The GB200 NVL72 connects 72 GPUs within a single NVLink domain, allowing any B200 GPU to communicate with any other B200 at 1.8 TB/s. This high-speed connectivity is crucial for training and running inference on very large models that exceed the memory capacity of 8x H200 or 8x B200 setups.
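
To get a feel for why a 72-GPU NVLink domain matters for model capacity, here is a back-of-the-envelope sketch. The 16-bytes-per-parameter figure is a common rule of thumb for bf16 training with Adam optimizer state and is an assumption here, not a measured value; the B200 capacity is the one quoted above, and the H200 figure of 141 GB comes from NVIDIA's public spec.

```python
# Rough capacity check: does a model's training working set fit in a
# single 8-GPU node, or does it call for an NVL72-scale NVLink domain?
# 16 bytes/param is an assumed rule of thumb (bf16 weights and grads
# plus fp32 Adam state), not a measured figure.
HBM_H200_GB = 141   # per-GPU HBM3e on H200 (public NVIDIA spec)
HBM_B200_GB = 192   # per-GPU HBM3e on B200 (as quoted above)

def training_footprint_gb(params_billions: float, bytes_per_param: int = 16) -> float:
    return params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 bytes per GB

for params_b in (70, 405, 1800):
    need = training_footprint_gb(params_b)
    print(f"{params_b:>5}B params -> ~{need:,.0f} GB needed "
          f"(8x H200 = {8 * HBM_H200_GB} GB, NVL72 = {72 * HBM_B200_GB} GB)")
```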

GB200-NVL72

DataCrunch Cloud

Where speed meets simplicity in GPU solutions
  • Fast: dedicated hardware for maximum speed and security
  • Productive: start, stop, and hibernate instantly via the dashboard or API
  • Expert support: engineers are available via chat on the dashboard
  • Protected: DataCrunch is ISO 27001 certified
Reserve your perfect setup today

GPU clusters tailored to your needs

  • 1x GB200-NVL72 Cluster

    18 nodes, each containing:

    2x Grace CPUs

    4x Blackwell B200 GPUs

    1.8 TB/s interconnect over the full domain


    Storage
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
  • n x GB200-NVL72 Cluster

    n x 18 nodes, each containing:

    2x Grace CPUs

    4x Blackwell B200 GPUs

    1.8 TB/s interconnect over the full domain

    400-800 Gbit/s InfiniBand interconnect per GPU between the NVL72 clusters


    Storage
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
Looking for something different? Contact us