NVIDIA® Blackwell Clusters: Available Soon
Sign up and deploy in less than 1 minute

NVIDIA® HGX B200

GPU Instances and Clusters

Early and instant access to NVIDIA Blackwell GPUs, starting at $3.68/h*

Deploy now
Partners who trust our services:
  • Freepik
  • Black Forest
  • 1X
  • ManifestAI
  • Nex
  • Sony
  • Harvard University
  • NEC
  • Korea University
  • MIT
  • Findable

HGX B200 with DataCrunch

Where flexibility meets performance and simplicity

On-demand instances

Instant access to 1x, 2x, 4x, and 8x B200 GPUs
Deploy now

On-demand clusters

Self-serve access to multi-node clusters of 16 or more B200 GPUs
COMING SOON

Bare-metal clusters

Multi-node clusters pre-configured to your needs

HGX B200 Pricing

The fastest access to HGX B200 GPUs with reliable service and expert support
  • $4.90/h: On-demand dynamic pricing
  • $3.68/h: 2-year contract
  • $1.08/h: On-demand spot instance
All prices are for 1x HGX B200 VMs configured by DataCrunch for AI workloads.
Deploy now
Need more than 8x GPUs? Order GPU clusters pre-configured to your needs.
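
As a rough comparison of the three pricing options, the sketch below multiplies the listed 1x B200 rates by an average 730-hour month. The figures are illustrative only; dynamic and spot prices change over time.

```python
# Illustrative monthly cost of a single B200 GPU under each pricing option.
# Rates are the 1x B200 figures listed above; dynamic and spot prices vary.
HOURS_PER_MONTH = 730  # 8760 hours per year / 12 months

rates = {
    "On-demand dynamic": 4.90,
    "2-year contract": 3.68,
    "Spot instance": 1.08,
}

for name, rate in rates.items():
    print(f"{name:18s} ${rate:.2f}/h  ~ ${rate * HOURS_PER_MONTH:,.0f}/month")
```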

HGX B200 Specs

Designed for the most demanding AI and HPC workloads
  • 15x faster real-time LLM inference
  • 3x faster training performance
  • 12x lower energy use and TCO
Compared to NVIDIA HGX H100
B200

B200 virtual dedicated servers are powered by:

Up to 8 on-demand NVIDIA® HGX B200 180GB GPUs, using Blackwell Tensor Core technology combined with TensorRT-LLM and NVIDIA NeMo framework innovations.

This is the latest hardware from NVIDIA available on the market and purpose-built to accelerate inference for LLMs and mixture-of-experts (MoE) models.

We deploy the HGX platform with NVLink interconnect, which offers an aggregate memory bandwidth of up to 62TB/s and up to 1.8TB/s of GPU-to-GPU (P2P) bandwidth.

The host CPUs are sixth-generation Intel Xeon processors with up to 144 threads and a boost clock of up to 3.9GHz.
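
Once an instance is up, it is worth confirming that the GPU topology matches the configuration you ordered. The sketch below is a minimal check using PyTorch, assuming a CUDA-enabled build is installed on the instance; nvidia-smi topo -m reports the same information from the shell.

```python
# Minimal sanity check of GPU count, VRAM, and peer-to-peer (NVLink) access.
# Assumes a CUDA-enabled PyTorch build is installed on the instance.
import torch

count = torch.cuda.device_count()
print(f"Visible GPUs: {count}")

for i in range(count):
    props = torch.cuda.get_device_properties(i)
    print(f"  GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB VRAM")

# P2P access means GPUs can exchange tensors directly over NVLink
# instead of staging transfers through host memory.
for i in range(count):
    for j in range(count):
        if i != j and not torch.cuda.can_device_access_peer(i, j):
            print(f"  Warning: no P2P path between GPU {i} and GPU {j}")
```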

Instance name   GPU model         GPUs   vCPUs   RAM (GB)   VRAM (GB)   P2P        On-demand price   6-month price   2-year price
8B200.248V      B200 SXM5 180GB   8      248     2000       1440        1.8 TB/s   $39.20/h          $37.63/h        $29.40/h
4B200.124V      B200 SXM5 180GB   4      124     1000       720         1.8 TB/s   $19.60/h          $18.82/h        $14.70/h
2B200.62V       B200 SXM5 180GB   2      62      500        360         1.8 TB/s   $9.80/h           $9.41/h         $7.35/h
1B200.31V       B200 SXM5 180GB   1      31      250        180         n/a        $4.90/h           $4.70/h         $3.68/h
*Note: Price and discount are fixed after deployment, except for dynamically-priced instances, which update daily.
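
The instance types above can also be deployed programmatically instead of through the dashboard. The sketch below outlines such a call against the DataCrunch REST API; the endpoint paths, request fields, and image name shown here are assumptions and should be checked against the official API documentation before use.

```python
# Hedged sketch of deploying a 1x B200 VM via the DataCrunch REST API.
# Endpoint paths, field names, and the image identifier are assumptions;
# consult the official API documentation for the exact contract.
import requests

API_BASE = "https://api.datacrunch.io/v1"  # assumed base URL


def get_token(client_id: str, client_secret: str) -> str:
    """Exchange OAuth2 client credentials for a bearer token (assumed flow)."""
    resp = requests.post(
        f"{API_BASE}/oauth2/token",
        json={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def deploy_b200(token: str) -> dict:
    """Request a 1B200.31V instance (instance name taken from the table above)."""
    resp = requests.post(
        f"{API_BASE}/instances",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "instance_type": "1B200.31V",   # instance name from the spec table
            "image": "ubuntu-24.04-cuda",   # assumed image identifier
            "hostname": "b200-dev",
            "description": "single-GPU B200 dev box",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```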

DataCrunch instances

Where speed meets simplicity in GPU solutions
  • Fast: Dedicated hardware for maximum speed and security
  • Productive: Start, stop, and hibernate instances instantly via the dashboard or API (sketched below)
  • Expert support: Engineers are available via chat on the dashboard
  • Protected: DataCrunch is ISO 27001 certified and GDPR compliant
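
The start/stop/hibernate controls mentioned above are also exposed over the API, which makes it easy to pause instances when they are idle. The sketch below is only an outline: the endpoint path, payload shape, and action names are assumptions to be verified against the API reference.

```python
# Hedged sketch of instance lifecycle actions (start / stop / hibernate)
# through the DataCrunch REST API. Endpoint path, payload shape, and action
# names are assumptions; check the API reference for the exact values.
import requests

API_BASE = "https://api.datacrunch.io/v1"  # assumed base URL


def instance_action(token: str, instance_id: str, action: str) -> None:
    """Apply a lifecycle action such as 'start', 'shutdown', or 'hibernate'."""
    resp = requests.put(
        f"{API_BASE}/instances",
        headers={"Authorization": f"Bearer {token}"},
        json={"id": instance_id, "action": action},
        timeout=30,
    )
    resp.raise_for_status()


# Example: pause a dev box overnight and resume it in the morning.
# instance_action(token, instance_id, "hibernate")
# instance_action(token, instance_id, "start")
```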

Customer feedback

What they say about us...

  • "Having direct contact between our engineering teams enables us to move incredibly fast. Being able to deploy any model at scale is exactly what we need in this fast-moving industry. DataCrunch enables us to deploy custom models quickly and effortlessly."

    Iván de Prado, Head of AI at Freepik
  • "From deployment to training, our entire language model journey was powered by DataCrunch's clusters. Their high-performance servers and storage solutions allowed us to maintain smooth operations and maximum uptime, and to focus on achieving exceptional results without worrying about hardware issues."

    José Pombal, AI Research Scientist at Unbabel
  • "DataCrunch powers our entire monitoring and security infrastructure with exceptional reliability. We also enforce firewall restrictions to protect against unauthorized access. Thanks to DataCrunch, our training clusters run smoothly and securely."

    Nicola Sosio, ML Engineer at Prem AI
Deploy now, or check out our docs.
Reserve your perfect setup today

GPU clusters tailored to your needs

  • 16x B200 Cluster

    2x 8B200 bare-metal systems
    144C/288T per node
    1.5TB RAM
    7.68TB local NVMe
    800 Gbit/s InfiniBand
    100 Gbit/s Ethernet
    5 Gbit/s uplink

    Storage:
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
  • 32x B200 Cluster

    4x 8B200 bare-metal systems
    144C/288T per node
    1.5TB RAM
    7.68TB local NVMe
    800 Gbit/s InfiniBand
    100 Gbit/s Ethernet
    5 Gbit/s uplink

    Storage:
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
  • 64x B200 Cluster

    8x 8B200 bare-metal systems
    144C/288T per node
    1.5TB RAM
    7.68TB local NVMe
    800 Gbit/s InfiniBand
    100 Gbit/s Ethernet
    10 Gbit/s uplink

    Storage:
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
  • 128x B200 Cluster

    16x 8B200 bare-metal systems
    144C/288T per node
    1.5TB RAM
    7.68TB local NVMe
    800 Gbit/s InfiniBand
    100 Gbit/s Ethernet
    25 Gbit/s uplink

    Storage:
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
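
For multi-node training on these clusters, a common pattern is to launch one process per GPU on every node with torchrun and let NCCL carry gradient traffic over the InfiniBand fabric between nodes and NVLink within each node. The sketch below assumes a 2-node 16x B200 cluster with a CUDA-enabled PyTorch and NCCL on every node; the rendezvous address, script name, and model are placeholders.

```python
# train_ddp.py: minimal multi-node DDP skeleton for a 16x B200 (2-node) cluster.
# Launch from every node with torchrun, e.g.:
#   torchrun --nnodes=2 --nproc_per_node=8 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node-ip>:29500 \
#            train_ddp.py
# NCCL uses InfiniBand between nodes and NVLink within a node.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main() -> None:
    dist.init_process_group(backend="nccl")        # reads env vars set by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).to(f"cuda:{local_rank}")  # placeholder model
    model = DDP(model, device_ids=[local_rank])

    x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
    loss = model(x).sum()
    loss.backward()                                 # gradients all-reduce over NCCL

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```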
Looking for something different? Contact us