
NVIDIA® HGX B200
GPU Instances and Clusters
Early and instant access to Blackwell GPUs starting at $3.68/h*
HGX B200 with DataCrunch
Where flexibility meets performance and simplicity
- On-demand instances
- On-demand clusters
- Bare-metal clusters
HGX B200 Pricing
The fastest access to HGX B200 GPUs with reliable service and expert support
- On-demand dynamic pricing: $4.90/h
- 2-year contract: $3.68/h
- On-demand spot instance: $1.08/h

HGX B200 Specs
Designed for the most demanding AI and HPC workloads
- 15x faster real-time LLM inference
- 3x faster training performance
- 12x lower energy use and TCO

B200 virtual dedicated servers are powered by:
- Up to 8 on-demand NVIDIA® HGX B200 180GB GPUs with Blackwell Tensor Core technology, combined with TensorRT-LLM and NVIDIA NeMo framework innovations. This is NVIDIA's latest hardware on the market, purpose-built to accelerate inference for LLMs and mixture-of-experts (MoE) models.
- The HGX platform with NVLink interconnect, offering up to 62 TB/s of aggregate GPU memory bandwidth and up to 1.8 TB/s of GPU-to-GPU (P2P) bandwidth.
- Sixth-generation Intel Xeon CPUs with up to 144 threads and a clock boost of up to 3.9 GHz.
| Instance name | GPU model | GPUs | CPU cores | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price |
|---|---|---|---|---|---|---|---|---|---|
| 8B200.248V | B200 SXM5 180GB | 8 | 248 | 2000 | 1440 | 1.8 TB/s | $39.20/h | $37.63/h | $29.40/h |
| 4B200.124V | B200 SXM5 180GB | 4 | 124 | 1000 | 720 | 1.8 TB/s | $19.60/h | $18.82/h | $14.70/h |
| 2B200.62V | B200 SXM5 180GB | 2 | 62 | 500 | 360 | 1.8 TB/s | $9.80/h | $9.41/h | $7.35/h |
| 1B200.31V | B200 SXM5 180GB | 1 | 31 | 250 | 180 | N/A | $4.90/h | $4.70/h | $3.68/h |
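As a rough guide to how the hourly rates above compare over time, the sketch below estimates monthly and per-GPU costs for the 8-GPU instance. The 730-hour month and continuous utilization are assumptions for illustration only; actual billing terms come from DataCrunch.

```python
# Illustrative cost comparison for the 8B200.248V instance (8 GPUs)
# at the three published hourly rates. Assumes an average 730-hour
# month and 100% utilization -- a simplification, not billing logic.
HOURS_PER_MONTH = 730
GPUS_PER_INSTANCE = 8

rates = {
    "on-demand": 39.20,
    "6-month": 37.63,
    "2-year": 29.40,
}

for plan, hourly in rates.items():
    monthly = hourly * HOURS_PER_MONTH
    per_gpu_hour = hourly / GPUS_PER_INSTANCE
    print(f"{plan:>9}: ${monthly:,.2f}/month (${per_gpu_hour:.2f} per GPU-hour)")
```

At these rates the 2-year contract works out to $3.68 per GPU-hour, matching the headline price for the single-GPU instance.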
DataCrunch instances
Where speed meets simplicity in GPU solutions
Customer feedback
What they say about us...
- "Having direct contact between our engineering teams enables us to move incredibly fast. Being able to deploy any model at scale is exactly what we need in this fast-moving industry. DataCrunch enables us to deploy custom models quickly and effortlessly."
  Iván de Prado, Head of AI at Freepik
- "From deployment to training, our entire language model journey was powered by DataCrunch's clusters. Their high-performance servers and storage solutions allowed us to maintain smooth operations and maximum uptime, and to focus on achieving exceptional results without worrying about hardware issues."
  José Pombal, AI Research Scientist at Unbabel
- "DataCrunch powers our entire monitoring and security infrastructure with exceptional reliability. We also enforce firewall restrictions to protect against unauthorized access. Thanks to DataCrunch, our training clusters run smoothly and securely."
  Nicola Sosio, ML Engineer at Prem AI
GPU clusters tailored to your needs
- 16x B200 Cluster
  2x 8B200 bare-metal systems
  144C/288T per node
  1.5 TB RAM
  7.68 TB local NVMe
  800 Gbit/s InfiniBand
  100 Gbit/s Ethernet
  5 Gbit/s uplink
  Storage: Tier 1, up to 300 TB at 12 GB/s; Tier 2, up to 2 PB at 3 GB/s; Tier 3, up to 10 PB at 1 GB/s
- 32x B200 Cluster
  4x 8B200 bare-metal systems
  144C/288T per node
  1.5 TB RAM
  7.68 TB local NVMe
  800 Gbit/s InfiniBand
  100 Gbit/s Ethernet
  5 Gbit/s uplink
  Storage: Tier 1, up to 300 TB at 12 GB/s; Tier 2, up to 2 PB at 3 GB/s; Tier 3, up to 10 PB at 1 GB/s
- 64x B200 Cluster
  8x 8B200 bare-metal systems
  144C/288T per node
  1.5 TB RAM
  7.68 TB local NVMe
  800 Gbit/s InfiniBand
  100 Gbit/s Ethernet
  10 Gbit/s uplink
  Storage: Tier 1, up to 300 TB at 12 GB/s; Tier 2, up to 2 PB at 3 GB/s; Tier 3, up to 10 PB at 1 GB/s
- 128x B200 Cluster
  16x 8B200 bare-metal systems
  144C/288T per node
  1.5 TB RAM
  7.68 TB local NVMe
  800 Gbit/s InfiniBand
  100 Gbit/s Ethernet
  25 Gbit/s uplink
  Storage: Tier 1, up to 300 TB at 12 GB/s; Tier 2, up to 2 PB at 3 GB/s; Tier 3, up to 10 PB at 1 GB/s
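To see what the cluster sizes above add up to, the sketch below totals GPUs, VRAM, and system RAM per configuration. It assumes 8 GPUs with 180 GB HBM3e each and 1.5 TB RAM per node, as listed; treat it as illustrative arithmetic, not a spec sheet.

```python
# Aggregate capacity per cluster configuration, derived from the
# per-node figures above: 8 GPUs/node, 180 GB VRAM/GPU, 1.5 TB RAM/node.
# Assumption: "1.5 TB RAM" is per node, as with the core counts.
clusters = {"16x B200": 2, "32x B200": 4, "64x B200": 8, "128x B200": 16}

for name, nodes in clusters.items():
    gpus = nodes * 8
    vram_tb = gpus * 180 / 1000  # total HBM3e across all GPUs, in TB
    ram_tb = nodes * 1.5         # total system RAM, in TB
    print(f"{name}: {gpus} GPUs, {vram_tb:.2f} TB VRAM, {ram_tb:.1f} TB RAM")
```

For example, the largest configuration (16 nodes) totals 128 GPUs with roughly 23 TB of aggregate VRAM.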