On-demand NVIDIA® H200 141GB SXM5
Dynamic-price instances from $2.68/h*
NEW FEATURE
Dynamic pricing
Save up to 49% on GPU instances
This transparent pricing model adjusts daily based on market demand, providing a flexible, cost-effective alternative to fixed pricing for cloud GPU instances.
Users can cut costs by taking advantage of lower prices during periods of low demand, while the once-daily adjustment keeps spending predictable.
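For a rough sense of the discount, compare the dynamic rate quoted above with the fixed on-demand rate from the pricing table below. The sketch uses this page's figures only; it is illustrative arithmetic, not the pricing algorithm itself, and the advertised 49% maximum corresponds to lower dynamic rates during low-demand periods.

```python
# Illustrative only: rates taken from this page; not how the dynamic price is set.
fixed_on_demand = 3.59  # $/h, 1x H200 on-demand (see pricing table below)
dynamic_today = 2.68    # $/h, dynamic-price instance quoted above

savings = (fixed_on_demand - dynamic_today) / fixed_on_demand
print(f"Savings vs. fixed on-demand: {savings:.0%}")  # ~25% at these example rates
```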
Pricing comparison
Experience the best available GPU for Machine Learning
At the forefront of digital intelligence
Our servers exclusively use NVIDIA H200 SXM5 141GB modules connected via NVLink.
Via NVLink, the H200 achieves 900 GB/s of chip-to-chip interconnect bandwidth and leverages a 3200 Gbit/s InfiniBand™ interconnect.
Why SXM5?
Unmatched performance and speed
Interconnect bandwidth
Significantly higher interconnect bandwidth, with RDMA interconnects of up to 3200 Gbit/s
Memory bandwidth
SXM5 pairs the H200 with HBM3e memory delivering 4.8 TB/s of bandwidth, well above the roughly 2 TB/s of HBM2e-based PCIe cards
Power efficiency
A higher power budget (700 W vs. 350 W for PCIe) sustains more intensive computational workloads
Optimized for ML
SXM5 is ideal for large-scale HPC workloads and AI model training on massive datasets
H200 virtual dedicated servers are powered by:
Up to 8 NVIDIA® H200 141GB GPUs, each containing 16896 CUDA cores and 528 Tensor Cores.
This is the current flagship silicon from NVIDIA®, unbeaten in raw performance for AI operations.
We only deploy the H200 in the SXM5 form factor, which offers 4.8 TB/s of memory bandwidth and up to 900 GB/s of P2P bandwidth.
The servers use fourth-generation AMD EPYC (Genoa) processors with 384 threads and a boost clock of 3.7 GHz, offering a substantial performance uplift over alternatives.
The 8H200 VM has the same number of vCPUs as the 4H200, but delivers higher CPU performance by utilizing only the physical cores, rather than the logical processors (hyper-threads).
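A quick way to confirm that the NVLink P2P path is visible inside a multi-GPU instance is to query peer access from PyTorch. This is a minimal sketch assuming PyTorch with CUDA support is installed on the instance; it checks connectivity only and does not measure the 900 GB/s figure.

```python
import torch

# List visible GPUs and check pairwise peer (P2P) access over NVLink/PCIe.
n = torch.cuda.device_count()
print(f"{n} GPU(s) visible:", [torch.cuda.get_device_name(i) for i in range(n)])

for i in range(n):
    for j in range(n):
        if i != j and not torch.cuda.can_device_access_peer(i, j):
            print(f"GPU {i} -> GPU {j}: peer access unavailable")
print("Peer access check complete.")
```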
| Instance name | GPU model | GPUs | vCPUs | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price |
|---|---|---|---|---|---|---|---|---|---|
| 8H200.141S.176V | H200 SXM5 141GB | 8 | 176 | 1450 | 1128 | 900 GB/s | $28.72/h | $27.57/h | $21.54/h |
| 4H200.141S.176V | H200 SXM5 141GB | 4 | 176 | 740 | 564 | 900 GB/s | $14.36/h | $13.79/h | $10.77/h |
| 2H200.141S.88V | H200 SXM5 141GB | 2 | 88 | 370 | 282 | 900 GB/s | $7.18/h | $6.89/h | $5.38/h |
| 1H200.141S.44V | H200 SXM5 141GB | 1 | 44 | 185 | 141 | n/a | $3.59/h | $3.45/h | $2.69/h |
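To translate the hourly rates above into a monthly budget, a simple calculation for the 8-GPU instance looks like this (assuming roughly 730 hours of continuous use per month, which is an assumption for illustration, not a billing term):

```python
# Monthly cost and commitment discount for 8H200.141S.176V, using the table rates.
HOURS_PER_MONTH = 730  # assumption: continuous use

plans = {"on-demand": 28.72, "6-month": 27.57, "2-year": 21.54}  # $/h
baseline = plans["on-demand"] * HOURS_PER_MONTH

for name, rate in plans.items():
    monthly = rate * HOURS_PER_MONTH
    print(f"{name:>9}: ${monthly:,.0f}/month "
          f"({1 - monthly / baseline:.0%} below on-demand)")
```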
DataCrunch Cloud
Where speed meets simplicity in GPU solutions
GPU clusters tailored to your needs
16x H200 Cluster
2x 8H200 bare metal systems
192C/384T AMD Genoa per node
512GB RAM
7.68TB local NVMe
800 Gbit/s InfiniBand
100 Gbit/s Ethernet
Uplink 5 Gbit/s
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
32x H200 Cluster
4x 8H200 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
7.68TB local NVMe
1600 Gbit/s InfiniBand
100 Gbit/s Ethernet
Uplink 5 Gbit/s
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
64x H200 Cluster
8x 8H200 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
7.68TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
Uplink 10 Gbit/s
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
128x H200 Cluster
16x 8H200 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
7.68TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
Uplink 25 Gbit/s
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
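A common way to sanity-check the InfiniBand fabric on a multi-node cluster is a timed NCCL all-reduce via torch.distributed. The sketch below is an assumption-laden illustration, not an official benchmark of these clusters: it expects PyTorch with NCCL and a launch such as `torchrun --nnodes=<N> --nproc_per_node=8 allreduce_bw.py` on each node (the script name and exact launch flags are placeholders).

```python
import os
import time
import torch
import torch.distributed as dist

def main():
    # NCCL picks the fastest transport it finds: NVLink within a node,
    # InfiniBand/RDMA across nodes.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    world = dist.get_world_size()
    tensor = torch.ones(256 * 1024 * 1024, device="cuda")  # 1 GiB of float32

    for _ in range(5):  # warm-up
        dist.all_reduce(tensor)
    torch.cuda.synchronize()

    iters = 20
    start = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(tensor)
    torch.cuda.synchronize()
    avg = (time.perf_counter() - start) / iters

    size = tensor.numel() * tensor.element_size()
    # NCCL-style "bus bandwidth" for all-reduce: 2*(n-1)/n of the data per rank.
    bus_bw = 2 * (world - 1) / world * size / avg / 1e9
    if dist.get_rank() == 0:
        print(f"ranks={world}  avg all-reduce={avg * 1e3:.1f} ms  bus bw~{bus_bw:.1f} GB/s")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```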