On-demand NVIDIA® H100 80GB SXM5
Dynamic-price instances from $2.57/h*
NEW FEATURE
Dynamic pricing
Save up to 49% on GPU instances
This transparent pricing model adjusts daily based on market demand, providing a flexible and cost-effective alternative to fixed pricing for cloud GPU instances.
Users can reduce costs by taking advantage of lower prices during periods of low demand, while daily adjustments remain predictable.
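As a rough illustration of what the advertised "up to 49%" dynamic discount could mean over a month of continuous use, here is a minimal Python sketch. The dynamic rate is a hypothetical best-case figure, not a quoted price; the fixed rate is the 1x H100 on-demand price from the table further down.

```python
# Illustrative only: compares the fixed on-demand rate for a single H100
# against a hypothetical dynamic rate at the maximum advertised discount.
FIXED_ON_DEMAND = 2.65              # $/h, 1x H100 on-demand rate from the pricing table
DYNAMIC_RATE = 2.65 * (1 - 0.49)    # $/h, assuming the full "up to 49%" discount applies

HOURS_PER_MONTH = 730               # roughly 24/7 usage for one month

fixed_cost = FIXED_ON_DEMAND * HOURS_PER_MONTH
dynamic_cost = DYNAMIC_RATE * HOURS_PER_MONTH

print(f"fixed on-demand:     ${fixed_cost:,.2f}/month")
print(f"dynamic (best case): ${dynamic_cost:,.2f}/month")
print(f"difference:          ${fixed_cost - dynamic_cost:,.2f}/month")
```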
Pricing comparison
Experience the best available GPU for Machine Learning
At the forefront of digital intelligence
Our servers exclusively use NVIDIA H100 SXM5 80GB NVLink modules.
Via NVLink, the H100 achieves a chip-to-chip interconnect bandwidth of 900 GB/s and leverages a 3200 Gbit/s InfiniBand™ interconnect; a quick peer-to-peer check is sketched after the list below.
Why SXM5?
Unmatched performance and speed
Interconnect bandwidth
Significantly higher interconnect bandwidth, with up to 3200 Gbit/s RDMA interconnects
Memory bandwidth
SXM5 features faster HBM3 memory at 3 TB/s, compared to the 2 TB/s of the PCIe variant's HBM2e
Power efficiency
A higher power envelope (700 W, versus 350 W for the PCIe variant) enables more intensive computational tasks
Optimized for ML
SXM5 is ideal for large-scale HPC workloads and AI model training on massive datasets
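Before relying on the NVLink peer-to-peer path for multi-GPU training, it can be worth confirming that the GPUs in your instance see each other directly. A minimal sketch using PyTorch (assumes a CUDA-enabled install); the single-copy bandwidth probe is indicative only, not a calibrated benchmark.

```python
import time
import torch

assert torch.cuda.is_available(), "no CUDA GPUs visible"
n = torch.cuda.device_count()

# Check NVLink-style peer-to-peer access between every GPU pair.
for src in range(n):
    for dst in range(n):
        if src != dst:
            ok = torch.cuda.can_device_access_peer(src, dst)
            print(f"GPU {src} -> GPU {dst}: peer access {'yes' if ok else 'no'}")

# Indicative one-shot copy between the first two GPUs (not a calibrated benchmark).
if n >= 2:
    x = torch.empty(1 << 30, dtype=torch.uint8, device="cuda:0")  # 1 GiB buffer
    torch.cuda.synchronize("cuda:0")
    t0 = time.perf_counter()
    y = x.to("cuda:1")
    torch.cuda.synchronize("cuda:0")
    torch.cuda.synchronize("cuda:1")
    dt = time.perf_counter() - t0
    print(f"~{(x.numel() / 2**30) / dt:.1f} GiB/s for a single GPU 0 -> GPU 1 copy")
```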
H100 virtual dedicated servers are powered by:
Up to 8 NVIDIA® H100 80GB GPUs, each containing 16896 CUDA cores and 528 Tensor Cores.
This is the current flagship silicon from NVIDIA, unbeaten in raw performance for AI operations.
We deploy the SXM5 NVLink module, which offers 3 TB/s of memory bandwidth and up to 900 GB/s of P2P bandwidth.
Fourth-generation AMD EPYC (Genoa) CPUs, with up to 384 threads and a boost clock of 3.7 GHz.
The name 8H100.80S.176V is composed as follows: 8x H100 SXM5 (80 GB each), 176 CPU threads, and "V" for virtualized.
The 8H100 VM has the same number of vCPUs as the 4H100, but delivers higher CPU performance by utilizing only the physical cores, rather than the logical processors (hyper-threads).
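The naming scheme can be decoded mechanically. A minimal Python sketch based on the description above; the helper name and the returned fields are illustrative, not part of any official API.

```python
# Decodes instance names of the form "8H100.80S.176V":
# <GPU count>H100.<VRAM per GPU in GB>S.<CPU thread count>V
import re

def parse_instance_name(name: str) -> dict:
    m = re.fullmatch(r"(\d+)H100\.(\d+)S\.(\d+)V", name)
    if m is None:
        raise ValueError(f"unrecognized instance name: {name}")
    gpus, vram_gb, vcpus = (int(g) for g in m.groups())
    return {
        "gpus": gpus,                # number of H100 SXM5 GPUs
        "vram_gb_per_gpu": vram_gb,  # 80 GB per GPU; "S" = SXM5 form factor
        "vcpus": vcpus,              # CPU threads exposed to the VM; "V" = virtualized
    }

print(parse_instance_name("8H100.80S.176V"))
# {'gpus': 8, 'vram_gb_per_gpu': 80, 'vcpus': 176}
```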
| Instance name | GPU model | GPUs | vCPUs | RAM (GB) | VRAM (GB) | P2P bandwidth | On-demand price | 6-month price | 2-year price |
|---|---|---|---|---|---|---|---|---|---|
| 8H100.80S.176V | H100 SXM5 80GB | 8 | 176 | 1480 | 640 | 900 GB/s | $21.20/h | $20.35/h | $15.90/h |
| 4H100.80S.176V | H100 SXM5 80GB | 4 | 176 | 740 | 320 | 900 GB/s | $10.60/h | $10.18/h | $7.95/h |
| 2H100.80S.80V | H100 SXM5 80GB | 2 | 80 | 370 | 160 | 900 GB/s | $5.30/h | $5.09/h | $3.97/h |
| 1H100.80S.32V | H100 SXM5 80GB | 1 | 32 | 185 | 80 | / | $2.65/h | $2.54/h | $1.99/h |
| 1H100.80S.30V | H100 SXM5 80GB | 1 | 30 | 120 | 80 | / | $2.65/h | $2.54/h | $1.99/h |
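For a quick sense of what these hourly rates mean per month, here is a back-of-the-envelope Python sketch using the 8H100.80S.176V row above, assuming around-the-clock usage at roughly 730 hours per month.

```python
# Monthly cost of an 8H100.80S.176V instance under each pricing plan,
# using the hourly rates from the table above.
HOURS_PER_MONTH = 730

prices = {            # $/h for 8H100.80S.176V
    "on-demand": 21.20,
    "6-month":   20.35,
    "2-year":    15.90,
}

for plan, rate in prices.items():
    print(f"{plan:>9}: ${rate * HOURS_PER_MONTH:,.2f}/month")

saving = 1 - prices["2-year"] / prices["on-demand"]
print(f"2-year commitment saves about {saving:.0%} versus on-demand")
```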
DataCrunch Cloud
Where speed meets simplicity in GPU solutions
GPU clusters tailored to your needs
16x H100 Cluster
2x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536 GB DDR5 per node
8x 7.68 TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
5 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
32x H100 Cluster
4x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536 GB DDR5 per node
8x 7.68 TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
5 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
64x H100 Cluster
8x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536 GB DDR5 per node
8x 7.68 TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
10 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
128x H100 Cluster
16x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536 GB DDR5 per node
8x 7.68 TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
25 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
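To put the storage tiers in perspective, a small Python sketch estimating how long a sequential read of a sample dataset would take at each tier's nominal throughput; the 10 TB figure is an arbitrary example, and real-world throughput will vary with access patterns and concurrency.

```python
# Rough streaming-time estimate for each shared storage tier listed above.
DATASET_TB = 10

tiers = {        # tier: nominal sequential throughput in GB/s
    "Tier 1": 12,
    "Tier 2": 3,
    "Tier 3": 1,
}

for tier, gbps in tiers.items():
    hours = DATASET_TB * 1000 / gbps / 3600
    print(f"{tier}: ~{hours:.1f} h to stream {DATASET_TB} TB")
```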