On-demand H100 80GB SXM5
Deploy instances from $3.35/h*
*$2.51/h with a 2-year contract.
$3.35/h for Pay As You Go instances.
Experience the best available GPU for Machine Learning
At the forefront of digital intelligence
Our servers exclusively use NVIDIA H100 SXM5 80GB NVLink modules.
Via NVLink, the H100 achieves a chip-to-chip interconnect bandwidth of 900 GB/s, and our clusters leverage a 3200 Gbit/s InfiniBand™ interconnect between nodes.
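As a quick sanity check after deployment (a minimal sketch using standard PyTorch calls, not a DataCrunch-specific API), you can confirm that the GPUs in a multi-GPU instance expose peer-to-peer access to one another; on SXM5 systems that path is served by NVLink/NVSwitch:

```python
# Minimal sketch: verify GPU peer-to-peer access on a multi-GPU instance.
# Uses only standard PyTorch calls; assumes PyTorch with CUDA support is installed.
import torch

def check_p2p() -> None:
    n = torch.cuda.device_count()
    print(f"Visible GPUs: {n}")
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU {i} -> GPU {j}: peer access {'enabled' if ok else 'disabled'}")

if __name__ == "__main__":
    check_p2p()
```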
Why SXM5?
Unmatched performance and speed
Interconnect bandwidth
Significantly higher interconnect bandwidth, with up to 3200 Gbit/s RDMA interconnects.
Memory bandwidth
SXM5 features faster HBM3 memory at 3 TB/s, compared to the 2 TB/s HBM2e of the PCIe variant.
Power headroom
A higher power budget (700 W versus 350 W for the PCIe variant) enables more intensive computational tasks.
Optimized for ML
SXM5 is ideal for large-scale HPC workloads and AI model training on massive datasets.
The H100 virtual dedicated servers are powered by:
Up to 8 NVIDIA® H100 80GB GPUs, each containing 16,896 CUDA cores and 528 Tensor Cores.
This is the current flagship silicon from NVIDIA®, unbeaten in raw performance for AI operations.
We deploy the SXM5 NVLink module, which offers a memory bandwidth of 3 TB/s and up to 900 GB/s P2P bandwidth.
Fourth-generation AMD EPYC™ (Genoa) CPUs, with up to 384 threads and a boost clock of 3.7 GHz.
The instance name 8H100.80S.360V is composed as follows: 8x H100 SXM5 GPUs, 80 GB of VRAM per GPU (80S), and 360 virtualized CPU threads (360V).
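For illustration only, the naming scheme can be parsed mechanically; the small Python sketch below makes the convention concrete (the regex and field names are our own, not an official API):

```python
# Illustrative sketch: parse an instance name of the form
# <GPU count>H100.<VRAM per GPU>S.<CPU threads>V into its components.
import re

def parse_instance_name(name: str) -> dict:
    m = re.fullmatch(r"(\d+)H100\.(\d+)S\.(\d+)V", name)
    if m is None:
        raise ValueError(f"Unrecognized instance name: {name}")
    gpus, vram_gb, threads = (int(g) for g in m.groups())
    return {"gpus": gpus, "vram_per_gpu_gb": vram_gb, "cpu_threads": threads}

print(parse_instance_name("8H100.80S.176V"))
# -> {'gpus': 8, 'vram_per_gpu_gb': 80, 'cpu_threads': 176}
```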
| Instance name | GPU model | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P bandwidth | On-demand price | 6-month price | 2-year price |
|---|---|---|---|---|---|---|---|---|---|
| 8H100.80S.176V | H100 SXM5 80GB | 8 | 176 | 1480 | 640 | 900 GB/s | $26.80/h | $25.73/h | $20.10/h |
| 4H100.80S.120V | H100 SXM5 80GB | 4 | 120 | 480 | 320 | 300 GB/s | $13.40/h | $12.86/h | $10.05/h |
| 4H100.80S.88V | H100 SXM5 80GB | 4 | 88 | 740 | 320 | 900 GB/s | $13.40/h | $12.86/h | $10.05/h |
| 2H100.80S.60V | H100 SXM5 80GB | 2 | 60 | 240 | 160 | 300 GB/s | $6.70/h | $6.43/h | $5.03/h |
| 1H100.80S.30V | H100 SXM5 80GB | 1 | 30 | 120 | 80 | n/a | $3.35/h | $3.22/h | $2.51/h |
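As a worked example of the pricing above (rates copied from the table; check the dashboard for current prices), a few lines of Python estimate what a fixed-length run costs under each plan:

```python
# Back-of-the-envelope cost sketch for the 8H100.80S.176V instance.
# Hourly rates are taken from the pricing table above.
HOURLY_RATES_8H100 = {
    "on_demand": 26.80,
    "6_month": 25.73,
    "2_year": 20.10,
}

def run_cost(hours: float, rate_per_hour: float) -> float:
    """Total cost of keeping the instance running for `hours` at a given rate."""
    return hours * rate_per_hour

# Example: a 72-hour training run on a full 8x H100 node.
for plan, rate in HOURLY_RATES_8H100.items():
    print(f"{plan:>10}: ${run_cost(72, rate):,.2f}")
```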
DataCrunch Cloud
Where speed meets simplicity in GPU solutions
GPU clusters tailored to your needs
16x H100 Cluster
2x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
8x 7.68TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
5 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
32x H100 Cluster
4x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
8x 7.68TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
5 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
64x H100 Cluster
8x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
8x 7.68TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
10 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
128x H100 Cluster
16x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
8x 7.68TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
25 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
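To give a feel for how such a cluster is typically driven, below is a minimal multi-node training skeleton using PyTorch's torchrun launcher with NCCL over the InfiniBand fabric. It is an illustrative sketch only: node counts, the rendezvous endpoint, and script names are example values, not DataCrunch defaults.

```python
# Illustrative multi-node setup (train.py). Launch on every node with, e.g.:
#   torchrun --nnodes=2 --nproc-per-node=8 \
#       --rdzv-backend=c10d --rdzv-endpoint=<head-node-ip>:29500 train.py
# The head-node address and port are placeholders.
import os

import torch
import torch.distributed as dist

def main() -> None:
    # torchrun injects RANK, WORLD_SIZE, and LOCAL_RANK into the environment;
    # NCCL uses the InfiniBand fabric automatically when it is available.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    print(f"rank {dist.get_rank()}/{dist.get_world_size()} running on GPU {local_rank}")
    # ... build the model, wrap it in torch.nn.parallel.DistributedDataParallel, train ...
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```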