![H100-SXM5](/_image?href=%2Fassets%2FH100-SXM5.DXBtJnfA.png&w=1346&f=webp)
On-demand H100 80GB SXM5
Deploy instances from $3.17/h*
*$3.17/h for Pay As You Go instances; $2.38/h with a 2-year contract.
![H100-SXM5](/_image?href=%2Fassets%2FH100-2.BzF13_0_.png&f=webp)
Experience the best available GPU for Machine Learning
At the forefront of digital intelligence
Our servers exclusively use NVIDIA H100 SXM5 80GB NVLink modules.
Via NVLink, the H100 achieves a chip-to-chip interconnect bandwidth of 900 GB/s, and our clusters leverage a 3200 Gbit/s InfiniBand™ interconnect.
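To put those figures in perspective, here is a back-of-the-envelope sketch (illustrative only, not a benchmark: real throughput is below peak and depends on message size, topology, and protocol overhead) of how long an 80 GB payload takes to move over each link:

```python
# Rough transfer times for an 80 GB payload (one H100's worth of
# VRAM) over the interconnects quoted above. Peak rates only.
PAYLOAD_GB = 80

links = {
    "NVLink (900 GB/s)": 900,               # GB/s, chip-to-chip peak
    "InfiniBand (3200 Gbit/s)": 3200 / 8,   # Gbit/s -> GB/s = 400 GB/s
    "100 Gbit/s Ethernet": 100 / 8,         # = 12.5 GB/s
}

for name, rate_gb_s in links.items():
    seconds = PAYLOAD_GB / rate_gb_s
    print(f"{name}: {seconds:.2f} s")
```

At peak rates, the same transfer that takes under a tenth of a second over NVLink takes several seconds over commodity Ethernet, which is why the interconnect dominates multi-GPU scaling.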
Why SXM5?
Unmatched performance and speed:
Interconnect bandwidth
Significantly higher interconnect bandwidth, with up to 3200 Gbit/s RDMA interconnects.
Memory bandwidth
SXM5 features faster HBM3 memory at 3 TB/s, compared to the PCIe variant's 2 TB/s HBM2e.
Power headroom
A higher power budget (700 W vs. 350 W for PCIe) sustains more intensive computational workloads.
Optimized for ML
SXM5 is ideal for large-scale HPC workloads and AI model training on massive datasets.
![H100](/_image?href=%2Fassets%2FH100.5l2mxYGE.png&w=600&f=webp)
The H100 virtual dedicated servers are powered by:
Up to 8 NVIDIA® H100 80GB GPUs, each containing 16,896 CUDA cores and 528 Tensor Cores.
This is NVIDIA's current flagship silicon, unbeaten in raw performance for AI workloads.
We deploy the SXM5 NVLink module, which offers 3 TB/s memory bandwidth and up to 900 GB/s P2P bandwidth.
4th-generation AMD EPYC (Genoa) CPUs, with up to 384 threads and a boost clock of 3.7 GHz.
The name 8H100.80S.360V is composed as follows: 8x H100 SXM5 (80 GB each), 360 CPU threads, and V for virtualized.
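The naming scheme above can be decoded mechanically; a minimal sketch, assuming the three dot-separated fields always follow the `<GPU count>H100.<VRAM>S.<threads>V` pattern just described:

```python
import re

# Parse instance names like "8H100.80S.176V" into their parts,
# per the naming scheme: <GPU count>H100 . <VRAM per GPU>S . <CPU threads>V
NAME_RE = re.compile(r"^(\d+)H100\.(\d+)S\.(\d+)V$")

def parse_instance_name(name: str) -> dict:
    m = NAME_RE.match(name)
    if not m:
        raise ValueError(f"unrecognized instance name: {name!r}")
    gpus, vram, threads = map(int, m.groups())
    return {"gpus": gpus, "vram_gb": vram, "cpu_threads": threads}

print(parse_instance_name("8H100.80S.176V"))
# {'gpus': 8, 'vram_gb': 80, 'cpu_threads': 176}
```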
Instance name | GPU model | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price
---|---|---|---|---|---|---|---|---|---
8H100.80S.176V | H100 SXM5 80GB | 8 | 176 | 1480 | 640 | / | $25.36/h | $24.35/h | $19.02/h
4H100.80S.120V | H100 SXM5 80GB | 4 | 120 | 480 | 320 | / | $12.68/h | $12.17/h | $9.51/h
4H100.80S.88V | H100 SXM5 80GB | 4 | 88 | 740 | 320 | / | $12.68/h | $12.17/h | $9.51/h
2H100.80S.60V | H100 SXM5 80GB | 2 | 60 | 240 | 160 | / | $6.34/h | $6.09/h | $4.75/h
1H100.80S.30V | H100 SXM5 80GB | 1 | 30 | 120 | 80 | / | $3.17/h | $3.04/h | $2.38/h
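For sustained workloads, the contract discount compounds quickly. A quick sketch of monthly cost at continuous utilization, using the hourly rates from the table above (730 h is an average month):

```python
# Monthly cost at continuous use, on-demand vs. 2-year contract,
# using rates from the pricing table above.
HOURS_PER_MONTH = 730  # average month

rates = {  # (on-demand $/h, 2-year $/h)
    "1H100.80S.30V": (3.17, 2.38),
    "8H100.80S.176V": (25.36, 19.02),
}

for name, (on_demand, two_year) in rates.items():
    od_month = on_demand * HOURS_PER_MONTH
    ty_month = two_year * HOURS_PER_MONTH
    saving = 100 * (1 - two_year / on_demand)
    print(f"{name}: ${od_month:,.0f}/mo on demand, "
          f"${ty_month:,.0f}/mo on 2-year (~{saving:.0f}% off)")
```

Across all instance sizes, the 2-year rate works out to roughly 25% below the pay-as-you-go rate.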
DataCrunch Cloud
Where speed meets simplicity in GPU solutions.
GPU clusters tailored to your needs:
16x H100 Cluster
4x 4H100 bare metal systems
2x Intel Xeon Platinum 8462Y+, total 64C/128T
512GB RAM
7.68TB local NVMe
800 Gbit/s InfiniBand
100 Gbit/s Ethernet
5 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
32x H100 Cluster
4x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
7.68TB local NVMe
1600 Gbit/s InfiniBand
100 Gbit/s Ethernet
5 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
64x H100 Cluster
8x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
7.68TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
10 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
128x H100 Cluster
16x 8H100 bare metal systems
192C/384T AMD Genoa per node
1536GB DDR5 per node
7.68TB local NVMe
3200 Gbit/s InfiniBand
100 Gbit/s Ethernet
25 Gbit/s uplink
Storage
Tier 1: up to 300 TB at 12 GB/s
Tier 2: up to 2 PB at 3 GB/s
Tier 3: up to 10 PB at 1 GB/s
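Aggregate capacity for each cluster follows directly from the per-node figures above. A small sketch, using the 8H100 node specs listed (8 GPUs, 1536 GB DDR5, and 7.68 TB local NVMe per node, 80 GB VRAM per GPU):

```python
# Aggregate cluster resources from the per-node specs above.
def cluster_totals(nodes: int, gpus_per_node: int = 8,
                   ram_gb: int = 1536, nvme_tb: float = 7.68) -> dict:
    return {
        "gpus": nodes * gpus_per_node,
        "ram_gb": nodes * ram_gb,
        "nvme_tb": round(nodes * nvme_tb, 2),
        "vram_gb": nodes * gpus_per_node * 80,  # 80 GB per H100
    }

for nodes in (4, 8, 16):  # the 32x, 64x, and 128x H100 clusters
    print(nodes, "nodes ->", cluster_totals(nodes))
```

The 128x H100 configuration, for example, aggregates over 10 TB of HBM3 VRAM across its 16 nodes.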