NVIDIA® Blackwell Clusters: Available Soon
H100-SXM5
Sign up and deploy within minutes

On-demand NVIDIA® H100 80GB SXM5

Fixed-price instances from $2.19/h*

Deploy now
Partners who trust our services:
  • Freepik
  • Black Forest
  • 1X
  • ManifestAI
  • Nex
  • Sony
  • Harvard University
  • NEC
  • Korea University
  • MIT
  • Findable

NEW FEATURE

Dynamic pricing

Save up to 49% on GPU instances

This transparent pricing model adjusts daily based on market demand, providing a flexible and cost-effective alternative to fixed pricing for cloud GPU instances.

Users can cut costs by taking advantage of lower prices during periods of low demand, while once-daily adjustments keep spending predictable.
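For a rough sense of how the savings add up, here is a minimal sketch comparing a week of fixed on-demand pricing against hypothetical dynamic rates; the dynamic rates below are illustrative assumptions, not real quotes.

```python
# Illustrative only: hypothetical daily dynamic rates for a 1x H100 instance,
# compared against the fixed on-demand price of $2.19/h from this page.
FIXED_RATE = 2.19  # $/h, Pay As You Go (fixed after deployment)

# Assumed dynamic rates for one week in $/h (illustrative, not real quotes).
dynamic_rates = [1.25, 1.40, 1.18, 1.95, 2.10, 1.30, 1.12]

hours_per_day = 24
fixed_cost = FIXED_RATE * hours_per_day * len(dynamic_rates)
dynamic_cost = sum(rate * hours_per_day for rate in dynamic_rates)

savings = 1 - dynamic_cost / fixed_cost
print(f"Fixed:   ${fixed_cost:,.2f}")
print(f"Dynamic: ${dynamic_cost:,.2f}")
print(f"Savings: {savings:.0%}")  # roughly 33% with these assumed rates
```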

Pricing comparison

H100-SXM5

Experience the best available GPU for Machine Learning

At the forefront of digital intelligence

Our servers exclusively use NVIDIA H100 SXM5 80GB NVLink modules.

Via NVLink, the H100 achieves a chip-to-chip interconnect bandwidth of 900 GB/s, complemented by a 3,200 Gbit/s InfiniBand™ interconnect between nodes.
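As a back-of-the-envelope illustration of those figures, the sketch below estimates how long it would take to move one GPU's 80 GB of HBM over each link, ignoring protocol overhead and latency.

```python
# Back-of-the-envelope transfer times; ignores protocol overhead and latency.
HBM_PER_GPU_GB = 80          # H100 SXM5 80GB
NVLINK_GBPS = 900            # GB/s, chip-to-chip (from this page)
INFINIBAND_GBITPS = 3200     # Gbit/s per node (from this page)

infiniband_GBps = INFINIBAND_GBITPS / 8  # convert Gbit/s -> GB/s = 400 GB/s

print(f"Over NVLink:     {HBM_PER_GPU_GB / NVLINK_GBPS * 1000:.0f} ms")      # ~89 ms
print(f"Over InfiniBand: {HBM_PER_GPU_GB / infiniband_GBps * 1000:.0f} ms")  # ~200 ms
```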

$1.64/h with a 2-year contract
$2.19/h Pay As You Go instance

Why SXM5?

Unmatched performance and speed

H100 virtual dedicated servers are powered by:

Up to 8 NVIDIA H100 80GB GPUs (16,896 CUDA cores and 528 Tensor Cores each) in an SXM5 NVLink module, with 3 TB/s memory bandwidth and 900 GB/s P2P bandwidth.

Powered by 4th-gen AMD EPYC (Genoa) CPUs with 384 threads at a 3.7 GHz boost clock.

The instance name 8H100.80S.176V indicates 8 H100 SXM5 80GB GPUs with 176 virtualized CPU threads.

The 8H100 VM has the same vCPU count as the 4H100, but delivers higher CPU performance because its vCPUs are backed by physical cores rather than hyper-threads.
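As an illustration of the naming scheme, the sketch below parses an instance name into its parts; the parser is an informal interpretation of the convention described above, not an official utility.

```python
# Interpreting instance names like "8H100.80S.176V" per the naming scheme above.
# This parser is illustrative only, not an official DataCrunch utility.
import re

def parse_instance_name(name: str) -> dict:
    """Split e.g. '8H100.80S.176V' into GPU count, model, VRAM per GPU and vCPUs."""
    m = re.fullmatch(r"(\d+)(H100)\.(\d+)S\.(\d+)V", name)
    if m is None:
        raise ValueError(f"Unrecognized instance name: {name}")
    gpus, model, vram_gb, vcpus = m.groups()
    return {
        "gpu_count": int(gpus),          # leading digit(s): number of GPUs
        "gpu_model": f"{model} SXM5",    # 'S' suffix: SXM form factor
        "vram_per_gpu_gb": int(vram_gb), # 80 -> 80 GB HBM per GPU
        "vcpus": int(vcpus),             # trailing 'V': virtualized CPU threads
    }

print(parse_instance_name("8H100.80S.176V"))
# {'gpu_count': 8, 'gpu_model': 'H100 SXM5', 'vram_per_gpu_gb': 80, 'vcpus': 176}
```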

Instance name    GPU model        GPUs  CPU threads  RAM (GB)  VRAM (GB)  P2P        On-demand price  6-month price  2-year price
8H100.80S.176V   H100 SXM5 80GB   8     176          1480      640        900 GB/s   $17.52/h         $16.82/h       $13.14/h
4H100.80S.176V   H100 SXM5 80GB   4     176          740       320        900 GB/s   $8.76/h          $8.41/h        $6.57/h
2H100.80S.80V    H100 SXM5 80GB   2     80           370       160        900 GB/s   $4.38/h          $4.20/h        $3.29/h
1H100.80S.32V    H100 SXM5 80GB   1     32           185       80         N/A        $2.19/h          $2.10/h        $1.64/h
1H100.80S.30V    H100 SXM5 80GB   1     30           120       80         N/A        $2.19/h          $2.10/h        $1.64/h
*Note: Price and discount are fixed after deployment, except for dynamically-priced instances, which update daily.
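As a quick worked example from the table above, the sketch below estimates the monthly cost of an 8H100.80S.176V instance at each price point, assuming an average month of 730 hours.

```python
# Rough monthly cost for an 8H100.80S.176V instance at the three price points above.
HOURS_PER_MONTH = 730  # assumed average month length in hours

prices = {
    "on-demand": 17.52,  # $/h
    "6-month":   16.82,  # $/h
    "2-year":    13.14,  # $/h
}

for plan, rate in prices.items():
    print(f"{plan:>9}: ${rate * HOURS_PER_MONTH:>9,.2f} / month")
# on-demand: $12,789.60 / month
#   6-month: $12,278.60 / month
#    2-year: $ 9,592.20 / month
```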

DataCrunch Cloud

Where speed meets simplicity in GPU solutions
  • Fast: dedicated hardware for maximum speed and security
  • Productive: start, stop, and hibernate instances instantly via the dashboard or API (see the sketch below this list)
  • Reliable: a historical uptime of over 99.9%
  • Protected: DataCrunch is ISO 27001 certified and GDPR compliant
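Below is a minimal sketch of what API-driven instance control could look like; the base URL, endpoint, payload fields, and authentication shown are assumptions for illustration only, so refer to the official DataCrunch API documentation for the actual interface.

```python
# Hypothetical sketch of API-driven instance control.
# The base URL, endpoint, payload, and auth below are ASSUMPTIONS for illustration;
# consult the official DataCrunch API documentation for the real interface.
import requests

API_BASE = "https://api.example-cloud.io/v1"  # placeholder, not a real endpoint
TOKEN = "YOUR_API_TOKEN"                      # obtained from the dashboard (assumed)

def set_instance_action(instance_id: str, action: str) -> None:
    """Send a start/shutdown/hibernate action for a single instance (illustrative)."""
    resp = requests.put(
        f"{API_BASE}/instances",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"id": instance_id, "action": action},
        timeout=30,
    )
    resp.raise_for_status()

# e.g. hibernate an instance overnight, then start it again in the morning
set_instance_action("my-instance-id", "hibernate")
set_instance_action("my-instance-id", "start")
```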
Reserve your perfect setup today

GPU clusters tailored to your needs

  • 16x H100 Cluster

    2x 8H100 bare metal systems

    192C/384T AMD Genoa per node

    1,536 GB DDR5 per node

    7.68 TB local NVMe

    3,200 Gbit/s InfiniBand

    100 Gbit/s Ethernet

    Uplink 5 Gbit/s

    Storage
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
  • 32x H100 Cluster

    4x 8H100 bare metal systems

    192C/384T AMD Genoa per node

    1,536 GB DDR5 per node

    7.68 TB local NVMe

    3,200 Gbit/s InfiniBand

    100 Gbit/s Ethernet

    Uplink 5 Gbit/s

    Storage
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
  • 64x H100 Cluster

    8x 8H100 bare metal systems

    192C/384T AMD Genoa per node

    1,536 GB DDR5 per node

    7.68 TB local NVMe

    3,200 Gbit/s InfiniBand

    100 Gbit/s Ethernet

    Uplink 10 Gbit/s

    Storage
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
  • 128x H100 Cluster

    16x 8H100 bare metal systems

    192C/384T AMD Genoa per node

    1,536 GB DDR5 per node

    7.68 TB local NVMe

    3,200 Gbit/s InfiniBand

    100 Gbit/s Ethernet

    Uplink 25 Gbit/s

    Storage
    Tier 1: up to 300 TB at 12 GB/s
    Tier 2: up to 2 PB at 3 GB/s
    Tier 3: up to 10 PB at 1 GB/s
Looking for something different? Contact us