B200 Clusters with InfiniBand™: Self-service at $3.99 per GPU/hr

Instant GPU Clusters

Immediate, self-service access to multi-node clusters for large-scale AI training

B200 SXM6

H200 SXM5

Deploy now
Customers and Partners Who Trust DataCrunch
  • Freepik
  • Black Forest
  • 1X
  • ManifestAI
  • Nex
  • Sony
  • Harvard University
  • NEC
  • Korea University
  • MIT
  • Findable
Scale up for large AI workloads with unmatched speed and flexibility

Instant Clusters

  • Rapid provisioning: access multi-node GPU clusters in minutes instead of days or weeks
  • Short-term contracts: scale your capacity for as little as 1 day without long-term commitments
  • Self-serve access: deploy clusters via the Cloud Dashboard without talking to sales (see the illustrative sketch below)
  • Peak performance: negligible virtualization overhead across compute, networking, and storage
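
For illustration only: a minimal sketch of what scripted self-service provisioning could look like, assuming a purely hypothetical REST endpoint, payload fields, and credential variable. None of this is the real DataCrunch API; the actual self-service flow runs through the Cloud Dashboard, and the official docs describe the real interface.

    # Hypothetical sketch only: the endpoint, payload fields, and credential
    # handling are illustrative assumptions, not the real DataCrunch API.
    import os

    import requests

    API_BASE = "https://api.example.com/v1"      # placeholder base URL (assumption)
    TOKEN = os.environ["EXAMPLE_API_TOKEN"]      # placeholder credential (assumption)

    def provision_cluster(gpu_type: str, nodes: int, hours: int) -> dict:
        """Request a short-term multi-node GPU cluster reservation."""
        payload = {
            "gpu_type": gpu_type,     # e.g. "B200 SXM6" or "H200 SXM5"
            "node_count": nodes,      # each node carries 8 GPUs
            "duration_hours": hours,  # short-term contracts can be as little as 1 day
        }
        resp = requests.post(
            f"{API_BASE}/clusters",
            json=payload,
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        print(provision_cluster("B200 SXM6", nodes=2, hours=24))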
Leverage cutting-edge compute, networking, and storage solutions

Specifications

B200 SXM6

Available now

Each 8x GPU node contains:

  • 1440 GB GPU VRAM
  • 240-core AMD Turin CPU
  • 3200 Gbit/s InfiniBand
  • 100 Gbit/s Ethernet
  • 5 Gbit/s uplink
Deploy now
H200 SXM5

Each 8x GPU node contains:

  • 1128 GB GPU VRAM
  • 176-core AMD Genoa CPU
  • 3200 Gbit/s InfiniBand
  • 100 Gbit/s Ethernet
  • 1 Gbit/s uplink
Deploy now
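
As a quick worked example using the per-node figures above (the four-node cluster size is an arbitrary illustration, not a product tier): GPU count and total VRAM scale linearly with the number of nodes, while the 3200 Gbit/s InfiniBand fabric figure is per node.

    # Worked example from the per-node figures listed above; the 4-node
    # cluster size is an arbitrary illustration, not a product tier.
    GPUS_PER_NODE = 8
    NODE_SPECS = {
        "B200 SXM6": {"vram_gb": 1440, "ib_gbit_s": 3200},
        "H200 SXM5": {"vram_gb": 1128, "ib_gbit_s": 3200},
    }

    def cluster_totals(gpu_type: str, nodes: int) -> dict:
        spec = NODE_SPECS[gpu_type]
        return {
            "gpus": GPUS_PER_NODE * nodes,
            "total_vram_gb": spec["vram_gb"] * nodes,
            "ib_gbit_s_per_node": spec["ib_gbit_s"],
        }

    print(cluster_totals("B200 SXM6", nodes=4))
    # {'gpus': 32, 'total_vram_gb': 5760, 'ib_gbit_s_per_node': 3200}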
Fast and flexible access to multi-node clusters

Pricing

  • B200 SXM6: $3.99 per GPU/hr
  • H200 SXM5: $2.59 per GPU/hr
Deploy now, or check out our docs.
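
As a back-of-the-envelope estimate from the rates above (the node count and duration are illustrative; actual billing terms are in the docs):

    # Back-of-the-envelope estimate from the listed per-GPU hourly rates.
    # Node count and duration are illustrative; see the docs for billing terms.
    GPUS_PER_NODE = 8
    RATE_PER_GPU_HOUR = {"B200 SXM6": 3.99, "H200 SXM5": 2.59}

    def estimated_cost(gpu_type: str, nodes: int, hours: float) -> float:
        return RATE_PER_GPU_HOUR[gpu_type] * GPUS_PER_NODE * nodes * hours

    # One 8x B200 node reserved for a single day:
    print(f"${estimated_cost('B200 SXM6', nodes=1, hours=24):,.2f}")  # $766.08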

Secure and sustainable

Designed for ML engineers

Our clusters offer high uptime and rapid recovery, minimizing disruption from downtime. They are hosted in carbon-neutral data centers: we select locations with excellent renewable energy practices, utilizing sources such as nuclear, hydro, wind, and geothermal.

Dependable performance and affordable high-throughput storage, adhering to the highest security standards.

  • High-speed network

    High-performance servers with up to 3200 Gbit/s RDMA interconnects such as InfiniBand
  • Seamless scaling

    Expand your compute capacity for AI training at short notice and for short periods of time
  • Expert support

    Our engineers specialize in hardware configured for ML and are always available to assist
  • Secure and reliable

    Hosted in GDPR-regulated European countries with ISO 27001 certification. Historical uptime of over 99.9%
  • Cost-effective

    Secure GPU access at up to 90% lower costs than major cloud providers. Long-term plans available
  • 100% renewable energy

    Hosted in efficient Nordic data centers that utilize 100% renewable energy sources

Customer feedback

What they say about us...

  • Having direct contact between our engineering teams enables us to move incredibly fast. Being able to deploy any model at scale is exactly what we need in this fast-moving industry. DataCrunch enables us to deploy custom models quickly and effortlessly.

    Iván de Prado, Head of AI at Freepik
  • From deployment to training, our entire language model journey was powered by DataCrunch's clusters. Their high-performance servers and storage solutions allowed us to run smooth operations with maximum uptime and to focus on achieving exceptional results without worrying about hardware issues.

    José Pombal, AI Research Scientist at Unbabel
  • DataCrunch powers our entire monitoring and security infrastructure with exceptional reliability. We also enforce firewall restrictions to protect against unauthorized access. Thanks to DataCrunch, our training clusters run smoothly and securely.

    Nicola Sosio, ML Engineer at Prem AI
  • Quality of life went up. We don't have to deal with the quirks and preemptions we had on other platforms. Ultimately, a great developer experience is being able to run workloads when and where you need to, without sales delays, needing to contact support, or getting stuck in strange Docker environments.

    Lars Vagnes, Founder & CEO

Meet our team

Our infrastructure team is hands-on with everything from provisioning GPUs to writing the software behind features like instant clusters, which, fun fact, got its first customer after overtime teamwork during a sauna session.

Artem Ikonnikov, Infrastructure Team Lead