
Instant GPU Clusters
Immediate, self-service access to multi-node clusters for large-scale AI training
Instant Clusters
Specifications

H200 SXM5
Each 8x GPU node contains:
Flexible capacity for your use case:
16x, 24x, 32x, 40x, 48x, 56x, or 64x GPUs

B200 SXM6 (coming soon)
Each 8x GPU node contains:
Flexible capacity for your use case:
16x, 24x, 32x, 40x, 48x, 56x, or 64x GPUs
Pricing
| Cluster size | 1 day | 1 week (-3%) | 2 weeks (-6%) | 4 weeks (-10%) |
|---|---|---|---|---|
| 16x H200 | $60.72/h | $58.90/h | $57.08/h | $54.65/h |
| 24x H200 | $91.08/h | $88.35/h | $85.62/h | $81.97/h |
| 32x H200 | $121.44/h | $117.80/h | $114.15/h | $109.30/h |
| 40x H200 | $151.80/h | $147.25/h | $142.69/h | $136.62/h |
| 48x H200 | $182.16/h | $176.70/h | $171.23/h | $163.94/h |
| 56x H200 | $212.52/h | $206.14/h | $199.77/h | $191.27/h |
| 64x H200 | $242.88/h | $235.59/h | $228.31/h | $218.59/h |
| nx B200 | coming soon | coming soon | coming soon | coming soon |
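
The table values follow a simple pattern: the hourly rate scales linearly with GPU count from the 16x H200 base price, and the commitment discounts (3%, 6%, 10%) apply multiplicatively. The sketch below reproduces the table under that assumption; the rate constant and helper function are illustrative, not part of any DataCrunch API.

```python
# Sketch: reproduce the H200 cluster pricing table above.
# Assumes linear scaling from the 16x H200 1-day rate and
# multiplicative commitment discounts (an illustration, not an official API).

BASE_RATE_PER_GPU = 60.72 / 16  # $/GPU/h at the 1-day rate
DISCOUNTS = {"1 day": 0.00, "1 week": 0.03, "2 weeks": 0.06, "4 weeks": 0.10}

def cluster_rate(gpu_count: int, term: str) -> float:
    """Hourly cluster price in USD for a given GPU count and commitment term."""
    return round(gpu_count * BASE_RATE_PER_GPU * (1 - DISCOUNTS[term]), 2)

for gpus in range(16, 65, 8):
    row = [f"${cluster_rate(gpus, term):.2f}/h" for term in DISCOUNTS]
    print(f"{gpus}x H200:", " | ".join(row))
```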

Secure and sustainable
Designed for ML engineers
Our clusters offer high uptime and rapid recovery, minimizing downtime disruptions. Hosted in carbon-neutral data centers, we select locations with excellent renewable energy practices, drawing on sources such as nuclear, hydro, wind, and geothermal.
Dependable performance and affordable high-throughput storage, adhering to the highest security standards.
High-speed network
High-performance servers with up to 3200 Gbps RDMA interconnects, such as InfiniBand.
Seamless scaling
Expand your compute capacity for AI training at short notice and for short periods of time.
Expert support
Our engineers specialize in hardware configured for ML and are always available to assist.
Secure and reliable
Hosted in GDPR-regulated European countries, ISO 27001 certified, with historical uptime of over 99.9%.
Cost-effective
Secure GPU access at up to 90% lower cost than major cloud providers. Long-term plans available.
100% renewable energy
Hosted in efficient Nordic data centers that run on 100% renewable energy sources.
Customer feedback
What they say about us...
Having direct contact between our engineering teams enables us to move incredibly fast. Being able to deploy any model at scale is exactly what we need in this fast-moving industry. DataCrunch enables us to deploy custom models quickly and effortlessly.
Iván de Prado, Head of AI at Freepik
From deployment to training, our entire language model journey was powered by DataCrunch's clusters. Their high-performance servers and storage solutions allowed us to maintain smooth operations and maximum uptime, and to focus on achieving exceptional results without worrying about hardware issues.
José Pombal, AI Research Scientist at Unbabel
DataCrunch powers our entire monitoring and security infrastructure with exceptional reliability. We also enforce firewall restrictions to protect against unauthorized access. Thanks to DataCrunch, our training clusters run smoothly and securely.
Nicola Sosio, ML Engineer at Prem AI