Available October 2024 NVIDIA® H200 clusters

NVIDIA® Instances

Simple & clear. Easy to set up.

H200

The H200 virtual dedicated servers are powered by:

Up to 8 NVIDIA® H200 141GB GPUs, each containing 16896 CUDA cores and 528 Tensor Cores.

This is the current flagship silicon from NVIDIA®, unbeaten in raw performance for AI workloads.

We only deploy the H200 in the SXM5 form-factor, which offers a memory bandwidth of 4.8TB/s and up to 900GB/s P2P bandwidth.

The servers use fourth-generation AMD EPYC (Genoa) CPUs with up to 384 threads and a boost clock of 3.7GHz, a substantial performance uplift over alternatives.

Instance name | GPU model | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price
8H200.141S.352V | H200 SXM5 141GB | 8 | 352 | 1480 | 1128 | 900 GB/s | $28.72/h | $27.57/h | $21.54/h
4H200.141S.176V | H200 SXM5 141GB | 4 | 176 | 740 | 564 | 900 GB/s | $14.36/h | $13.79/h | $10.77/h
2H200.141S.88V | H200 SXM5 141GB | 2 | 88 | 370 | 282 | 900 GB/s | $7.18/h | $6.89/h | $5.38/h
1H200.141S.44V | H200 SXM5 141GB | 1 | 44 | 185 | 141 | / | $3.59/h | $3.45/h | $2.69/h
*Note: Price and discount are fixed after deployment, except for dynamically-priced instances, which update daily.
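As a sanity check, the committed-term discounts can be derived from any row of the table. A quick Python sketch (rates taken from the 8H200.141S.352V row above; the helper is ours, not an official tool):

```python
# Effective discount of the committed terms vs. the on-demand rate,
# using the 8H200.141S.352V hourly prices from the table above.
on_demand, six_month, two_year = 28.72, 27.57, 21.54  # $/h

def discount(base: float, committed: float) -> float:
    """Fractional discount of a committed rate relative to the base rate."""
    return 1 - committed / base

print(f"6-month: {discount(on_demand, six_month):.1%}")  # roughly 4%
print(f"2-year:  {discount(on_demand, two_year):.1%}")   # roughly 25%
```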
H100

The H100 virtual dedicated servers are powered by:

Up to 8 NVIDIA® H100 80GB GPUs, each containing 16896 CUDA cores and 528 Tensor Cores.

Prior to the H200, this was NVIDIA®'s flagship silicon, and it remains near the top in raw performance for AI workloads.

We deploy the SXM5 NVLink module, which offers a memory bandwidth of 3.35TB/s and up to 900GB/s P2P bandwidth.

Fourth-generation AMD EPYC (Genoa) CPUs, up to 384 threads with a boost clock of 3.7GHz.

The name 8H100.80S.176V is composed as follows: 8x H100 SXM5 with 80GB VRAM each, 176 CPU core threads & virtualized.
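The naming scheme described above is regular enough to parse mechanically. A minimal sketch, assuming the count/model/VRAM/threads layout holds for all GPU instances (the helper name and regex are our own, not an official DataCrunch tool):

```python
import re

# Pattern for names like 8H100.80S.176V: GPU count, GPU model,
# optional per-GPU VRAM marker (".80S"), then CPU threads (".176V").
NAME_RE = re.compile(
    r"^(?P<gpus>\d+)(?P<model>[A-Z0-9]+?)"  # GPU count and model, e.g. 8H100
    r"(?:\.(?P<vram>\d+)S)?"                # optional per-GPU VRAM, e.g. .80S
    r"\.(?P<threads>\d+)V$"                 # virtualized CPU threads, e.g. .176V
)

def parse_instance_name(name: str) -> dict:
    """Split an instance name like '8H100.80S.176V' into its components."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized instance name: {name!r}")
    return {
        "gpu_count": int(m.group("gpus")),
        "gpu_model": m.group("model"),
        "vram_gb": int(m.group("vram")) if m.group("vram") else None,
        "cpu_threads": int(m.group("threads")),
    }

print(parse_instance_name("8H100.80S.176V"))
# {'gpu_count': 8, 'gpu_model': 'H100', 'vram_gb': 80, 'cpu_threads': 176}
```

Names without a VRAM marker, such as 8A100.176V, parse the same way with `vram_gb` left as `None`.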

Instance name | GPU model | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price
8H100.80S.176V | H100 SXM5 80GB | 8 | 176 | 1480 | 640 | 900 GB/s | $26.80/h | $25.73/h | $20.10/h
4H100.80S.120V | H100 SXM5 80GB | 4 | 120 | 480 | 320 | 300 GB/s | $13.40/h | $12.86/h | $10.05/h
4H100.80S.88V | H100 SXM5 80GB | 4 | 88 | 740 | 320 | 900 GB/s | $13.40/h | $12.86/h | $10.05/h
2H100.80S.60V | H100 SXM5 80GB | 2 | 60 | 240 | 160 | 300 GB/s | $6.70/h | $6.43/h | $5.03/h
1H100.80S.30V | H100 SXM5 80GB | 1 | 30 | 120 | 80 | / | $3.35/h | $3.22/h | $2.51/h
A100

The A100 virtual dedicated servers are powered by:

Up to 8 NVIDIA® A100 80GB GPUs, each containing 6912 CUDA cores and 432 Tensor Cores.

We only use the SXM4 NVLink module, which offers a memory bandwidth of over 2TB/s and up to 600GB/s P2P bandwidth.

Second-generation AMD EPYC (Rome) CPUs, up to 192 threads with a boost clock of 3.3GHz.

The name 8A100.176V is composed as follows: 8x A100 SXM4, 176 CPU core threads & virtualized.

Instance name | GPU model | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price
8A100.176V | A100 SXM4 80GB | 8 | 176 | 960 | 640 | 600 GB/s | $15.12/h | $14.52/h | $11.34/h
4A100.88V | A100 SXM4 80GB | 4 | 88 | 480 | 320 | 300 GB/s | $7.56/h | $7.26/h | $5.67/h
2A100.44V | A100 SXM4 80GB | 2 | 44 | 240 | 160 | 100 GB/s | $3.78/h | $3.63/h | $2.83/h
1A100.22V | A100 SXM4 80GB | 1 | 22 | 120 | 80 | / | $1.89/h | $1.81/h | $1.42/h
8A100.40S.176V | A100 SXM4 40GB | 8 | 176 | 960 | 320 | / | $10.32/h | $9.91/h | $7.74/h
4A100.40S.88V | A100 SXM4 40GB | 4 | 88 | 480 | 160 | / | $5.16/h | $4.95/h | $3.87/h
2A100.40S.44V | A100 SXM4 40GB | 2 | 44 | 240 | 80 | / | $2.58/h | $2.48/h | $1.94/h
1A100.40S.22V | A100 SXM4 40GB | 1 | 22 | 120 | 40 | / | $1.29/h | $1.24/h | $0.97/h
L40S

The L40S virtual dedicated servers are powered by:

Up to 8 NVIDIA® L40S 48GB GPUs, each containing 18176 CUDA cores and 568 Tensor Cores.

Features 864GB/s memory bandwidth.

Tensor performance: 1466 TFLOPS

RT Core performance: 212 TFLOPS

Single-precision performance: 91.6 TFLOPS

The name 8L40S.160V is composed as follows: 8x L40S, 160 CPU core threads & virtualized.

Instance name | GPU model | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price
8L40S.160V | NVIDIA L40S | 8 | 160 | 480 | 384 | 50 GB/s | $10.86/h | $10.43/h | $8.14/h
4L40S.80V | NVIDIA L40S | 4 | 80 | 240 | 192 | 50 GB/s | $5.43/h | $5.21/h | $4.07/h
2L40S.40V | NVIDIA L40S | 2 | 40 | 120 | 96 | 50 GB/s | $2.72/h | $2.61/h | $2.04/h
1L40S.20V | NVIDIA L40S | 1 | 20 | 60 | 48 | / | $1.36/h | $1.31/h | $1.02/h
RTX6000 ADA

The RTX6000 ADA virtual dedicated servers are powered by:

Up to 8 NVIDIA® RTX6000 Ada [2022] GPUs, each containing 18176 CUDA cores, 568 fourth-generation Tensor Cores and 142 RT Cores.

Features 48GB of GDDR6 memory for working with the largest 3D models, rendered images, simulations and AI datasets.

The name 8RTX6000ADA.80V is composed as follows: 8x RTX6000ADA, 80 CPU core threads & virtualized.

Instance name | GPU model | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price
8RTX6000ADA.80V | NVIDIA RTX6000 Ada 48GB | 8 | 80 | 480 | 384 | 50 GB/s | $9.52/h | $9.14/h | $7.14/h
4RTX6000ADA.40V | NVIDIA RTX6000 Ada 48GB | 4 | 40 | 240 | 192 | 50 GB/s | $4.76/h | $4.57/h | $3.57/h
2RTX6000ADA.20V | NVIDIA RTX6000 Ada 48GB | 2 | 20 | 120 | 96 | 50 GB/s | $2.38/h | $2.28/h | $1.78/h
1RTX6000ADA.10V | NVIDIA RTX6000 Ada 48GB | 1 | 10 | 60 | 48 | / | $1.19/h | $1.14/h | $0.89/h
RTX A6000

The RTX A6000 virtual dedicated servers are powered by:

Up to 8 NVIDIA® RTX A6000 [2021] GPUs, each containing 10752 CUDA cores, 336 Tensor Cores and 84 RT Cores.

Despite having fewer Tensor Cores than the V100, it processes tensor operations faster thanks to its newer architecture.

Second-generation AMD EPYC (Rome) CPUs, up to 96 threads with a boost clock of 3.35GHz.

PCIe Gen4 for faster interactions between GPUs.

The name 8A6000.80V is composed as follows: 8x RTX A6000, 80 CPU core threads & virtualized.

Instance name | GPU model | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price
8A6000.80V | NVIDIA RTX A6000 48GB | 8 | 80 | 480 | 384 | 50 GB/s | $8.06/h | $7.74/h | $6.04/h
4A6000.40V | NVIDIA RTX A6000 48GB | 4 | 40 | 240 | 192 | 50 GB/s | $4.03/h | $3.87/h | $3.02/h
2A6000.20V | NVIDIA RTX A6000 48GB | 2 | 20 | 120 | 96 | 50 GB/s | $2.01/h | $1.93/h | $1.51/h
1A6000.10V | NVIDIA RTX A6000 48GB | 1 | 10 | 60 | 48 | / | $1.01/h | $0.97/h | $0.76/h
V100

The V100 virtual dedicated servers are powered by:

Up to 8 NVIDIA® Tesla V100 GPUs, each containing 5120 CUDA cores and 640 Tensor Cores.

Second-generation Intel Xeon Scalable 4214R CPUs [2020], up to 48 threads with a boost clock of 3.5GHz.

NVLink for high-bandwidth P2P communication.

The name 4V100.20V is composed as follows: 4x V100, 20 CPU core threads & virtualized.

Instance name | GPU model | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P | On-demand price | 6-month price | 2-year price
8V100.48V | NVIDIA Tesla V100 16GB | 8 | 48 | 180 | 128 | NVLink up to 50 GB/s | $3.12/h | $3.00/h | $2.34/h
4V100.20V | NVIDIA Tesla V100 16GB | 4 | 20 | 90 | 64 | NVLink up to 50 GB/s | $1.56/h | $1.50/h | $1.17/h
2V100.10V | NVIDIA Tesla V100 16GB | 2 | 10 | 45 | 32 | NVLink up to 50 GB/s | $0.78/h | $0.75/h | $0.58/h
1V100.6V | NVIDIA Tesla V100 16GB | 1 | 6 | 23 | 16 | / | $0.39/h | $0.37/h | $0.29/h
CPU

The CPU virtual dedicated servers are powered by:

Second- or third-generation AMD EPYC CPUs (Rome or Milan).

All hardware is dedicated to your server for the best performance.

The name CPU.32V.128G indicates the server runs on 32 virtualized CPU threads with 128GB of RAM.

Instance name | CPU model | CPU threads | RAM (GB) | P2P | On-demand price | 6-month price | 2-year price
CPU.360V.1440G | AMD EPYC | 360 | 1440 | / | $3.60/h | $3.46/h | $2.70/h
CPU.180V.720G | AMD EPYC | 180 | 720 | / | $1.80/h | $1.73/h | $1.35/h
CPU.120V.480G | AMD EPYC | 120 | 480 | / | $1.20/h | $1.15/h | $0.90/h
CPU.96V.384G | AMD EPYC | 96 | 384 | / | $0.96/h | $0.92/h | $0.72/h
CPU.64V.256G | AMD EPYC | 64 | 256 | / | $0.64/h | $0.61/h | $0.48/h
CPU.32V.128G | AMD EPYC | 32 | 128 | / | $0.32/h | $0.31/h | $0.24/h
CPU.16V.64G | AMD EPYC | 16 | 64 | / | $0.16/h | $0.15/h | $0.12/h
CPU.8V.32G | AMD EPYC | 8 | 32 | / | $0.08/h | $0.08/h | $0.06/h
CPU.4V.16G | AMD EPYC | 4 | 16 | / | $0.04/h | $0.04/h | $0.03/h
Storage


Our instances run on a network storage cluster. This cluster keeps your data in three copies at all times, ensuring redundancy in the event of hardware failure.

Our NVMe cluster offers high IOPS and excellent sustained bandwidth, while the HDD cluster is ideal for larger datasets. By default, volume sizes are limited, but the limits can be increased on demand.

Type | Continuous bandwidth (MB/s) | Burst bandwidth (MB/s) | IOPS | Internal network speed (Gbit/s) | Price ($/GB/month)
NVMe | 2000 | 2500 | 100k | 50 | 0.20
HDD | 250 | 2000 | 300 | 50 | 0.05
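Monthly storage cost is simply volume size times the per-GB rate. A small sketch using the rates above ($0.20/GB for NVMe, $0.05/GB for HDD); the helper is illustrative, not part of any official tooling:

```python
# Per-GB monthly rates from the storage table above.
NVME_PER_GB_MONTH = 0.20  # $/GB/month
HDD_PER_GB_MONTH = 0.05   # $/GB/month

def monthly_storage_cost(nvme_gb: float = 0, hdd_gb: float = 0) -> float:
    """Combined monthly cost in USD for NVMe and HDD volumes."""
    return nvme_gb * NVME_PER_GB_MONTH + hdd_gb * HDD_PER_GB_MONTH

# 500 GB NVMe scratch volume plus a 4 TB HDD dataset volume:
print(monthly_storage_cost(nvme_gb=500, hdd_gb=4000))  # roughly 300.0
```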

NEW FEATURE

Dynamic pricing

Save up to 49% on GPU instances

This transparent pricing model adjusts daily based on market demand, providing a flexible and cost-effective alternative to fixed pricing for cloud GPU instances.

Users can optimize expenses by benefiting from lower prices during periods of low demand while enjoying predictable daily adjustments.
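The trade-off can be estimated ahead of time. A minimal sketch comparing a dynamically-priced instance against a fixed on-demand rate; the daily rates below are made-up numbers for illustration, not actual DataCrunch prices:

```python
# Fixed on-demand rate for comparison (the 1H100.80S.30V rate above).
ON_DEMAND_RATE = 3.35  # $/h

# Hypothetical dynamic rates, one per day, updated daily by the platform.
daily_dynamic_rates = [2.10, 1.95, 2.40, 1.71, 2.05]  # $/h

HOURS_PER_DAY = 24
dynamic_cost = sum(rate * HOURS_PER_DAY for rate in daily_dynamic_rates)
fixed_cost = ON_DEMAND_RATE * HOURS_PER_DAY * len(daily_dynamic_rates)

savings = 1 - dynamic_cost / fixed_cost
print(f"dynamic: ${dynamic_cost:.2f}, fixed: ${fixed_cost:.2f}, saved {savings:.0%}")
```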

Pricing comparison

DataCrunch Instances

Where speed meets simplicity in GPU solutions
Fast: Dedicated hardware for maximum speed and security
Productive: Start, stop, and hibernate instantly via the dashboard or API
Expert support: Engineers are available via chat on the dashboard
Protected: DataCrunch is ISO 27001 certified