
NVidia® Instances

Simple & clear. Easy to set up.

The A100 virtual dedicated servers are powered by:

Up to 8 NVidia® A100 80GB GPUs, each containing 6912 CUDA cores and 432 Tensor Cores.

This is the current flagship silicon from NVidia®, unbeaten in raw performance for AI operations.

We only use the SXM4 (NVLink) module, which offers a memory bandwidth of over 2 TB/s and up to 600 GB/s of P2P bandwidth; a quick peer-access check is sketched below.

Second-generation AMD EPYC (Rome) CPUs, up to 192 threads with a boost clock of 3.3 GHz.

The name 8A100.176V is composed as follows: 8x A100 GPUs, 176 CPU threads, and V for virtualized.
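
Peer-to-peer connectivity between the GPUs can be confirmed from inside a running instance. Below is a minimal sketch, assuming PyTorch with CUDA support is installed (not part of the base image described here); it reports which devices can access each other directly, it does not measure bandwidth:

import torch

def print_peer_access():
    # List each GPU and the peers it can reach directly (e.g. over NVLink or PCIe).
    n = torch.cuda.device_count()
    for i in range(n):
        peers = [j for j in range(n)
                 if j != i and torch.cuda.can_device_access_peer(i, j)]
        print(f"GPU {i} ({torch.cuda.get_device_name(i)}): direct peer access to {peers}")

if torch.cuda.is_available():
    print_peer_access()
else:
    print("No CUDA devices visible.")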

INSTANCE NAME | GPU MODEL | GPU | CPU | RAM | VRAM | P2P | PRICE ON DEMAND | 6-MONTH PRICE | 2-YEAR PRICE
*Note: Once you deploy your instance, the price and discount are fixed and not subject to future changes.

The RTX A6000 virtual dedicated servers are powered by:

Up to 8 NVidia® RTX A6000 [2021] GPUs, each containing 10752 CUDA cores, 336 Tensor Cores and 84 RT Cores.

Despite having fewer Tensor Cores than the V100, it processes tensor operations faster thanks to its newer architecture.

Second-generation AMD EPYC (Rome) CPUs, up to 96 threads with a boost clock of 3.35 GHz.

PCIe Gen4 for faster communication between GPUs.

The name 8A6000.80V is composed as follows: 8x RTX A6000 GPUs, 80 CPU threads, and V for virtualized.

INSTANCE NAME | GPU MODEL | GPU | CPU | RAM | VRAM | P2P | PRICE ON DEMAND | 6-MONTH PRICE | 2-YEAR PRICE
*Note: Once you deploy your instance, the price and discount are fixed and not subject to future changes.

The V100 virtual dedicated servers are powered by:

Up to 8 NVidia® Tesla V100 GPUs, each containing 5120 CUDA cores and 640 Tensor Cores.

Second-generation Intel Xeon Scalable 4214R CPUs [2020], up to 48 threads with a boost clock of 3.5 GHz.

NVLink for high-bandwidth P2P communication.

The name 4V100.20V is composed as follows: 4x V100 GPUs, 20 CPU threads, and V for virtualized.

INSTANCE NAME | GPU MODEL | GPU | CPU | RAM | VRAM | P2P | PRICE ON DEMAND | 6-MONTH PRICE | 2-YEAR PRICE
*Note: Once you deploy your instance, the price and discount are fixed and not subject to future changes.

The CPU virtual dedicated servers are powered by:

Second- or third-generation AMD EPYC CPUs (Rome or Milan).

All hardware is dedicated to your server for the best performance.

The name CPU.32V indicates the server runs on 32 virtualized CPU threads.
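
The same naming convention is used for every instance type above: an optional GPU count and model (or CPU for CPU-only servers), followed by the number of virtualized CPU threads and a trailing V. A minimal, purely illustrative parser for these names:

import re

# Name format, per this page: [<GPU count><GPU model> | CPU] "." <CPU threads> "V"
NAME_RE = re.compile(r"^(?:(\d+)([A-Za-z0-9]+)|CPU)\.(\d+)V$")

def parse_instance_name(name):
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized instance name: {name}")
    gpu_count, gpu_model, threads = m.groups()
    return {
        "gpu_count": int(gpu_count) if gpu_count else 0,
        "gpu_model": gpu_model,        # None for CPU-only instances
        "cpu_threads": int(threads),
        "virtualized": True,           # trailing "V"
    }

for n in ("8A100.176V", "8A6000.80V", "4V100.20V", "CPU.32V"):
    print(n, parse_instance_name(n))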

INSTANCE NAME | GPU MODEL | GPU | CPU | RAM | VRAM | P2P | PRICE ON DEMAND | 6-MONTH PRICE | 2-YEAR PRICE
*Note: Once you deploy your instance, the price and discount are fixed and not subject to future changes.

Storage

Our instances run on a network storage cluster that keeps your data in three copies at all times, ensuring redundancy in the event of hardware failure.

Our NVME cluster offers high IOPS and excellent continuous bandwidth, while the HDD cluster is ideal for larger datasets. By default, volume sizes are limited; the limits can be increased on demand.

Type | Continuous Bandwidth [MB/s] | Burst Bandwidth [MB/s] | IOPS | Internal Network Speed [Gbit/s] | Simultaneous Copies | Price [$/GB/month]
NVME | 2000 | 2500 | 100k | 50 | 3 | 0.2
HDD | 250 | 2000 | 300 | 50 | 3 | 0.05
*Note: Once you deploy your instance, the price and discount are fixed and not subject to future changes.
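
As a worked example of the per-GB pricing above, the sketch below computes the monthly cost of a volume (prices taken from the table; the helper name is illustrative):

PRICE_PER_GB_MONTH = {"NVME": 0.20, "HDD": 0.05}  # $/GB/month, from the table above

def monthly_storage_cost(volume_gb, storage_type):
    return volume_gb * PRICE_PER_GB_MONTH[storage_type]

# A 1 TB (1000 GB) volume:
print(monthly_storage_cost(1000, "NVME"))  # 200.0 $/month
print(monthly_storage_cost(1000, "HDD"))   # 50.0 $/month
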
The DataCrunch servers run in an ISO27001 certified datacenter facility and are owned and operated solely by DataCrunch. All DataCrunch operations are ISO27001 certified.
The facility offers redundant power, and your data is secured by storing it in three copies at all times.
The servers are dedicated: the hardware is allocated to your machine and your machine only.
Hardware acceleration support results in bare-metal performance.