NVIDIA® H200 Instances and Clusters Available
A100-SXM
Sign up and deploy within minutes

On-demand NVIDIA® A100 SXM 80GB and 40GB

Dynamic-price instances from $1.24/h*

Deploy now
Partners who trust our services:
  • Freepik
  • Black Forest
  • ManifestAI
  • Nex
  • Sony
  • Harvard University
  • NEC
  • Korea University
  • MIT
  • Findable

NEW FEATURE

Dynamic pricing

Save up to 49% on GPU instances

This transparent pricing model adjusts daily based on market demand, providing a flexible and cost-effective alternative to fixed pricing for cloud GPU instances.

Users can optimize expenses by benefiting from lower prices during periods of low demand while enjoying predictable daily adjustments.
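As an illustration of the mechanism only, the sketch below compares a week of dynamically priced usage against a fixed Pay As You Go rate. The demand multipliers and the `dynamic_rate` scaling rule are hypothetical and are not DataCrunch's actual pricing formula:

```python
# Hypothetical comparison of dynamic vs. fixed hourly pricing.
# Rates and demand factors are made up for illustration.

FIXED_RATE = 1.65          # $/h, example Pay As You Go rate
BASE_DYNAMIC_RATE = 1.65   # $/h, example baseline for the dynamic rate

def dynamic_rate(base: float, demand_factor: float) -> float:
    """Daily rate scaled by market demand (1.0 = average demand)."""
    return round(base * demand_factor, 2)

# A hypothetical 7-day window of demand multipliers.
weekly_demand = [1.0, 0.9, 0.7, 0.6, 0.8, 1.0, 1.1]

HOURS_PER_DAY = 24
dynamic_cost = sum(dynamic_rate(BASE_DYNAMIC_RATE, d) * HOURS_PER_DAY
                   for d in weekly_demand)
fixed_cost = FIXED_RATE * HOURS_PER_DAY * len(weekly_demand)

savings_pct = 100 * (1 - dynamic_cost / fixed_cost)
print(f"fixed:   ${fixed_cost:.2f}")
print(f"dynamic: ${dynamic_cost:.2f}")
print(f"savings: {savings_pct:.1f}%")
```

In low-demand weeks like this one, the daily-adjusted rate undercuts the fixed rate; in high-demand weeks it can exceed it, which is the trade-off the model makes explicit.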

Pricing comparison

A100-SXM

A100 SXM 80GB and 40GB instances

At the forefront of digital intelligence

Our servers exclusively use the SXM4 NVLink module, which offers over 2 TB/s of memory bandwidth and up to 600 GB/s of P2P bandwidth.

A100 80GB reaches up to 1.3 TB of unified memory per node and delivers up to a 3X throughput increase over A100 40GB.

80GB

$1.24/h for 2-year contract
$1.65/h Pay As You Go instance

40GB

$0.97/h for 2-year contract
$1.29/h Pay As You Go instance
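As a quick sanity check on the rates above, the discount a 2-year contract gives over the Pay As You Go rate can be computed directly (illustrative arithmetic only, using the prices listed):

```python
# Discount implied by the listed contract vs. Pay As You Go rates.
def discount(payg: float, contract: float) -> float:
    """Percentage saved by the contract rate relative to the PAYG rate."""
    return round(100 * (payg - contract) / payg, 1)

print(discount(1.65, 1.24))  # 80GB: 2-year contract vs. PAYG -> 24.8
print(discount(1.29, 0.97))  # 40GB: 2-year contract vs. PAYG -> 24.8
```

Both tiers work out to roughly a 25% saving for the 2-year commitment.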

80GB vs 40GB

Push the limits of compute
A100 80GB reaches up to 1.3 TB of unified memory per node and delivers up to a 3X throughput increase over A100 40GB.

GPU model       | Memory type | Memory clock speed | Memory bandwidth
A100 SXM4 80GB  | HBM2e       | 3.2 Gbps           | 2,039 GB/s
A100 SXM4 40GB  | HBM2        | 2.4 Gbps           | 1,555 GB/s
A100

A100 virtual dedicated servers are powered by:

Up to 8 NVIDIA® A100 80GB GPUs, each containing 6912 CUDA cores and 432 Tensor Cores.

We only use the SXM4 NVLink module, which offers over 2 TB/s of memory bandwidth and up to 600 GB/s of P2P bandwidth.

Second-generation AMD EPYC (Rome) CPUs, with up to 192 threads and a boost clock of 3.3 GHz.

The name 8A100.176V is composed as follows: 8x NVIDIA A100 GPUs, 176 CPU core threads, and virtualized.

Instance name  | GPU model      | GPUs | CPU threads | RAM (GB) | VRAM (GB) | P2P      | On-demand price | 6-month price | 2-year price
8A100.176V     | A100 SXM4 80GB | 8    | 176         | 960      | 640       | 600 GB/s | $13.20/h        | $12.67/h      | $9.90/h
4A100.88V      | A100 SXM4 80GB | 4    | 88          | 480      | 320       | 300 GB/s | $6.60/h         | $6.34/h       | $4.95/h
2A100.44V      | A100 SXM4 80GB | 2    | 44          | 240      | 160       | 100 GB/s | $3.30/h         | $3.17/h       | $2.47/h
1A100.22V      | A100 SXM4 80GB | 1    | 22          | 120      | 80        | /        | $1.65/h         | $1.58/h       | $1.24/h
8A100.40S.176V | A100 SXM4 40GB | 8    | 176         | 960      | 320       | /        | $10.32/h        | $9.91/h       | $7.74/h
4A100.40S.88V  | A100 SXM4 40GB | 4    | 88          | 480      | 160       | /        | $5.16/h         | $4.95/h       | $3.87/h
2A100.40S.44V  | A100 SXM4 40GB | 2    | 44          | 240      | 80        | /        | $2.58/h         | $2.48/h       | $1.94/h
1A100.40S.22V  | A100 SXM4 40GB | 1    | 22          | 120      | 40        | /        | $1.29/h         | $1.24/h       | $0.97/h
*Note: Price and discount are fixed after deployment, except for dynamically-priced instances, which update daily.
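Assuming every instance name in the table follows the same convention described above (GPU count, GPU model, an optional `40S` marker for the 40GB SXM variant, CPU thread count, and a trailing `V` for virtualized), the names can be parsed mechanically. The regex below is inferred from the listed names and is not an official specification:

```python
import re

# Parse instance names like "8A100.176V" or "4A100.40S.88V" per the
# naming convention described above. The pattern is an assumption
# inferred from the table, not an official API.
NAME_RE = re.compile(r"^(\d+)([A-Z]+\d+)(?:\.(40S))?\.(\d+)(V)$")

def parse_instance(name: str) -> dict:
    m = NAME_RE.match(name)
    if not m:
        raise ValueError(f"unrecognized instance name: {name}")
    count, model, variant, threads, virt = m.groups()
    return {
        "gpu_count": int(count),
        "gpu_model": model + (" 40GB" if variant else " 80GB"),
        "cpu_threads": int(threads),
        "virtualized": virt == "V",
    }

print(parse_instance("8A100.176V"))
print(parse_instance("1A100.40S.22V"))
```

For example, `8A100.176V` parses to 8 GPUs, the A100 80GB model, and 176 CPU threads, matching the first table row.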

DataCrunch Cloud

Where speed meets simplicity in GPU solutions
  • Fast: dedicated hardware for maximum speed and security
  • Productive: start, stop, and hibernate instantly via the dashboard or API
  • Expert support: engineers are available via chat on the dashboard
  • Protected: DataCrunch is ISO 27001 certified

Reserve your perfect setup today

GPU clusters tailored to your needs

Looking for something different? Contact us