Cloud GPU Pricing Comparison in 2024

If you’re looking to deploy virtual machines for your next AI training or inference project, you may have noticed how the cloud GPU market is a wild place at the moment. 🤠

Both hyperscalers and new AI-focused startups compete fiercely to get access to the latest premium cloud GPUs. The cost of GPU instances and clusters varies greatly between different service providers. It’s not easy to get a clear view of your options. That’s why we’re here to help.

In this article, we go through some of the common options for deploying cloud GPU instances, including Google Cloud Platform, Amazon AWS, and Microsoft Azure. We also assess three of the best-known cloud computing providers—OVH, Scaleway, and Paperspace—and introduce you to DataCrunch as a new option to consider.

Cloud GPU Hyperscaler Pricing

Many AI developers look first to hyperscalers such as Google or Amazon when starting their cloud computing journey. This option makes sense because many people may have worked with a preferred cloud computing platform in the past and know how it works. It is also not uncommon for the big providers to offer large amounts of free credits for startups to use on AI-related projects. Congrats if you got your credits—here is how you can use them!

Google Cloud Platform GPU Pricing

The Google Cloud Platform offers virtual machines for machine learning use cases through its Compute Engine service. Each GPU configuration that GCP offers maps to an internal machine type name, such as a3-highgpu-8g. You also have multiple international data center locations to choose from.

| GPU | Memory per GPU | Machine Type | Region | Price per hour |
| --- | --- | --- | --- | --- |
| H100 | 80 GB | a3-highgpu-1g | us-central1 | Not available |
| H100 x8 | 80 GB | a3-highgpu-8g | europe-west4 | $88.49 |
| A100 | 80 GB | a2-ultragpu-1g | europe-west4 | $5.58 |
| A100 x2 | 80 GB | a2-ultragpu-2g | europe-west4 | $11.12 |
| A100 x4 | 80 GB | a2-ultragpu-4g | europe-west4 | $22.32 |
| A100 x8 | 80 GB | a2-ultragpu-8g | europe-west4 | $44.65 |
| A100 | 40 GB | a2-highgpu-1g | europe-west4 | $3.78 |
| A100 x2 | 40 GB | a2-highgpu-2g | europe-west4 | $7.49 |
| A100 x4 | 40 GB | a2-highgpu-4g | europe-west4 | $14.99 |
| A100 x8 | 40 GB | a2-highgpu-8g | europe-west4 | $29.98 |

Currently, the NVIDIA H100 is only offered by GCP as an 8-GPU instance (a3-highgpu-8g) in the europe-west4 region for $88.49 per hour. The Google Cloud Platform has many different options to choose from for both the 80 GB and 40 GB versions of the A100.

Google also offers older cloud GPU options, including the NVIDIA V100 Tensor Core GPU at $2.48 per GPU per hour for instances of up to 8x V100s. In a direct comparison, GCP was a bit more affordable than Amazon AWS and Microsoft Azure.

| Machine Type | GPU | Total GPU Memory | Region | Price per hour |
| --- | --- | --- | --- | --- |
| nvidia-tesla-v100-1g | V100 | 16 GB | europe-west4-b | $2.48 |
| nvidia-tesla-v100-2g | V100 x2 | 32 GB | europe-west4-b | $4.96 |
| nvidia-tesla-v100-4g | V100 x4 | 64 GB | europe-west4-b | $9.92 |
| nvidia-tesla-v100-8g | V100 x8 | 128 GB | europe-west4-b | $19.84 |

For more information and the latest costs, see GCP Compute Pricing.
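
To compare the multi-GPU instances on an equal footing, it helps to normalize the instance prices above to a per-GPU-hour rate and a rough monthly cost. Here is a minimal Python sketch of that arithmetic, using the 8-GPU prices from the tables above; the assumption of an always-on instance at roughly 730 hours per month is ours, not a GCP billing figure.

```python
# A minimal sketch (not a GCP tool): normalize the hourly instance prices
# quoted above to a per-GPU rate and estimate a rough monthly cost for an
# always-on instance. Prices are the on-demand figures from the tables.

GCP_ON_DEMAND = {
    # machine type: (GPU count, hourly price in USD)
    "a3-highgpu-8g": (8, 88.49),          # 8x H100 80GB
    "a2-ultragpu-8g": (8, 44.65),         # 8x A100 80GB
    "a2-highgpu-8g": (8, 29.98),          # 8x A100 40GB
    "nvidia-tesla-v100-8g": (8, 19.84),   # 8x V100 16GB
}

HOURS_PER_MONTH = 730  # assumption: average hours in a month, running 24/7

for machine_type, (gpu_count, hourly_price) in GCP_ON_DEMAND.items():
    per_gpu = hourly_price / gpu_count
    monthly = hourly_price * HOURS_PER_MONTH
    print(f"{machine_type}: ${per_gpu:.2f}/GPU-hour, ~${monthly:,.0f}/month on-demand")
```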

Amazon AWS Cloud GPU Pricing

Amazon offers a number of cloud GPUs through AWS EC2. Each virtual machine follows Amazon’s naming convention for instance types, e.g., “p5.48xlarge.” This AWS instance naming convention can take some time to decipher, so here is a quick cheat sheet of the different options.

| Model | GPU | Total GPU Memory | Region | Price per hour |
| --- | --- | --- | --- | --- |
| p5e.48xlarge | H200 x8 | 1128 GB | Not available | N/A |
| p5.48xlarge | H100 x8 | 640 GB | us-east-1 | $98.32 |
| p4d.24xlarge | A100 40GB x8 | 320 GB | us-east-1 | $32.77 |
| p4de.24xlarge | A100 80GB x8 | 640 GB | Not available | N/A |
| p3.2xlarge | V100 x1 | 16 GB | us-east-1 | $3.06 |
| p3.8xlarge | V100 x4 | 64 GB | us-east-1 | $12.24 |
| p3.16xlarge | V100 x8 | 128 GB | us-east-1 | $24.48 |

The H100 is currently offered only in 8-GPU instances at a price of $98.32 per hour. AWS only has the 40 GB version of the A100 available, at $32.77 per hour for an 8x GPU instance.

Amazon has multiple options on offer for the V100, from 1x to 8x GPU instances across various AWS EC2 regions, starting at $3.06 per GPU per hour or $24.48 for an 8xV100 instance.

Although AWS lists instance types for them, the H200 and the 80 GB A100 are not currently available on demand.
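
If you prefer the cheat sheet in code form, here is a small Python sketch (our own illustration, not an AWS tool) that maps the instance types from the table above to their GPU configurations.

```python
# A small lookup table encoding the AWS GPU instance cheat sheet above.
# The entries mirror this article's table; check the AWS EC2 documentation
# for the authoritative and current list.

AWS_GPU_INSTANCES = {
    "p5e.48xlarge": {"gpu": "H200", "count": 8, "total_gpu_memory_gb": 1128},
    "p5.48xlarge": {"gpu": "H100", "count": 8, "total_gpu_memory_gb": 640},
    "p4d.24xlarge": {"gpu": "A100 40GB", "count": 8, "total_gpu_memory_gb": 320},
    "p4de.24xlarge": {"gpu": "A100 80GB", "count": 8, "total_gpu_memory_gb": 640},
    "p3.2xlarge": {"gpu": "V100", "count": 1, "total_gpu_memory_gb": 16},
    "p3.8xlarge": {"gpu": "V100", "count": 4, "total_gpu_memory_gb": 64},
    "p3.16xlarge": {"gpu": "V100", "count": 8, "total_gpu_memory_gb": 128},
}

def describe(instance_type: str) -> str:
    """Return a human-readable GPU summary for a known instance type."""
    spec = AWS_GPU_INSTANCES.get(instance_type)
    if spec is None:
        return f"{instance_type}: not in this cheat sheet"
    return (f"{instance_type}: {spec['count']}x {spec['gpu']} "
            f"({spec['total_gpu_memory_gb']} GB total GPU memory)")

print(describe("p5.48xlarge"))  # p5.48xlarge: 8x H100 (640 GB total GPU memory)
```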

Microsoft Azure Cloud GPU Pricing

Microsoft Azure offers a number of high-performance cloud GPUs across different international data center locations.

| Instance Name | GPU | Total GPU Memory | Region | Price per hour |
| --- | --- | --- | --- | --- |
| NC40ads H100 v5 | H100 | 80 GB | East US | $6.98 |
| NC80adis H100 v5 | H100 x2 | 160 GB | East US | $13.96 |
| NC24ads A100 v4 | A100 80GB | 80 GB | East US | $3.67 |
| NC48ads A100 v4 | A100 80GB x2 | 160 GB | East US | $7.35 |
| NC96ads A100 v4 | A100 80GB x4 | 320 GB | East US | $14.69 |
| NC6s v3 | V100 | 32 GB | East US | $3.06 |
| NC12s v3 | V100 x2 | 64 GB | East US | $6.12 |
| NC24rs v3 | V100 x4 | 128 GB | East US | $12.24 |

Currently, the H100 is offered at $6.98 per hour for a single-GPU instance, and the A100 comes with a number of options ranging from $3.67 per hour for a single GPU to $14.69 for a 4xA100 instance.

For the V100, there are also several options to choose from, ranging from $3.06 per hour for a single GPU to $12.24 for a 4xV100 instance.

In addition to Windows, you can run Ubuntu and other Linux operating systems on your Azure virtual machine instances. For up-to-date pricing information on Microsoft Azure, see Linux Virtual Machine Pricing.

GCP vs. Amazon AWS vs. Azure GPU Cost Comparison

It is not easy to compare Amazon AWS, Google Cloud Platform, and Microsoft Azure cloud GPU prices exactly because there is little overlap in their GPU instance offerings. All three providers do currently offer the V100 at similar prices, with Amazon AWS and Azure charging the exact same $3.06 per hour rate and GCP coming in slightly lower at $2.48.

[Chart: AWS vs. GCP vs. Azure cloud GPU cost comparison]

What are your other options?

In addition to Google, Amazon, and Microsoft, there are other options to consider on the hyperscaler front. Enterprise IT companies like Oracle, IBM, and HP all offer some level of cloud computing capacity. To get an accurate understanding of their pricing, you may need to contact them directly.

If you’re simply looking for a cost-effective option, you’re not likely to find the best deal from Amazon, Google, Microsoft, or other hyperscalers. ML-focused cloud GPU providers like DataCrunch offer the same high-performance GPUs for up to 8x less. For example, the current price for the V100 on the DataCrunch Cloud Platform is just $0.39 per hour, and the H100 is available for $3.35 per hour.
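
As a rough sanity check on that claim, here is a short Python sketch that divides the hyperscalers' multi-GPU instance prices from the tables above by their GPU counts and compares the resulting per-GPU-hour rates with the DataCrunch figures quoted in this article.

```python
# A rough sanity check using the hourly figures quoted in this article.
# Multi-GPU instance prices are divided by GPU count to get a $/GPU-hour rate.

per_gpu_hour = {
    "V100": {"GCP": 2.48, "AWS": 3.06, "Azure": 3.06, "DataCrunch": 0.39},
    "H100": {
        "GCP": 88.49 / 8,   # a3-highgpu-8g, 8x H100
        "AWS": 98.32 / 8,   # p5.48xlarge, 8x H100
        "Azure": 6.98,      # NC40ads H100 v5, 1x H100
        "DataCrunch": 3.35,
    },
}

for gpu, prices in per_gpu_hour.items():
    baseline = prices["DataCrunch"]
    for provider, price in prices.items():
        if provider != "DataCrunch":
            print(f"{gpu}: {provider} ${price:.2f}/GPU-hour "
                  f"is {price / baseline:.1f}x the DataCrunch rate")
```

With these figures, the AWS and Azure V100 rates work out to roughly 7.8x the DataCrunch price, which is where the "up to 8x" figure comes from.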

Cloud Computing Providers

If you’re looking beyond the hyperscalers, there are a number of established cloud computing and hosting providers that offer GPU virtual machines. Three of the most common options to consider are OVH, Paperspace, and Scaleway.

OVHcloud Cloud GPU Pricing

OVHcloud is Europe’s largest cloud computing provider. Headquartered in France, OVH serves customers across the globe, with data centers in multiple international locations. In addition to various CPU VM instances, OVHcloud offers on-demand instances of popular cloud GPUs.

| Instance Name | GPU | Total GPU Memory | Price per hour |
| --- | --- | --- | --- |
| h100-380 | H100 | 80 GB | $2.99 |
| h100-760 | H100 x2 | 160 GB | $5.98 |
| h100-1520 | H100 x4 | 320 GB | $11.97 |
| a100-180 | A100 | 80 GB | $3.07 |
| a100-360 | A100 x2 | 160 GB | $6.15 |
| a100-720 | A100 x4 | 320 GB | $12.29 |
| t1-45 | V100 | 16 GB | $1.97 |
| t1-90 | V100 x2 | 32 GB | $3.94 |
| t1-180 | V100 x4 | 64 GB | $7.89 |
| l40s-90 | L40S | 48 GB | $1.80 |
| l40s-180 | L40S x2 | 96 GB | $3.60 |
| l40s-360 | L40S x4 | 192 GB | $7.20 |

OVHcloud offers high-performance NVIDIA GPU instances in a number of different configurations. It is the only provider in this comparison to offer the H100 at a lower hourly price point than the A100.

OVHcloud also has multiple V100 and L40S instance sizes available, in addition to dedicated servers and data hosting services.

Up-to-date pricing from OVH can be found here.

Paperspace Cloud GPU Pricing

Paperspace, owned by the web hosting company DigitalOcean, is a major cloud computing platform focused on deploying machine learning models.

Paperspace offers Multi-GPU instances of the H100, A100 and V100. In addition, they offer various RTX-based GPUs on demand, including the RTX A6000, RTX 5000 and RTX 4000.

The current hourly price for an H100 GPU instance from Paperspace is $5.95, but they do offer discounts on multi-year commitments.

| GPU | Total GPU Memory | Price per hour |
| --- | --- | --- |
| H100 | 80 GB | $5.95 |
| H100 x8 | 640 GB | $47.60 |
| A100 x8 | 640 GB | $25.44 |
| V100 | 32 GB | $2.30 |
| V100 x2 | 64 GB | $4.60 |
| V100 x4 | 128 GB | $9.20 |
| RTX A6000 | 48 GB | $1.89 |

More information on Paperspace pricing can be found here.

Scaleway Cloud GPU Pricing

Scaleway is another large web hosting and cloud computing company with headquarters in France and a presence in many international locations. Like OVHcloud, Scaleway offers a broad range of virtual machines and cloud storage options, including cloud GPU instances.

| Instance Name | GPU | Total GPU Memory | Price per hour |
| --- | --- | --- | --- |
| H100-1-80G | H100 | 80 GB | $2.73 |
| H100-2-80G | H100 x2 | 160 GB | $5.46 |
| L40S-1-48G | L40S | 48 GB | $1.40 |
| L40S-2-48G | L40S x2 | 96 GB | $2.80 |
| L40S-4-48G | L40S x4 | 192 GB | $5.60 |
| L40S-8-48G | L40S x8 | 384 GB | $11.20 |

Scaleway currently offers one of the lowest hourly prices for an H100 instance among large cloud GPU providers. It also has L40S and P100 instances on offer.

Scaleway also provides dedicated servers and bare metal solutions.

More information on the latest Scaleway GPU pricing can be found here.
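
To see at a glance which of these providers has the lowest entry point for the H100, here is a small Python sketch that collects the single-GPU H100 hourly prices quoted in the OVHcloud, Paperspace, and Scaleway tables above and sorts them.

```python
# Single-GPU H100 on-demand hourly prices quoted in the tables above.
# Always check each provider's pricing page for current rates.

h100_hourly = {
    "Scaleway (H100-1-80G)": 2.73,
    "OVHcloud (h100-380)": 2.99,
    "Paperspace (H100)": 5.95,
}

for provider, price in sorted(h100_hourly.items(), key=lambda item: item[1]):
    print(f"{provider}: ${price:.2f}/hour")
```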

DataCrunch Cloud GPU Pricing

If you’re open to working with new cloud GPU providers, you should consider DataCrunch. We represent a new generation of AI-focused accelerated computing platforms built by AI engineers for AI engineers.

We only work with premium NVIDIA cloud GPUs and emphasise speed, value for money and efficiency in both our hardware and software solutions. In some cases, we’ve been able to reduce the cost of AI inference by 70% simply by getting more efficiency out of NVIDIA’s hardware stack.

In a direct cost comparison, our fixed hourly GPU instance rates are competitive. On top of that, we are the first cloud GPU platform to offer dynamic pricing, a new way to reduce the cost of AI-focused computing on our less-used inventory.

This is how dynamic pricing works: when we have spare capacity on a GPU model, like the RTX 6000 Ada example below, we offer a discounted rate for hourly GPU instances. By picking dynamic pricing instead of a fixed price, you can save up to 40% on the cost of GPU instances, especially at off-peak times and on GPUs with less market competition.

[Chart: dynamic pricing example for an RTX 6000 Ada cloud GPU instance]
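
The arithmetic behind those savings is straightforward. The sketch below illustrates it with a placeholder fixed rate and workload length (the $1.00 hourly rate and the 100-hour job are illustrative assumptions; the 40% discount is the upper bound mentioned above, not a guaranteed rate).

```python
# A minimal illustration of the dynamic pricing math. The fixed rate and the
# workload length are placeholders; the 40% discount is the quoted upper
# bound and varies with spare capacity and demand.

fixed_hourly_rate = 1.00   # placeholder fixed on-demand rate, $/GPU-hour
dynamic_discount = 0.40    # up to 40% off at off-peak times
workload_hours = 100       # example workload length

fixed_cost = fixed_hourly_rate * workload_hours
dynamic_cost = fixed_hourly_rate * (1 - dynamic_discount) * workload_hours

print(f"Fixed pricing:   ${fixed_cost:.2f}")
print(f"Dynamic pricing: ${dynamic_cost:.2f} (saving ${fixed_cost - dynamic_cost:.2f})")
```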

At DataCrunch, we don’t just stand out for our flexible and fair pricing. Our data centers run on 100% renewable energy and offer enterprise-grade security, and we’re often praised for the expert-level service we provide to our customers.

The best way to see if DataCrunch is a good option for you is to spin up an instance on our intuitive and easy-to-use GPU Cloud Platform.

To get an even better deal, you can reach out to us to request a quote on bare metal clusters or serverless inference solutions.

All of the pricing and cost information provided here is accurate as of September 2024. If you find any errors, you can reach out to us on Reddit or X.