B200 virtual dedicated servers are powered by:
Up to 8 on-demand NVIDIA® HGX B200 180GB GPUs built on Blackwell Tensor Core technology.
This is the latest NVIDIA hardware available on the market, purpose-built to accelerate inference for LLMs and MoE models.
We deploy the HGX platform with NVLink interconnect, offering up to 1.8TB/s P2P NVLink bandwidth.
Instance name | GPU model | GPUs | vCPUs | RAM [GB] | VRAM [GB] | P2P | On-demand price | 6-month price | 2-year price |
---|---|---|---|---|---|---|---|---|---|
8B200.248V | B200 SXM6 180GB | 8 | 248 | 2000 | 1440 | 1.8 TB/s | $35.92/h | $34.48/h | $26.94/h |
4B200.124V | B200 SXM6 180GB | 4 | 124 | 1000 | 720 | 1.8 TB/s | $17.96/h | $17.24/h | $13.47/h |
2B200.62V | B200 SXM6 180GB | 2 | 62 | 500 | 360 | 1.8 TB/s | $8.98/h | $8.62/h | $6.74/h |
1B200.31V | B200 SXM6 180GB | 1 | 31 | 250 | 180 | / | $4.49/h | $4.31/h | $3.37/h |
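The commitment discounts in the table above follow a consistent pattern. As a quick sketch (plain arithmetic, not an official billing API), the effective discount can be computed from any row:

```python
def commitment_discount(on_demand: float, committed: float) -> float:
    """Effective discount of a committed hourly rate vs the on-demand rate, in percent."""
    return round((1 - committed / on_demand) * 100, 1)

# Figures from the 8B200.248V row above:
print(commitment_discount(35.92, 34.48))  # 6-month rate → 4.0
print(commitment_discount(35.92, 26.94))  # 2-year rate → 25.0
```

The same ratios hold across the H200 and H100 tables: roughly 4% off for a 6-month commitment and 25% off for a 2-year commitment.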
H200 virtual dedicated servers are powered by:
Up to 8 NVIDIA H200 GPUs (141GB, 16896 CUDA cores, 528 Tensor Cores each) in SXM5 form-factor with 4.8TB/s memory bandwidth and 900GB/s P2P bandwidth.
Powered by 4th gen AMD EPYC Genoa processors with 384 threads and a 3.7GHz boost clock.
The 8H200 VM maintains the same vCPU count as the 4H200 but delivers higher performance by using only physical cores instead of hyper-threads.
Instance name | GPU model | GPUs | vCPUs | RAM [GB] | VRAM [GB] | P2P | On-demand price | 6-month price | 2-year price |
---|---|---|---|---|---|---|---|---|---|
8H200.141S.176V | H200 SXM5 141GB | 8 | 176 | 1450 | 1128 | 900 GB/s | $26.40/h | $25.34/h | $19.80/h |
4H200.141S.176V | H200 SXM5 141GB | 4 | 176 | 740 | 564 | 900 GB/s | $13.20/h | $12.67/h | $9.90/h |
2H200.141S.88V | H200 SXM5 141GB | 2 | 88 | 370 | 282 | 900 GB/s | $6.60/h | $6.34/h | $4.95/h |
1H200.141S.44V | H200 SXM5 141GB | 1 | 44 | 185 | 141 | / | $3.30/h | $3.17/h | $2.47/h |
H100 virtual dedicated servers are powered by:
Up to 8 NVIDIA H100 80GB GPUs (16896 CUDA cores, 528 Tensor Cores each) in SXM5 NVLINK module with 3TB/s memory bandwidth and 900GB/s P2P bandwidth.
Powered by 4th gen AMD EPYC Genoa with 384 threads and a 3.7GHz boost clock.
Model 8H100.80S.176V indicates 8 H100 SXM5 GPUs with 176 CPU threads, virtualized.
The 8H100 VM maintains the same vCPU count as the 4H100 but delivers higher performance by using only physical cores instead of hyper-threads.
Instance name | GPU model | GPUs | vCPUs | RAM [GB] | VRAM [GB] | P2P | On-demand price | 6-month price | 2-year price |
---|---|---|---|---|---|---|---|---|---|
8H100.80S.176V | H100 SXM5 80GB | 8 | 176 | 1480 | 640 | 900 GB/s | $17.52/h | $16.82/h | $13.14/h |
4H100.80S.176V | H100 SXM5 80GB | 4 | 176 | 740 | 320 | 900 GB/s | $8.76/h | $8.41/h | $6.57/h |
2H100.80S.80V | H100 SXM5 80GB | 2 | 80 | 370 | 160 | 900 GB/s | $4.38/h | $4.20/h | $3.29/h |
1H100.80S.32V | H100 SXM5 80GB | 1 | 32 | 185 | 80 | / | $2.19/h | $2.10/h | $1.64/h |
1H100.80S.30V | H100 SXM5 80GB | 1 | 30 | 120 | 80 | / | $2.19/h | $2.10/h | $1.64/h |
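The instance-name convention used throughout these tables can be decoded mechanically. As a minimal sketch (a hypothetical helper, not part of any official SDK), here is one way to split a name like 8H100.80S.176V into its components:

```python
import re

def parse_instance_name(name: str) -> dict:
    """Split an instance name like '8H100.80S.176V' into its parts.

    Pattern, per the naming convention described above:
      <gpu_count><gpu_model>[.<vram>S].<threads>V
    The optional '<vram>S' part encodes per-GPU VRAM in GB.
    """
    m = re.fullmatch(r"(\d+)([A-Z0-9]+?)(?:\.(\d+)S)?\.(\d+)V", name)
    if m is None:
        raise ValueError(f"unrecognized instance name: {name}")
    count, model, vram, threads = m.groups()
    return {
        "gpu_count": int(count),
        "gpu_model": model,
        "vram_gb": int(vram) if vram else None,
        "cpu_threads": int(threads),
    }
```

Names without the VRAM segment (e.g. 8A100.176V) parse the same way, with `vram_gb` left as `None`.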
A100 virtual dedicated servers are powered by:
Up to 8 NVIDIA® A100 80GB GPUs, each containing 6912 CUDA cores and 432 Tensor Cores.
We use only the SXM4 NVLink module, which offers memory bandwidth of over 2TB/s and up to 600GB/s P2P bandwidth.
Second generation AMD EPYC Rome, up to 192 threads with a boost clock of 3.3GHz.
The name 8A100.176V is composed as follows: 8x A100, 176 CPU core threads & virtualized.
Instance name | GPU model | GPUs | vCPUs | RAM [GB] | VRAM [GB] | P2P | On-demand price | 6-month price | 2-year price |
---|---|---|---|---|---|---|---|---|---|
8A100.176V | A100 SXM4 80GB | 8 | 176 | 960 | 640 | 600 GB/s | $14.00/h | $13.44/h | $10.50/h |
4A100.88V | A100 SXM4 80GB | 4 | 88 | 480 | 320 | 300 GB/s | $7.00/h | $6.72/h | $5.25/h |
2A100.44V | A100 SXM4 80GB | 2 | 44 | 240 | 160 | 100 GB/s | $3.50/h | $3.36/h | $2.63/h |
1A100.22V | A100 SXM4 80GB | 1 | 22 | 120 | 80 | / | $1.75/h | $1.68/h | $1.31/h |
8A100.40S.176V | A100 SXM4 40GB | 8 | 176 | 960 | 320 | 600 GB/s | $10.32/h | $9.91/h | $7.74/h |
1A100.40S.22V | A100 SXM4 40GB | 1 | 22 | 120 | 40 | / | $1.29/h | $1.24/h | $0.97/h |
L40S virtual dedicated servers are powered by:
Up to 8 NVIDIA® L40S 48GB GPUs, each containing 18176 CUDA cores and 568 fourth-generation Tensor Cores.
Featuring 864GB/s memory bandwidth, 1466 TFLOPS Tensor performance, and 212 TFLOPS RT Core performance, the L40S is well suited to large ML models, single-GPU training, and other 32-, 16-, and 8-bit operations.
The name 8L40S.160V is composed as follows: 8x L40S, 160 CPU core threads & virtualized.
Instance name | GPU model | GPUs | vCPUs | RAM [GB] | VRAM [GB] | P2P | On-demand price | 6-month price | 2-year price |
---|---|---|---|---|---|---|---|---|---|
8L40S.160V | L40S 48GB | 8 | 160 | 480 | 384 | 50GB/s | $8.40/h | $8.06/h | $6.30/h |
4L40S.80V | L40S 48GB | 4 | 80 | 240 | 192 | 50GB/s | $4.20/h | $4.03/h | $3.15/h |
2L40S.40V | L40S 48GB | 2 | 40 | 120 | 96 | 50GB/s | $2.10/h | $2.02/h | $1.58/h |
1L40S.20V | L40S 48GB | 1 | 20 | 60 | 48 | / | $1.05/h | $1.01/h | $0.79/h |
RTX6000 ADA virtual dedicated servers are powered by:
Up to 8 NVIDIA® RTX6000 ADA [2022] GPUs, each containing 18176 CUDA cores, 568 fourth-generation Tensor Cores and 142 RT cores.
Features 48GB of GDDR6 memory for working with the largest 3D models, rendered images, simulations and AI datasets.
The name 8RTX6000ADA.80V is composed as follows: 8x RTX6000ADA, 80 CPU core threads & virtualized.
Instance name | GPU model | GPUs | vCPUs | RAM [GB] | VRAM [GB] | P2P | On-demand price | 6-month price | 2-year price |
---|---|---|---|---|---|---|---|---|---|
8RTX6000ADA.80V | RTX6000 Ada 48GB | 8 | 80 | 480 | 384 | 50GB/s | $8.40/h | $8.06/h | $6.30/h |
4RTX6000ADA.40V | RTX6000 Ada 48GB | 4 | 40 | 240 | 192 | 50GB/s | $4.20/h | $4.03/h | $3.15/h |
2RTX6000ADA.20V | RTX6000 Ada 48GB | 2 | 20 | 120 | 96 | 50GB/s | $2.10/h | $2.02/h | $1.58/h |
1RTX6000ADA.10V | RTX6000 Ada 48GB | 1 | 10 | 60 | 48 | / | $1.05/h | $1.01/h | $0.79/h |
RTX A6000 virtual dedicated servers are powered by:
Up to 8 NVIDIA RTX A6000 [2021] GPUs (10752 CUDA cores, 336 Tensor Cores, 84 RT cores each), delivering faster tensor processing than the V100 despite fewer Tensor Cores, thanks to architectural improvements.
Powered by 2nd gen AMD EPYC Rome with 96 threads at 3.35GHz boost.
Features PCIe Gen4 for enhanced GPU communication.
Model 8A6000.80V indicates 8 RTX A6000 GPUs with 80 CPU threads, virtualized.
Instance name | GPU model | GPUs | vCPUs | RAM [GB] | VRAM [GB] | P2P | On-demand price | 6-month price | 2-year price |
---|---|---|---|---|---|---|---|---|---|
8A6000.80V | RTX A6000 48GB | 8 | 80 | 480 | 384 | 50GB/s | $7.12/h | $6.84/h | $5.34/h |
4A6000.40V | RTX A6000 48GB | 4 | 40 | 240 | 192 | 50GB/s | $3.56/h | $3.42/h | $2.67/h |
2A6000.20V | RTX A6000 48GB | 2 | 20 | 120 | 96 | 50GB/s | $1.78/h | $1.71/h | $1.33/h |
1A6000.10V | RTX A6000 48GB | 1 | 10 | 60 | 48 | / | $0.89/h | $0.85/h | $0.67/h |
V100 virtual dedicated servers are powered by:
Up to 8 NVIDIA® Tesla V100 GPUs, each containing 5120 CUDA cores and 640 Tensor Cores.
Second generation Xeon Scalable 4214R CPUs [2020], up to 48 threads with a boost clock of 3.5GHz.
NVLink for high bandwidth P2P communication.
The name 4V100.20V is composed as follows: 4x V100, 20 CPU core threads & virtualized.
Instance name | GPU model | GPUs | vCPUs | RAM [GB] | VRAM [GB] | P2P | On-demand price | 6-month price | 2-year price |
---|---|---|---|---|---|---|---|---|---|
8V100.48V | Tesla V100 16GB | 8 | 48 | 180 | 128 | NVLink up to 50GB/s | $1.92/h | $1.84/h | $1.44/h |
4V100.20V | Tesla V100 16GB | 4 | 20 | 90 | 64 | NVLink up to 50GB/s | $0.96/h | $0.92/h | $0.72/h |
2V100.10V | Tesla V100 16GB | 2 | 10 | 45 | 32 | NVLink up to 50GB/s | $0.48/h | $0.46/h | $0.36/h |
1V100.6V | Tesla V100 16GB | 1 | 6 | 23 | 16 | / | $0.24/h | $0.23/h | $0.18/h |
CPU virtual dedicated servers are powered by:
Second, third, or fourth generation AMD EPYC (Rome, Milan, or Genoa).
All hardware is dedicated to your server for the best performance.
The name CPU.32V.128G indicates the server runs on 32 virtualized core threads with 128GB of RAM.
Instance name | CPU model | vCPUs | RAM [GB] | P2P | On-demand price | 6-month price | 2-year price |
---|---|---|---|---|---|---|---|
CPU.360V.1440G | AMD EPYC Genoa | 360 | 1440 | / | $3.60/h | $3.46/h | $2.70/h |
CPU.180V.720G | AMD EPYC Genoa | 180 | 720 | / | $1.80/h | $1.73/h | $1.35/h |
CPU.120V.480G | AMD EPYC Rome/Milan | 120 | 480 | / | $1.20/h | $1.15/h | $0.90/h |
CPU.96V.384G | AMD EPYC Rome/Milan | 96 | 384 | / | $0.96/h | $0.92/h | $0.72/h |
CPU.64V.256G | AMD EPYC Rome/Milan | 64 | 256 | / | $0.64/h | $0.61/h | $0.48/h |
CPU.32V.128G | AMD EPYC Rome/Milan | 32 | 128 | / | $0.32/h | $0.31/h | $0.24/h |
CPU.16V.64G | AMD EPYC Rome/Milan | 16 | 64 | / | $0.16/h | $0.15/h | $0.12/h |
CPU.8V.32G | AMD EPYC Rome/Milan | 8 | 32 | / | $0.08/h | $0.08/h | $0.06/h |
CPU.4V.16G | AMD EPYC Rome/Milan | 4 | 16 | / | $0.04/h | $0.04/h | $0.03/h |
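For budgeting, the hourly rates above translate to monthly figures by multiplying by the hours in an average month (8760 hours / 12 ≈ 730 — an assumption of continuous usage, not a billing rule):

```python
HOURS_PER_MONTH = 730  # average month: 8760 hours per year / 12 months

def monthly_cost(hourly_rate: float, hours: float = HOURS_PER_MONTH) -> float:
    """Approximate monthly cost for an instance billed hourly."""
    return round(hourly_rate * hours, 2)

# CPU.32V.128G at the 2-year rate from the table above:
print(monthly_cost(0.24))  # → 175.2
```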
Storage
Our instances connect to storage systems providing both block volumes and shared file-systems with seamless data availability and protection against hardware failures.
We offer high-performance NVMe storage with superior IOPS and bandwidth for intensive mixed workloads, plus HDD storage optimized for larger datasets and lighter workloads.
Default volume size limits can be increased upon request.
Type | Continuous Bandwidth [MB/s] | Burst Bandwidth [MB/s] | IOPS | Internal Network Speed [GBit/s] | Price [$/GB/Month] |
---|---|---|---|---|---|
NVMe | 2000 | 2500 | 100k | 50 | 0.2 |
HDD | 250 | 2000 | 300 | 50 | 0.05 |
NVMe_Shared | 2000 | 2500 | 100k | 50 | 0.2 |
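Storage is billed per GB per month, so the monthly cost of a volume is a straightforward product (a sketch using the rates above; actual invoices may prorate differently):

```python
def monthly_storage_cost(size_gb: float, price_per_gb_month: float) -> float:
    """Monthly cost of a volume billed per GB per month."""
    return round(size_gb * price_per_gb_month, 2)

# Rates from the table above ($/GB/month):
print(monthly_storage_cost(1000, 0.20))  # 1 TB NVMe → 200.0
print(monthly_storage_cost(1000, 0.05))  # 1 TB HDD → 50.0
```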