NVIDIA® B300 SXM6
GPU Instances and Clusters
Early and instant access to Blackwell Ultra GPUs, starting at $1.24/h*
B300 SXM6 with DataCrunch
Where flexibility meets performance and simplicity
GPU Instances
Instant Clusters
Bare-metal Clusters
B300 SXM6 Pricing
The fastest access to B300 SXM6 GPUs with reliable service and expert support
- On-demand: $4.95/h
- Spot instance: $1.24/h
B300 SXM6 Specs
Designed for the most demanding AI and HPC workloads
- +55.6% faster dense FP4 performance (14 vs 9 PFLOPS)
- +55.6% more GPU memory for larger models and batches
B300 virtual dedicated servers are powered by:
Up to 8 NVIDIA B300 GPUs (262GB, 18944 CUDA cores, 592 Tensor Cores each) in SXM6 form factor, with 8TB/s memory bandwidth and 1800GB/s P2P bandwidth.
Powered by 5th-gen AMD EPYC Turin processors with boost clocks of up to 5GHz.
Blackwell Ultra is NVIDIA's latest hardware available on the market, built to further accelerate inference for LLMs and MoE models compared to its predecessor.
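For reference, the headline figures above follow from simple arithmetic on the numbers published on this page. The sketch below is only a quick check, assuming the 14 vs 9 PFLOPS dense FP4 comparison and the 262GB-per-GPU figure quoted above:

```python
# Quick arithmetic check of the figures quoted on this page
# (values taken from the spec list above; nothing here is measured).

b300_fp4_pflops = 14      # dense FP4 per GPU, as listed above
baseline_fp4_pflops = 9   # predecessor figure quoted above

uplift_pct = (b300_fp4_pflops / baseline_fp4_pflops - 1) * 100
print(f"Dense FP4 uplift: +{uplift_pct:.1f}%")   # +55.6%

gpus_per_node = 8
vram_per_gpu_gb = 262
print(f"VRAM per 8-GPU node: {gpus_per_node * vram_per_gpu_gb} GB")  # ~2.1 TB
```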
| Instance type | GPU model | GPUs | CPU cores | RAM (GB) | VRAM (GB) | P2P bandwidth | On-demand price | Dynamic price | Spot price |
|---|---|---|---|---|---|---|---|---|---|
| 1B300.30V | B300 SXM6 262GB | 1 | 30 | 275 | 262 | 1.8 TB/s | $4.95/h | N/A | $1.24/h |
| 2B300.60V | B300 SXM6 262GB | 2 | 60 | 550 | 525 | 1.8 TB/s | $9.90/h | N/A | $2.48/h |
| 4B300.120V | B300 SXM6 262GB | 4 | 120 | 1100 | 1050 | 1.8 TB/s | $19.80/h | N/A | $4.95/h |
| 8B300.240V | B300 SXM6 262GB | 8 | 240 | 2200 | 2100 | 1.8 TB/s | $39.60/h | N/A | $9.90/h |
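As a rough planning aid, the sketch below estimates the cost of a run on these instance types using the hourly prices from the table above. The price dictionary mirrors the table; the helper function and its name are illustrative only, not part of any DataCrunch API:

```python
# Hypothetical cost estimator based on the hourly prices listed in the table above.
PRICES_PER_HOUR = {
    # instance type: (on-demand $/h, spot $/h)
    "1B300.30V":  (4.95, 1.24),
    "2B300.60V":  (9.90, 2.48),
    "4B300.120V": (19.80, 4.95),
    "8B300.240V": (39.60, 9.90),
}

def estimate_cost(instance_type: str, hours: float, spot: bool = False) -> float:
    """Return the estimated cost in USD of running one instance for `hours`."""
    on_demand, spot_price = PRICES_PER_HOUR[instance_type]
    return (spot_price if spot else on_demand) * hours

# Example: a 72-hour fine-tuning run on an 8-GPU node.
print(f"On-demand: ${estimate_cost('8B300.240V', 72):.2f}")            # $2851.20
print(f"Spot:      ${estimate_cost('8B300.240V', 72, spot=True):.2f}") # $712.80
```

Spot capacity is typically preemptible, so the lower rate suits fault-tolerant or regularly checkpointed workloads best.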
DataCrunch instances
Where speed meets simplicity in GPU solutions
Customer feedback
What they say about us...
- "Having direct contact between our engineering teams enables us to move incredibly fast. Being able to deploy any model at scale is exactly what we need in this fast-moving industry. DataCrunch enables us to deploy custom models quickly and effortlessly."
  Iván de Prado, Head of AI at Freepik
- "From deployment to training, our entire language model journey was powered by DataCrunch's clusters. Their high-performance servers and storage solutions allowed us to maintain smooth operations and maximum uptime, and to focus on achieving exceptional results without worrying about hardware issues."
  José Pombal, AI Research Scientist at Unbabel
- "DataCrunch powers our entire monitoring and security infrastructure with exceptional reliability. We also enforce firewall restrictions to protect against unauthorized access. Thanks to DataCrunch, our training clusters run smoothly and securely."
  Nicola Sosio, ML Engineer at Prem AI
- "We needed production-grade reliability with pricing that made sense for a startup. DataCrunch hit that sweet spot."
  Lars Vagnes, Founder & CEO