October was a landmark month for Verda (formerly DataCrunch) - one defined by innovation, expansion, and momentum. We deployed one of the first NVIDIA GB300 NVL72 units in Europe, alongside major platform feature releases and our name change. Our team continues to grow faster than ever, with more job openings available for those who are passionate about shaping the future of AI infrastructure.
Name Change
We’re excited to announce that DataCrunch is now Verda. The new name reflects our commitment to sustainable innovation, technological integrity, and building a European hyperscaler that prioritizes data protection and renewable energy. Our services and vision remain the same as we continue expanding our cloud platform and GPU infrastructure to power the next generation of AI.
New Deployments
We’ve expanded our compute arsenal! Meet the B300, GB300 NVL72, and RTX PRO 6000, all built on NVIDIA’s Blackwell architecture. Designed for the next generation of AI workloads, these deployments combine raw performance, efficiency, and flexibility to handle everything from large-scale training to real-time inference. This launch also marks an important step in expanding our long-term collaboration with NVIDIA.
B300
We’re excited to announce that the NVIDIA B300 is now moving into production at Verda. Built on the Blackwell Ultra architecture, the B300 delivers exceptional performance, massive memory bandwidth, and energy efficiency for large-scale AI workloads. This marks another step forward in expanding our high-performance infrastructure for next-generation training and inference.
GB300 NVL72
We’re proud to be the first in Finland to bring the GB300 NVL72 into production. Powered by NVIDIA's Blackwell architecture, the GB300 ushers in a new era of efficiency and scalability for large-scale AI training and inference. This milestone marks a significant leap forward for Verda as we continue to push the limits of high-performance AI infrastructure.
RTX PRO 6000
The RTX PRO 6000 brings exceptional performance for demanding AI, simulation, and media-generation tasks. Each GPU features 24,064 CUDA cores, advanced Tensor and RT cores, and 96 GB of ultra-fast GDDR7 memory, making it ideal for large-scale inference and high-fidelity model development.
With the ability to scale up to 8x instances, it delivers the perfect balance of compute power, efficiency, and scalability: a wise choice for professionals who need top-tier performance without compromise.
New Features
This month we extended the capabilities of our one-stop cloud offering, enabling you to run a wider variety of AI workloads on a single platform. The highlight is our Instant B200 Clusters, which enable rapid provisioning for high-performance workloads with zero setup delays. We’ve also launched Batch Jobs, allowing users to automate large-scale tasks, optimize resource use, and simplify complex workflows.
Instant B200 Clusters

We’re excited to introduce Instant B200 Clusters, built to deliver massive AI training power in just minutes. Each node packs 8× B200 SXM6 GPUs, 1.44 TB of GPU memory, 240 CPU cores, and up to 3,200 Gbit/s InfiniBand for ultra-fast communication.
Spin up clusters instantly, scale effortlessly, and pay only for what you use with pay-as-you-go pricing starting at $3.59 per GPU per hour. Perfect for large-scale training, inference, and world simulation workloads.
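As a quick illustration of the pay-as-you-go model above, here is a minimal Python sketch for estimating the cost of a training run. The node shape (8x B200 GPUs per node) and the $3.59 per GPU per hour starting price come from this announcement; the helper function itself is just an illustrative assumption, not part of our platform tooling.

```python
# Minimal sketch: estimate the pay-as-you-go cost of an Instant B200 Cluster run.
# Node shape and pricing come from the announcement above (8x B200 per node,
# starting at $3.59 per GPU per hour); everything else is illustrative.

GPUS_PER_NODE = 8            # 8x B200 SXM6 GPUs per node
PRICE_PER_GPU_HOUR = 3.59    # USD, pay-as-you-go starting price

def estimate_cost(nodes: int, hours: float) -> float:
    """Return the estimated USD cost for `nodes` nodes running for `hours` hours."""
    return nodes * GPUS_PER_NODE * PRICE_PER_GPU_HOUR * hours

if __name__ == "__main__":
    # Example: a 4-node (32-GPU) training run for 24 hours.
    print(f"Estimated cost: ${estimate_cost(nodes=4, hours=24):,.2f}")
```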
Batch Jobs
We’re introducing Batch Jobs, a new way to execute long-running workloads effortlessly on Verda. Perfect for fine-tuning, preprocessing, video or audio processing, and large-scale inference, Batch Jobs give you the flexibility of containers with zero manual management.
Each request runs in its own isolated job, automatically scaling replicas as demand grows and scaling back to zero when the work is done, so you only use what you need. You can run jobs using your existing container images, environment variables, volumes, and CPU/GPU configurations, just like regular containers.
Note: Batch Jobs are currently available on request. Contact us via chat support for access.
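To give a sense of the configuration described above, here is a minimal, hypothetical sketch of a Batch Job definition submitted over HTTP. The endpoint, field names, and token variable are illustrative assumptions, not the actual Verda API; follow the documentation you receive once access is granted.

```python
# Hypothetical sketch of a Batch Job definition: container image, env vars,
# volumes, GPU config, and scale-to-zero, as described above.
# The endpoint and field names below are placeholders, not the real Verda API.
import os
import requests

job_spec = {
    "name": "finetune-llm",
    "image": "registry.example.com/my-team/finetune:latest",   # existing container image
    "env": {"HF_TOKEN": os.environ.get("HF_TOKEN", "")},        # environment variables
    "volumes": [{"name": "datasets", "mount_path": "/data"}],   # attached volume
    "resources": {"gpu_type": "B200", "gpu_count": 1, "cpu_cores": 16},
    "scaling": {"min_replicas": 0, "max_replicas": 4},          # scale back to zero when idle
}

# Placeholder submission call; replace with the real API or dashboard workflow.
response = requests.post(
    "https://api.example.com/v1/batch-jobs",
    json=job_spec,
    headers={"Authorization": f"Bearer {os.environ.get('VERDA_API_TOKEN', '')}"},
    timeout=30,
)
response.raise_for_status()
print("Submitted job:", response.json())
```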
Events
From global stages to local meetups, Verda showed up big this month: new tech, new ideas, and endless energy. Here’s where we made waves in October.
AI Engine Hackathon in Warsaw 🇵🇱
Verda was proud to be a tech partner at the AI Engine Hackathon in Warsaw, where we provided GPU access to over 100 top engineers and AI talents driving the next wave of innovation. This collaboration marks one of the first steps in strengthening our commitment to supporting Europe’s growing AI ecosystem.
PyTorch Conference Afters in San Francisco 🇺🇸
We recently hosted a PyTorch community event in San Francisco that gathered more than 80 AI engineers and researchers to explore practical applications of the Blackwell architecture. Our ML Engineer Paul Chang talked about training the winning submissions to the 1X World Challenge on Instant B200 Clusters, and Erik Schultheis covered how to train quantized LLMs efficiently on consumer GPUs.

EU's Last Hope for Sovereign AI 🇧🇪
We hosted AI Talks with Hugging Face, ML6, and Conveo in Ghent during Tectonic, bringing together founders, engineers, and policymakers to discuss Europe’s role in the global AI landscape. Our CEO, Ruben Bryon, had a fireside chat covering topics like AI sovereignty, open vs. closed ecosystems, and the future of regulation in Europe.
Aalto Talent Expo 🇫🇮
We took part in the Aalto Talent Expo on November 6 at Dipoli. It was a great opportunity to connect with the next generation of engineers and AI talent, share insights about Verda’s mission, and introduce students to career opportunities in high-performance computing and cloud infrastructure. We were excited to see so much curiosity and passion for the future of AI among Finland’s brightest minds.
Upcoming Events
In the coming month, we have a lot more events in store for you. Let's gear up for more AI infrastructure deep dives, hands-on sessions, and community meetups across major tech hubs.
Check out our upcoming events in November and December. Save your seat and bring along your friends and teammates.
Events
- What it takes to be AI Native - November 11th in Helsinki, Finland 🇫🇮
- Finland Agentics x DataCrunch meetup #3 - Slush Edition - November 18th in Helsinki, Finland 🇫🇮
- MariaLAN Party - November 28-30th in Helsinki, Finland 🇫🇮
- EurIPS - December 3-5th in Copenhagen, Denmark 🇩🇰 (more information coming soon)
Hackathons
- Agentic AI Hackathon - November 7th in Espoo, Finland 🇫🇮 (sponsor)
- Junction - November 14-16th in Espoo, Finland 🇫🇮 (sponsor)
General News
SemiAnalysis’s ClusterMAX™ v2
Verda has once again earned a bronze ranking in SemiAnalysis’s latest GPU Cloud ClusterMAX™ Rating System, recognizing our Instant Clusters for performance, scalability, and usability. The updated evaluation included over 200 cloud providers, highlighting Verda’s strong showing among global competitors. Our B200-powered Instant Clusters impressed reviewers with rapid provisioning, pay-as-you-go flexibility, and a robust Slurm implementation praised for its completeness. We’re continuing to improve Instant Clusters as we work toward delivering the industry’s leading self-service solution for large-scale AI training.
AI GigaFactories
Over the past few months, we’ve been in active discussions with EU member states, industry leaders, consortia, and supporting organizations about our AI GigaFactories proposal to the European Commission. These conversations highlight the growing momentum behind building a stronger, more independent AI infrastructure ecosystem in Europe.
If you are interested in joining our consortium or discussing adjacent topics, please reach out to Michael Champion, VP of Security, Compliance & Governmental Relations: [email protected]
1X World Model Challenge
We’re proud to share that Team Revontuli, featuring Paul Chang and Riccardo Mereu, ML engineers at Verda, won the latest 1X World Model Challenge - ranking first in both the compression and sampling tasks.
The challenge focused on predicting the future actions of the 1X humanoid robot NEO using visual and state data, pushing the limits of model efficiency and accuracy. The team powered their training workloads with Verda’s new B200 Instant Cluster, which proved essential for rapid iteration and high-performance compute under tight deadlines.
Press Releases
Verda was featured across multiple news outlets this month, highlighting our approach to efficient and sovereign AI. The coverage emphasized our role in advancing high-performance AI infrastructure and our vision for global-scale compute accessibility. These mentions reflect growing recognition of Verda as a key player shaping the future of AI innovation in Europe and beyond.
- De Tijd 🇧🇪🇳🇱
- L'Echo 🇧🇪🇫🇷
- Sifted 🇪🇺
- Talouselämä 🇫🇮
- Ilta-Sanomat 🇫🇮
Job Openings
We’re expanding and looking for talented people to join our team. If you’re excited about AI infrastructure, large-scale systems, and building the future of cloud computing, now is the perfect time to get involved. Explore our open roles and help shape what comes next at Verda:
- GPU Container Expert
- Senior/Principal Site Reliability Engineer
- Junior Hardware Engineer
- Senior Application Security Analyst
- Open Application/Community Talent