All systems are operational and no maintenance is planned.
During Q4 2020, we were testing 'hibernation', a feature that allowed users to store their server image and resume work later without keeping the server online. While the proof of concept mostly worked, it was far from ideal, mainly because we lacked an on-site storage solution: the images were stored on the NVMe drives inside the very servers the instances ran on. This made the process lengthy and error-prone.
When we experienced our first unintended data loss, we shut down the beta feature. We already had all the needed equipment on order for a move to an entirely different storage setup, but due to global shortages it took until early March 2021 to start testing the new data management setup.
The new setup will allow users to keep data on a high-performance NVMe cluster and/or an HDD cluster. By default, all data is kept in three replicas for redundancy and performance. This gives us the flexibility to deliver exciting new features such as:
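The three-replica scheme described above can be sketched in a few lines. This is purely illustrative (the class and method names are hypothetical, not our actual storage code): every write lands on all three replicas, so any single surviving replica can still serve reads.

```python
# Minimal sketch of 3-way replication; names are hypothetical,
# not the provider's actual storage implementation.
class ReplicatedStore:
    REPLICAS = 3

    def __init__(self):
        # One dict per replica stands in for a physical node/drive.
        self.nodes = [{} for _ in range(self.REPLICAS)]

    def put(self, key, value):
        # A write only succeeds once every replica holds the data.
        for node in self.nodes:
            node[key] = value

    def get(self, key):
        # Any surviving replica can serve the read, so up to
        # REPLICAS - 1 nodes can fail without losing data.
        for node in self.nodes:
            if key in node:
                return node[key]
        raise KeyError(key)
```

With two of the three replicas wiped, `get` still returns the stored image, which is the redundancy property the default setup relies on.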
To enable networked storage, we connected our entire infrastructure to a 50 Gbit networking backbone. While keeping user instance data on a local drive simplified our initial infrastructure, in early testing the network storage cluster proves faster than the onboard NVMe drive on virtually every metric. It even outperforms every cloud provider we compared against in sequential bandwidth, and delivers 30k-40k IOPS!
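For readers who want to sanity-check numbers like these on their own instance, here is a rough sketch of how sequential bandwidth and random-read IOPS can be estimated with plain file I/O. A real benchmark (e.g. fio) bypasses the page cache and runs far longer, so treat this only as a quick approximation:

```python
import os
import random
import tempfile
import time


def measure(path, total_mb=64, block_kb=4):
    """Rough sequential-write MB/s and random-read IOPS for one file.

    Illustrative only: results are heavily affected by the OS page
    cache; dedicated tools like fio give far more reliable figures.
    """
    block = os.urandom(block_kb * 1024)
    n_blocks = total_mb * 1024 // block_kb

    # Sequential write, fsync'd so the data actually reaches the device.
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    seq_mb_s = total_mb / (time.perf_counter() - t0)

    # Random small reads approximate an IOPS figure.
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(n_blocks):
            f.seek(random.randrange(n_blocks) * block_kb * 1024)
            f.read(block_kb * 1024)
    iops = n_blocks / (time.perf_counter() - t0)

    return seq_mb_s, iops
```

Running `measure` against a file on the networked volume versus one on a local disk gives a first-order comparison of the two backends.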
We are finalizing the move to networked storage and expect to launch it by the end of March.
We have a fully functioning API in public beta. Try it out! You can create credentials and find a link to the docs on the account info tab.
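As a starting point, authenticating with the credentials from the account info tab might look roughly like the sketch below. The base URL, endpoint path, and auth scheme here are assumptions for illustration only; the real values are in the beta docs linked from your account:

```python
import base64
import json
import urllib.request

# Hypothetical base URL; the real one is in the docs linked on the
# account info tab.
API_BASE = "https://api.example.com/v1"


def auth_header(client_id, client_secret):
    # HTTP Basic auth built from dashboard credentials. Illustrative:
    # the actual API may use OAuth2 bearer tokens instead.
    token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {"Authorization": f"Basic {token}"}


def list_instances(client_id, client_secret):
    # Hypothetical endpoint name; consult the beta docs for the real one.
    req = urllib.request.Request(
        f"{API_BASE}/instances",
        headers=auth_header(client_id, client_secret),
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Once the header helper works, every other endpoint call is the same pattern with a different path.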
We are no longer expanding our V100 fleet. While the V100 still holds up as an excellent GPU even in 2021, we are moving to latest-generation gear. We are mainly expanding into two GPU instance types: the RTX A6000 (GA102, 48GB VRAM, 300W) and the A100 (80GB VRAM, SXM4/NVLink, 400W).
Our A100 80GB cloud instances are the fastest GPU cloud instances you can find!
Both GPUs will be available in 1, 2, 4, or 8 GPU instances; the servers are currently being tested.
Our main focus is you! Tell us what makes your life easier and how we can save you precious time!