
Need storage too? We've got you covered.

December 19, 2023

Many of VALDI’s customers leverage our on-demand High-Performance Computing (HPC) GPUs to train and run inference on Large Language Models (LLMs). Our GPUs, including A100s, A6000s, L40s, 4090s, and more, are accessible through Virtual Machines (VMs). Provisioning time for a VM varies with configuration factors such as RAM and storage, ranging from a few seconds to a few minutes. VALDI now provides users with real-time updates on GPU provisioning status, improving the user experience and keeping the process transparent.
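For readers who script their deployments, a minimal polling loop like the sketch below can surface those status updates programmatically. The base URL, endpoint path, field names, and status values are assumptions for illustration only, not VALDI’s documented API; consult the official docs for the real schema.

```python
import time
import requests

VALDI_API_BASE = "https://api.valdi.ai"  # hypothetical base URL, for illustration only


def wait_for_vm_ready(vm_id: str, api_key: str,
                      poll_interval: float = 5.0, timeout: float = 600.0) -> str:
    """Poll a VM's provisioning status until it reaches a terminal state.

    The endpoint path, response fields, and status names here are
    illustrative assumptions, not VALDI's documented API.
    """
    headers = {"Authorization": f"Bearer {api_key}"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = requests.get(f"{VALDI_API_BASE}/v1/vms/{vm_id}",
                            headers=headers, timeout=10)
        resp.raise_for_status()
        status = resp.json().get("status", "unknown")
        print(f"VM {vm_id} status: {status}")
        if status in ("running", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"VM {vm_id} did not finish provisioning within {timeout} seconds")
```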

VALDI ensures cost efficiency for its users by billing only for the time a GPU is in the ‘Running’ or ‘Stopped’ state. Billing is prorated to the second, reflecting our commitment to fair and transparent pricing.
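As a rough illustration of per-second proration, the snippet below computes a charge from the seconds a VM spends in the ‘Running’ or ‘Stopped’ state; the hourly rate is a made-up placeholder, not an actual VALDI price.

```python
def prorated_charge(seconds_billable: int, hourly_rate_usd: float) -> float:
    """Charge for billable time, prorated to the second.

    Only time in the 'Running' or 'Stopped' state counts as billable;
    the hourly rate is a hypothetical placeholder.
    """
    return round(hourly_rate_usd * seconds_billable / 3600, 4)


# Example: 90 minutes running + 30 minutes stopped at a placeholder $1.20/hour
running_seconds = 90 * 60
stopped_seconds = 30 * 60
print(prorated_charge(running_seconds + stopped_seconds, hourly_rate_usd=1.20))  # 2.4
```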
