Get the GPUs you need, when you need them.
Need compute? We find the lowest-cost, 100% renewable-energy-powered GPUs available that meet your specific requirements.
The easiest way to find GPU rentals.
Zero-emissions data centers, at a fraction of the cost
We work with data centers all over the world that are 100% renewable-energy powered and have underutilized GPU capacity. Instead of letting it go to waste, we send it to you at a fraction of the cost.
AI training, accelerated.
Don’t let costs hold back your training. We work with both independent and major data centers to make sure you get the broadest GPU availability at the lowest cost.
Your exact requirements, met.
GPUs are difficult to come by; we notify you as soon as the right chip becomes available for hourly rental. Work with us to train your LLMs and easily scale your GPU capacity up and down.
Available GPU aggregator
Soon you’ll be able to use our powerful GPU aggregator and search tools to instantly find the right GPUs for you.
"I've been using Build AI to notify me of available GPUs for the last few months and it's saved us tens of thousands and greatly reduced our computing emissions footprint. No easy feat considering we have highly specific requirements."
FAQs
Build AI is a cloud computing and aggregation service focused on lowering the price of compute-intensive workloads. Our software allows anyone to easily become a host by renting out their (GPU) hardware. Our web search interface allows users to quickly find the best deals for compute according to their specific requirements.
Hosts list their machines, configure prices, and set any default jobs. Clients then find suitable machines using our flexible search interface, rent their desired machines, and finally run commands or start SSH sessions with a few clicks.
Build AI provides a simple interface to rent powerful machines at the best possible prices, reducing GPU cloud computing costs by ~3x to 5x. We are helping the millions of underutilized GPUs around the world enter the cloud computing market for the first time.
By filling out the survey, you will be put at the top of the waitlist when new GPUs become available. If you just enter your email (not the survey), we’ll notify you when chips come online, but will give higher priority to the folks who fill out the survey with specifications regarding the chips they’re interested in renting.
We will contact you via the email address you enter above. We’ll share details on the type of chip, storage, networking, security, and price so you can decide whether or not to rent.
We’re working hard to procure GPUs for our initial customers. We believe that we’ll be able to start provisioning GPUs for customers to rent in a few weeks. By filling out the survey, you’ll help us get a better understanding of the chips our customers are looking to rent.
There are separate charges for active rental (GPU), storage, and bandwidth. These details will all be provided to you via email once the GPUs become available.
If the terms meet your needs, you are charged the base active rental cost for every second your instance is in the active/connected state. You are charged the storage cost (which depends on the size of your storage allocation) for every second your instance exists and is online, regardless of its state (active, inactive, loading, etc.). You are charged bandwidth prices for every byte sent to or received from the instance, in any state. Base rental, storage, and bandwidth prices vary considerably from machine to machine, so be sure to check them. You are not charged active rental or storage costs for instances that are currently offline.
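The billing rules above can be sketched as a simple cost estimate. This is a hypothetical illustration only: the function name, rates, and numbers below are assumptions for the sake of example, not Build AI's actual prices.

```python
# Hypothetical sketch of the billing model described above.
# All rates and figures are illustrative assumptions, not real prices.

def estimate_cost(active_seconds, online_seconds, bytes_transferred,
                  rental_rate_per_s, storage_rate_per_s, bandwidth_rate_per_byte):
    """Estimate total charges for one instance.

    - Active rental is billed per second the instance is active/connected.
    - Storage is billed per second the instance exists and is online,
      regardless of state (active, inactive, loading, etc.).
    - Bandwidth is billed per byte sent or received, in any state.
    - Offline instances accrue no rental or storage charges.
    """
    rental = active_seconds * rental_rate_per_s
    storage = online_seconds * storage_rate_per_s
    bandwidth = bytes_transferred * bandwidth_rate_per_byte
    return rental + storage + bandwidth

# Example: 2 hours active out of 24 hours online, 10 GB transferred,
# with made-up rates.
total = estimate_cost(
    active_seconds=2 * 3600,
    online_seconds=24 * 3600,
    bytes_transferred=10 * 10**9,
    rental_rate_per_s=0.0005,       # hypothetical: ~$1.80/hr active rental
    storage_rate_per_s=0.00001,     # hypothetical storage rate
    bandwidth_rate_per_byte=1e-11,  # hypothetical bandwidth rate
)
# rental = $3.60, storage = $0.864, bandwidth = $0.10 -> total = $4.564
```

Since rates vary considerably from machine to machine, plugging each machine's listed prices into a calculation like this is a quick way to compare total cost, not just the headline hourly rental rate.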
The other webpage that you might have seen, linked here, is a separate business model that we’re pursuing in parallel. The driving thesis is that training an AI model looks a lot different than most other data center workloads: 1) high energy consumption, 2) latency isn’t as much of a factor, and 3) neither is uptime and 24/7 reliability (since models can be paused at their checkpoints).
This insight has us planning to deploy (modular) data centers in remote parts of the country (e.g. W. Texas, N. Dakota) with cheap/excess renewable energy to power them. We’ll operate our servers dynamically, so when power prices are high and the grid is dirty, we'll pause the models being trained at their checkpoints.
This solution will drastically lower the cost and environmental impact of training AI models.