High availability

Machine learning projects have highly dynamic resource requirements. Experimenting with different settings in your models can require hundreds of servers at once, while during idle periods none are needed. Deploying a model as an API or as a streaming application requires resources that scale with the number of incoming requests. At the same time, if a server crashes it is essential that predictions can still be served and that batch jobs can be restarted. Automatically scaling servers and restarting crashed ones keeps costs under control while guaranteeing a certain level of speed and reliability. Because Cubonacci is built on the Kubernetes ecosystem, these high availability requirements are taken care of.
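As an illustration of how Kubernetes provides this kind of scaling and crash recovery, the sketch below pairs a Deployment with a HorizontalPodAutoscaler. All names (`model-api`, the container image) are hypothetical, not part of Cubonacci itself:

```yaml
# Hypothetical Deployment serving a trained model as an API.
# Kubernetes restarts crashed pods automatically and keeps
# the requested number of replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-api          # assumed name for illustration
spec:
  replicas: 2              # capacity remains even if one pod crashes
  selector:
    matchLabels:
      app: model-api
  template:
    metadata:
      labels:
        app: model-api
    spec:
      containers:
      - name: model-api
        image: registry.example.com/model-api:latest  # placeholder image
        resources:
          requests:
            cpu: 500m
---
# Scale between 2 and 100 replicas based on CPU utilization,
# which rises and falls with the number of incoming requests.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: model-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: model-api
  minReplicas: 2
  maxReplicas: 100
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

With this configuration, Kubernetes replaces any crashed pod on its own and adds or removes replicas as the request load changes, so capacity tracks demand without manual intervention.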