Kubetorch is the easiest way to execute ML workloads on Kubernetes at any scale. Simply write regular, undecorated Python programs, define the compute resources and environment you need, and dispatch them to run on your remote cluster with `.to()`, or with a decorator and `kubetorch deploy`.
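To make the dispatch pattern concrete, here is a minimal sketch. The Kubetorch-specific names (`kt.Compute`, `kt.fn`, and their parameters) are assumptions for illustration, not confirmed API, so those calls are shown as comments; the function itself is plain Python.

```python
# A regular, undecorated Python function -- nothing Kubetorch-specific.
def preprocess(batch):
    """Double every value in a batch."""
    return [x * 2 for x in batch]

# With a kubeconfig in place, dispatching it to the cluster might look like
# (hypothetical API names, shown for illustration only):
#
#   import kubetorch as kt
#
#   compute = kt.Compute(cpus=2)                       # requested resources
#   remote_preprocess = kt.fn(preprocess).to(compute)  # dispatch with .to()
#   remote_preprocess([1, 2, 3])                       # runs on the cluster

# Locally, the function behaves like any other Python callable:
print(preprocess([1, 2, 3]))  # → [2, 4, 6]
```

The key point is that the function stays ordinary Python; compute requirements live alongside it rather than inside it, so the same code runs locally or remotely.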
Kubetorch is a generational improvement over existing systems such as Kubeflow or custom CD pipelines.
In the examples, you will see a range of ML applications, from training to inference, hyperparameter optimization, and batch data processing. We have many other examples; just send us a ping if you'd like to see anything specific!
Kubetorch is deployed onto your own Kubernetes clusters via a Helm chart, and any end user (or system) with a kubeconfig can use the Kubetorch Python client to interact with powerful remote compute from any Python interpreter. If you do not use Kubernetes today, we have Terraform examples that provide reasonable defaults for EKS, GKE, and AKS.
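For orientation, installing a Helm chart onto an existing cluster generally looks like the following. The repository URL, chart name, and release name below are placeholders, not the actual Kubetorch artifacts (those are shared during beta onboarding).

```shell
# Placeholder coordinates -- the real chart details are provided at onboarding.
helm repo add kubetorch https://example.com/charts   # hypothetical repo URL
helm repo update

# Install into its own namespace on the cluster your kubeconfig points at.
helm install kubetorch kubetorch/kubetorch \
  --namespace kubetorch --create-namespace
```

Once the chart is installed, anyone with a kubeconfig for the cluster can use the Python client; no per-user server setup is required.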
We are currently in a private beta. If you are interested in trying it out, shoot us a quick note at support@run.house and we will share the required deployment resources with you.