Run AI models with
a single YAML
VESSL Run brings a unified YAML interface for training, fine-tuning, and scaling
your own Stable Diffusion, LLaMA, and more — on any cloud, with a single command.


Run any model
in seconds
VESSL’s unified YAML definition lets you run any model, from generative AI models to award-winning academic papers, all with a single command.
The latest open-source models
that just work
VESSL Run condenses manual Python dependency and CUDA configuration into a single YAML definition, so you can spend more time iterating on your models. Start training by pointing to GitHub repositories, our custom Docker images, and your datasets.

name: stable-diffusion
resources:
  cluster: aws
  accelerators: V100:4
run:
  - workdir: /root/Stable-Diffusion
    command: |
      python scripts/stable_txt2img.py

Launch projects from Git repos
Use public, open-source models as a starting point for your projects simply by pointing to a GitHub repository.
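As a minimal sketch, reusing the git:// volume syntax from the Dreambooth example on this page (the mount path and repository are illustrative):

```yaml
# Mount a public GitHub repository into the Run's filesystem
# and use it as the working directory for your command.
name: from-git
volumes:
  /root/Stable-Diffusion: git://github.com/XavierXiao/Dreambooth-Stable-Diffusion
run:
  - workdir: /root/Stable-Diffusion
    command: |
      python scripts/stable_txt2img.py
```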

Bring your own datasets
Mount your datasets from the cloud or on-prem storage and fine-tune your model with the same YAML definition.
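For instance, a cloud dataset is just another entry in the same `volumes:` mapping (bucket path taken from this page's Dreambooth example; your own path would differ):

```yaml
# Mount an S3 dataset into the Run at /input/dataset.
volumes:
  /input/dataset: s3://public-bucket/dreambooth/stable_diffusion/
```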

Custom Docker Images
Our pre-built Docker images remove the need to manually install and configure environment dependencies.
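Selecting a pre-built image is a single line in the YAML (image tag taken from this page's example):

```yaml
# Use a pre-built VESSL image so CUDA and Python
# dependencies come preinstalled.
image: vessl-ai/ngc-20.10:v1
```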
Ready for scale
Fine-tune and deploy models at scale on any cloud without worrying
about complex compute backends and system details.
name: stable-diffusion
volumes:
  /input/dataset: s3://public-bucket/dreambooth/stable_diffusion/
  /root/Stable-Diffusion: git://github.com/XavierXiao/Dreambooth-Stable-Diffusion
image: vessl-ai/ngc-20.10:v1
run:
  - workdir: /root/Stable-Diffusion
    command: |
      conda env create -f environment.yaml
      conda activate ldm
      mkdir data/
env:
  class_word: vessl_logo
  prompt: "a photo of vessl logo"

Train on multiple clouds
Launch your Run on any cloud with managed spot instances, elastic scaling, and resource monitoring.




Bring your own GPUs
Set up an unlimited number of Kubernetes-backed on-prem GPU clusters with a single command.
- Automatic scaling
- GPU optimization
- Batch scheduling
- Distributed training
- Termination protection
- Pay by the second
Go beyond training
Add snippets to your existing YAML definition to deploy a fine-tuned model as a micro app, or define Runs as individual steps in a CI/CD pipeline.

Create your own micro AI app
Open up a port to your Run and integrate tools like Streamlit and Gradio to create interactive apps.
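A sketch of what this could look like; the `ports:` field name is an assumption for illustration, not confirmed VESSL syntax:

```yaml
# Hypothetical sketch: expose a port on the Run and serve an
# interactive demo. The "ports" key is an assumed field name.
name: sd-demo
image: vessl-ai/ngc-20.10:v1
ports:
  - 7860
run:
  - workdir: /root/Stable-Diffusion
    command: |
      python app.py  # e.g. a Gradio app listening on port 7860
```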
Automate with CI/CD
Orchestrate every step of ML from data ingestion to model deployment with a single YAML file.
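A hypothetical sketch of chaining Runs into a pipeline; the `steps:` structure and step names are illustrative assumptions, not confirmed VESSL syntax:

```yaml
# Hypothetical sketch: each step reuses the same Run definition
# shape shown elsewhere on this page. "steps" is an assumed key.
name: sd-pipeline
steps:
  - name: fine-tune
    run:
      - workdir: /root/Stable-Diffusion
        command: |
          python main.py --base configs/train.yaml
  - name: deploy
    run:
      - workdir: /root/Stable-Diffusion
        command: |
          python serve.py
```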
Get started in
under a minute
Get started right from your terminal and train your first nanoGPT with free VESSL Run credits.
Sign up today and receive 5 free hours on high-performance GPU instances.
