🛠️ Contributing to FastVideo
Thank you for your interest in contributing to FastVideo. We want the process to be smooth and beginner‑friendly, whether you are adding a new pipeline, improving performance, or fixing a bug.
Quick prerequisites
- OS: Linux is the primary development target (WSL can work).
- GPU: NVIDIA GPU recommended for inference and training workflows.
- CUDA: Use a recent CUDA 12.x toolchain (see the installation guide for the current recommendation).
For a full install checklist, see docs/getting_started/installation/gpu.md.
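To quickly verify that your driver and CUDA toolkit are visible before setting up the environment, you can run:

nvidia-smi
nvcc --version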
Local development (Conda + editable install)
Install Miniconda:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc
Create and activate a Conda environment:
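For example (the environment name fastvideo and Python 3.12 below are illustrative; check the installation guide for the currently supported Python version):

conda create -n fastvideo python=3.12 -y
conda activate fastvideo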
Install uv (optional, but recommended):
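uv can be installed into the active environment with pip (the standalone installer from astral.sh works as well):

pip install uv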
Clone the repo:
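The URL below assumes the upstream repository; if you plan to open a pull request, clone your fork instead:

git clone https://github.com/hao-ai-lab/FastVideo.git
cd FastVideo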
Install FastVideo in editable mode and set up hooks:
uv pip install -e ".[dev]"
# Optional: FlashAttention (builds native kernels)
uv pip install flash-attn --no-build-isolation
# Linting, formatting, static typing
pre-commit install --hook-type pre-commit --hook-type commit-msg
pre-commit run --all-files
# Unit tests
pytest tests/
If you are on a Hopper GPU, installing FlashAttention 3 can improve
performance (see docs/inference/optimizations.md).
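As a rough sketch, FlashAttention 3 is currently built from the hopper/ directory of the upstream flash-attention repository; check that repository's README and docs/inference/optimizations.md for the current steps:

git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention/hopper
python setup.py install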
Docker development (optional)
If you prefer a containerized environment, use the dev image documented in
docs/contributing/developer_env/docker.md.
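The workflow generally looks like the sketch below; the image name is a placeholder, so use the image and flags documented in the Docker guide:

docker run --gpus all -it -v "$(pwd)":/workspace <fastvideo-dev-image> bash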
Testing
See the Testing Guide for how to add and run tests in FastVideo.
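While iterating locally, standard pytest selection is handy for running a subset of the suite (the file path below is illustrative):

# Run a single test file
pytest tests/path/to/test_file.py
# Run only tests whose names match a keyword
pytest tests/ -k "keyword"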
Attention backend development
If you are adding a new attention kernel or backend, follow Attention Backend Development.
Contributing with coding agents
For a step‑by‑step workflow on adding pipelines or components with coding
agents, see docs/contributing/coding_agents.md.