🔧 Installation#
FastVideo currently only supports Linux and NVIDIA CUDA GPUs.
Requirements#
OS: Linux
Python: 3.10-3.12
CUDA: 12.4
GPU: at least 1 NVIDIA GPU
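Before proceeding, you can run a quick sanity check that your driver and CUDA toolkit meet these requirements (nvcc is only available if the CUDA toolkit is installed locally):
# Check the NVIDIA driver and visible GPUs
nvidia-smi
# Check the CUDA toolkit version (if installed)
nvcc --version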
Set up using Python#
Create a new Python environment#
Conda#
You can create a new Python environment using Conda:
1. Install Miniconda (if not already installed)#
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc
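If the installer finished successfully, conda should now be on your PATH, which you can confirm before creating the environment:
# Should print the installed conda version
conda --version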
2. Create and activate a Conda environment for FastVideo#
# (Recommended) Create a new conda environment.
conda create -n fastvideo python=3.12 -y
conda activate fastvideo
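You can confirm that the environment is active and uses the expected interpreter:
# Should report Python 3.12.x from the fastvideo environment
python --version
which python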
Note
PyTorch has deprecated its conda release channel. If you use conda, please use it only to create the Python environment rather than to install packages.
uv#
Tip
We highly recommend using uv to install FastVideo. In our experience, uv speeds up installation by at least 3x.
Alternatively, you can create a new Python environment using uv, a very fast Python environment manager. Please follow the documentation to install uv. After installing uv, you can create a new Python environment using the following command:
# (Recommended) Create a new uv environment. Use `--seed` to install `pip` and `setuptools` in the environment.
uv venv --python 3.12 --seed
source .venv/bin/activate
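As with conda, you can check that the virtual environment's interpreter is the one in use:
# Should point to .venv/bin/python and report Python 3.12.x
which python
python --version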
Installation#
pip install fastvideo
# or if you are using uv
uv pip install fastvideo
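To confirm the package installed correctly, you can query it with pip:
pip show fastvideo
# or if you are using uv
uv pip show fastvideo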
Optionally, you can also install flash-attn:
pip install flash-attn==2.7.4.post1 --no-build-isolation
Installation from Source#
1. Clone the FastVideo repository#
git clone https://github.com/hao-ai-lab/FastVideo.git && cd FastVideo
2. Install FastVideo#
Basic installation:
pip install -e .
# or if you are using uv
uv pip install -e .
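With an editable install, importing fastvideo should resolve to the files in your cloned repository, which you can verify as a quick check:
# Should print a path inside the FastVideo checkout
python -c "import fastvideo; print(fastvideo.__file__)"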
Optional Dependencies#
Flash Attention#
pip install flash-attn==2.7.4.post1 --no-build-isolation
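To check that the extension built and imports correctly, you can try importing it (this assumes the flash_attn package exposes __version__, as recent releases do):
python -c "import flash_attn; print(flash_attn.__version__)"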
Set up using Docker#
We also provide prebuilt Docker images with FastVideo dependencies pre-installed: Docker Images
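As a rough sketch of how such an image can be launched (assuming the NVIDIA Container Toolkit is installed; <fastvideo-image> is a placeholder, so substitute an actual tag from the Docker Images page):
# <fastvideo-image> is a placeholder; see the Docker Images page for real tags
docker run --gpus all -it --rm <fastvideo-image> bash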
Development Environment Setup#
If you're planning to contribute to FastVideo, please see the following page: Contributor Guide
Hardware Requirements#
For Basic Inference#
NVIDIA GPU with CUDA 12.4 support
For LoRA Finetuning#
40GB GPU memory each for 2 GPUs with LoRA
30GB GPU memory each for 2 GPUs with CPU offload and LoRA
For Full Finetuning/Distillation#
Multiple high-memory GPUs recommended (e.g., H100)
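To see whether your GPUs meet these memory requirements, you can query them with nvidia-smi:
# Lists each GPU's name and total memory
nvidia-smi --query-gpu=name,memory.total --format=csv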
Troubleshooting#
If you encounter any issues during installation, please open an issue on our GitHub repository.
You can also join our Slack community for additional support.