V1 API#
FastVideo’s V1 API provides a streamlined interface for video generation tasks with powerful customization options. This page documents the primary components of the API.
Video Generator#
This class will be the primary Python API for generating videos and images.
- class VideoGenerator(fastvideo_args: fastvideo.v1.fastvideo_args.FastVideoArgs, executor_class: type[fastvideo.v1.worker.executor.Executor], log_stats: bool)#
A unified class for generating videos using diffusion models.
This class provides a simple interface for video generation with rich customization options, similar to popular frameworks like HF Diffusers.
VideoGenerator.from_pretrained()
should be the primary way of creating a new video generator.
- classmethod from_pretrained(model_path: str, device: Optional[str] = None, torch_dtype: Optional[torch.dtype] = None, pipeline_config: Optional[Union[str, fastvideo.v1.configs.pipelines.PipelineConfig]] = None, **kwargs) → fastvideo.v1.entrypoints.video_generator.VideoGenerator#
Create a video generator from a pretrained model.
- Parameters:
model_path – Path or identifier for the pretrained model
device – Device to load the model on (e.g., “cuda”, “cuda:0”, “cpu”)
torch_dtype – Data type for model weights (e.g., torch.float16)
pipeline_config – Pipeline configuration, given either as a path to a config file or as a PipelineConfig object
**kwargs – Additional arguments to customize model loading
- Returns:
The created video generator
Priority level (lowest to highest): default pipeline config < user’s pipeline config < user’s kwargs
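The priority order above can be sketched as a layered merge, lowest priority first. This is an illustrative sketch only; the actual resolution happens inside from_pretrained, and the field names below are hypothetical:

```python
# Illustrative sketch of config priority: default pipeline config
# < user's pipeline config < user's kwargs. Field names are made up.
default_config = {"num_inference_steps": 50, "guidance_scale": 6.0, "fps": 24}
user_pipeline_config = {"guidance_scale": 7.5}  # overrides the default
user_kwargs = {"fps": 30}                       # overrides both lower layers

# In a dict merge, later entries win, so kwargs take the highest priority.
resolved = {**default_config, **user_pipeline_config, **user_kwargs}
print(resolved)  # {'num_inference_steps': 50, 'guidance_scale': 7.5, 'fps': 30}
```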
Configuring FastVideo#
The following two classes, PipelineConfig
and SamplingParam,
are used to configure initialization and sampling parameters, respectively.
PipelineConfig#
SamplingParam#
- class SamplingParam#
Sampling parameters for video generation.
- classmethod from_pretrained(model_path: str) → fastvideo.v1.configs.sample.base.SamplingParam #
Create a SamplingParam object populated with the default sampling parameters for a pretrained model.
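To illustrate the intended pattern, here is a minimal dataclass sketch of loading model defaults and then overriding individual fields per call. The field names, defaults, and class name are invented for illustration; the real class lives in fastvideo.v1.configs.sample.base:

```python
from dataclasses import dataclass, replace

@dataclass
class SamplingParamSketch:
    """Hypothetical stand-in for SamplingParam; fields are illustrative."""
    num_inference_steps: int = 50
    guidance_scale: float = 6.0
    num_frames: int = 16

    @classmethod
    def from_pretrained(cls, model_path: str) -> "SamplingParamSketch":
        # A real implementation would read the model's recommended sampling
        # defaults from model_path; this sketch just returns the defaults.
        return cls()

# Load the model's default sampling parameters, then override one field.
param = SamplingParamSketch.from_pretrained("some/model")
param = replace(param, num_frames=32)
print(param.num_frames)  # 32
```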