video_generator
VideoGenerator module for FastVideo.
This module provides a consolidated interface for generating videos using diffusion models.
Classes
fastvideo.entrypoints.video_generator.VideoGenerator
VideoGenerator(fastvideo_args: FastVideoArgs, executor_class: type[Executor], log_stats: bool)
A unified class for generating videos using diffusion models.
This class provides a simple interface for video generation with rich customization options, similar to popular frameworks like HF Diffusers.
Initialize the video generator.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fastvideo_args` | `FastVideoArgs` | The inference arguments | required |
| `executor_class` | `type[Executor]` | The executor class to use for inference | required |
Source code in fastvideo/entrypoints/video_generator.py
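For orientation before the per-method reference below, a minimal end-to-end sketch. The model identifier and prompt are illustrative, and only parameters documented on this page are used:

```python
import torch
from fastvideo.entrypoints.video_generator import VideoGenerator

# Illustrative model identifier; substitute any diffusion model supported by FastVideo.
generator = VideoGenerator.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

generator.generate_video(
    prompt="A koala strolling through a eucalyptus forest at sunrise",
    output_path="outputs/",  # where the video is written when save_video=True
    save_video=True,
)

generator.shutdown()
```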
Functions
fastvideo.entrypoints.video_generator.VideoGenerator.from_fastvideo_args
classmethod
from_fastvideo_args(fastvideo_args: FastVideoArgs) -> VideoGenerator
Create a video generator with the specified arguments.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fastvideo_args` | `FastVideoArgs` | The inference arguments | required |

Returns:

| Type | Description |
|---|---|
| `VideoGenerator` | The created video generator |
Source code in fastvideo/entrypoints/video_generator.py
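A hedged sketch of this entry point; the import location and field names of `FastVideoArgs` used here (`model_path`, `num_gpus`) are assumptions about that configuration class rather than something documented on this page:

```python
from fastvideo.fastvideo_args import FastVideoArgs  # assumed import location
from fastvideo.entrypoints.video_generator import VideoGenerator

# Assumption: FastVideoArgs carries the model path and parallelism settings.
args = FastVideoArgs(
    model_path="Wan-AI/Wan2.1-T2V-1.3B-Diffusers",  # illustrative identifier
    num_gpus=1,
)
generator = VideoGenerator.from_fastvideo_args(args)
```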
fastvideo.entrypoints.video_generator.VideoGenerator.from_pretrained
classmethod
from_pretrained(model_path: str, device: str | None = None, torch_dtype: dtype | None = None, **kwargs) -> VideoGenerator
Create a video generator from a pretrained model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_path` | `str` | Path or identifier for the pretrained model | required |
| `device` | `str \| None` | Device to load the model on (e.g., "cuda", "cuda:0", "cpu") | `None` |
| `torch_dtype` | `dtype \| None` | Data type for model weights (e.g., torch.float16) | `None` |
| `pipeline_config` | | Pipeline config to use for inference | required |
| `**kwargs` | | Additional arguments to customize model loading; any FastVideoArgs or PipelineConfig attribute can be set here. | `{}` |

Returns:

| Type | Description |
|---|---|
| `VideoGenerator` | The created video generator |
Priority level: Default pipeline config < User's pipeline config < User's kwargs
Source code in fastvideo/entrypoints/video_generator.py
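A sketch of the priority rule above: keyword arguments override a user-supplied pipeline config, which in turn overrides the model's default config. The extra keyword `num_gpus` is an assumed FastVideoArgs attribute, shown only to illustrate the `**kwargs` pass-through:

```python
import torch
from fastvideo.entrypoints.video_generator import VideoGenerator

generator = VideoGenerator.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",  # illustrative model identifier
    device="cuda:0",
    torch_dtype=torch.float16,
    # Any FastVideoArgs or PipelineConfig attribute can be passed via **kwargs;
    # these take precedence over pipeline_config and over the model defaults.
    num_gpus=1,  # assumed FastVideoArgs attribute, for illustration only
)
```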
fastvideo.entrypoints.video_generator.VideoGenerator.generate_video
generate_video(prompt: str | None = None, sampling_param: SamplingParam | None = None, **kwargs) -> dict[str, Any] | list[ndarray] | list[dict[str, Any]]
Generate a video based on the given prompt.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prompt` | `str \| None` | The prompt to use for generation (optional if prompt_path is provided) | `None` |
| `negative_prompt` | | The negative prompt to use (overrides the one in fastvideo_args) | required |
| `output_path` | | Path to save the video (overrides the one in fastvideo_args) | required |
| `prompt_path` | | Path to a prompt file | required |
| `save_video` | | Whether to save the video to disk | required |
| `return_frames` | | Whether to return the raw frames | required |
| `num_inference_steps` | | Number of denoising steps (overrides fastvideo_args) | required |
| `guidance_scale` | | Classifier-free guidance scale (overrides fastvideo_args) | required |
| `num_frames` | | Number of frames to generate (overrides fastvideo_args) | required |
| `height` | | Height of the generated video (overrides fastvideo_args) | required |
| `width` | | Width of the generated video (overrides fastvideo_args) | required |
| `fps` | | Frames per second for the saved video (overrides fastvideo_args) | required |
| `seed` | | Random seed for generation (overrides fastvideo_args) | required |
| `callback` | | Callback function called after each step | required |
| `callback_steps` | | Number of steps between each callback | required |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any] \| list[ndarray] \| list[dict[str, Any]]` | Either the output dictionary, a list of frames, or a list of results for batch processing |
Source code in fastvideo/entrypoints/video_generator.py
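A sketch of a single-prompt call using several of the documented overrides; the values are illustrative. For batch generation, pass `prompt_path` pointing at a text file of prompts instead of `prompt`, and a list of results is returned:

```python
result = generator.generate_video(
    prompt="A sailboat gliding across a calm lake at dusk",
    negative_prompt="blurry, distorted, low quality",
    num_inference_steps=30,   # denoising steps
    guidance_scale=6.0,       # classifier-free guidance
    num_frames=81,
    height=480,
    width=832,
    fps=16,
    seed=42,
    output_path="outputs/",
    save_video=True,
    return_frames=False,      # return the output dict rather than raw frames
)
```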
fastvideo.entrypoints.video_generator.VideoGenerator.shutdown
fastvideo.entrypoints.video_generator.VideoGenerator.unmerge_lora_weights
Use the unmerged weights for inference so that generated videos align with the validation videos produced during training.
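A minimal sketch, assuming the generator was created from a checkpoint with LoRA weights merged in:

```python
# Switch to the unmerged base weights so generations match the validation
# videos produced during training.
generator.unmerge_lora_weights()

generator.generate_video(
    prompt="A hummingbird hovering over a blooming cactus",
    output_path="outputs/",
    save_video=True,
)
```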