fastvideo.v1.entrypoints.video_generator#

VideoGenerator module for FastVideo.

This module provides a consolidated interface for generating videos using diffusion models.

Module Contents#

Classes#

VideoGenerator

A unified class for generating videos using diffusion models.

Data#

API#

class fastvideo.v1.entrypoints.video_generator.VideoGenerator(fastvideo_args: fastvideo.v1.fastvideo_args.FastVideoArgs, executor_class: type[fastvideo.v1.worker.executor.Executor], log_stats: bool)[source]#

A unified class for generating videos using diffusion models.

This class provides a simple interface for video generation with rich customization options, similar to popular frameworks like HF Diffusers.

Initialization

Initialize the video generator.

Parameters:
  • fastvideo_args – The inference arguments

  • executor_class – The Executor class used to run the pipeline workers

  • log_stats – Whether to collect and log inference statistics

classmethod from_fastvideo_args(fastvideo_args: fastvideo.v1.fastvideo_args.FastVideoArgs) fastvideo.v1.entrypoints.video_generator.VideoGenerator[source]#

Create a video generator with the specified arguments.

Parameters:
  • fastvideo_args – The inference arguments

Returns:

The created video generator

classmethod from_pretrained(model_path: str, device: Optional[str] = None, torch_dtype: Optional[torch.dtype] = None, pipeline_config: Optional[Union[str, fastvideo.v1.configs.pipelines.PipelineConfig]] = None, **kwargs) fastvideo.v1.entrypoints.video_generator.VideoGenerator[source]#

Create a video generator from a pretrained model.

Parameters:
  • model_path – Path or identifier for the pretrained model

  • device – Device to load the model on (e.g., "cuda", "cuda:0", "cpu")

  • torch_dtype – Data type for model weights (e.g., torch.float16)

  • pipeline_config – Pipeline configuration, supplied either as a path to a config file or as a PipelineConfig instance

  • **kwargs – Additional arguments to customize model loading

Returns:

The created video generator

Priority level: Default pipeline config < User's pipeline config < User's kwargs
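The priority ordering above behaves like a last-writer-wins dictionary merge. A minimal sketch with plain dicts (the dict names and keys here are hypothetical stand-ins, not FastVideo internals):

```python
# Illustrative sketch of the documented priority:
# default pipeline config < user's pipeline config < user's kwargs.
# These dicts are hypothetical stand-ins, not FastVideo's actual config objects.
default_config = {"num_inference_steps": 50, "guidance_scale": 7.5, "fps": 24}
user_config = {"guidance_scale": 5.0}       # overrides the default config
user_kwargs = {"num_inference_steps": 30}   # overrides both of the above

# Later dicts win on key collisions, matching the priority ordering.
effective = {**default_config, **user_config, **user_kwargs}
print(effective)
# {'num_inference_steps': 30, 'guidance_scale': 5.0, 'fps': 24}
```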

generate_video(prompt: str, sampling_param: Optional[fastvideo.v1.configs.sample.SamplingParam] = None, **kwargs) Union[Dict[str, Any], List[numpy.ndarray]][source]#

Generate a video based on the given prompt.

Parameters:
  • prompt – The prompt to use for generation

  • negative_prompt – The negative prompt to use (overrides the one in fastvideo_args)

  • output_path – Path to save the video (overrides the one in fastvideo_args)

  • save_video – Whether to save the video to disk

  • return_frames – Whether to return the raw frames

  • num_inference_steps – Number of denoising steps (overrides fastvideo_args)

  • guidance_scale – Classifier-free guidance scale (overrides fastvideo_args)

  • num_frames – Number of frames to generate (overrides fastvideo_args)

  • height – Height of generated video (overrides fastvideo_args)

  • width – Width of generated video (overrides fastvideo_args)

  • fps – Frames per second for saved video (overrides fastvideo_args)

  • seed – Random seed for generation (overrides fastvideo_args)

  • callback – Callback function called after each step

  • callback_steps – Number of steps between each callback

Returns:

Either the output dictionary or the list of frames depending on return_frames
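The return contract can be sketched with a toy stand-in (a hypothetical function mirroring only the documented return types; the real method runs the diffusion pipeline, which is elided here):

```python
from typing import Any, Dict, List, Union

import numpy as np


def generate_video_stub(prompt: str, num_frames: int = 4,
                        return_frames: bool = False
                        ) -> Union[Dict[str, Any], List[np.ndarray]]:
    """Hypothetical stand-in mirroring generate_video's documented
    return contract; actual diffusion sampling is elided."""
    # Placeholder 8x8 RGB frames instead of real generated content.
    frames = [np.zeros((8, 8, 3), dtype=np.uint8) for _ in range(num_frames)]
    if return_frames:
        return frames  # raw frames as a list of arrays
    return {"prompt": prompt, "frames": frames}  # output dictionary otherwise


frames = generate_video_stub("a cat", return_frames=True)
result = generate_video_stub("a cat")
print(type(frames), len(frames))
print(type(result), sorted(result))
```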

shutdown()[source]#

Shutdown the video generator.

fastvideo.v1.entrypoints.video_generator.logger[source]#

'init_logger(…)'