fastvideo.v1.pipelines.pipeline_batch_info#

Data structures for functional pipeline processing.

This module defines the dataclasses used to pass state between pipeline components in a functional manner, reducing the need for explicit parameter passing.

Module Contents#

Classes#

ForwardBatch

Complete state passed through the pipeline execution.

API#

class fastvideo.v1.pipelines.pipeline_batch_info.ForwardBatch[source]#

Complete state passed through the pipeline execution.

This dataclass contains all information needed during the diffusion pipeline execution, allowing methods to update specific components without needing to manage numerous individual parameters.
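The pattern can be sketched with a simplified stand-in. The field names below mirror a few of the attributes documented later on this page, but this is an illustrative minimal dataclass, not the real `ForwardBatch` (which carries `torch` tensors and many more fields):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class MiniForwardBatch:
    # Simplified stand-in for ForwardBatch: one mutable bag of pipeline state.
    prompt: Optional[str] = None
    prompt_embeds: List[Any] = field(default_factory=list)
    num_inference_steps: int = 50
    guidance_scale: float = 1.0
    do_classifier_free_guidance: bool = False
    extra: Dict[str, Any] = field(default_factory=dict)

def encode_prompt(batch: MiniForwardBatch) -> MiniForwardBatch:
    # Each stage reads and updates only the fields it needs, then
    # returns the whole batch instead of many individual parameters.
    batch.prompt_embeds.append(f"embedding({batch.prompt})")
    return batch

batch = MiniForwardBatch(prompt="a cat", guidance_scale=7.5)
batch.do_classifier_free_guidance = batch.guidance_scale > 1.0
batch = encode_prompt(batch)
```

Because every stage shares the same object, adding a new piece of state means adding one field here rather than threading a new argument through every function signature.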

batch_size: Optional[int][source]#

None

clip_embedding_neg: Optional[List[torch.Tensor]][source]#

None

clip_embedding_pos: Optional[List[torch.Tensor]][source]#

None

data_type: str[source]#

None

do_classifier_free_guidance: bool[source]#

False

enable_teacache: bool[source]#

False

eta: float[source]#

0.0

extra: Dict[str, Any][source]#

'field(…)'

extra_step_kwargs: Dict[str, Any][source]#

'field(…)'

fps: Optional[int][source]#

None

generator: Optional[Union[torch.Generator, List[torch.Generator]]][source]#

None

guidance_rescale: float[source]#

0.0

guidance_scale: float[source]#

1.0

height: Optional[int][source]#

None

height_latents: Optional[int][source]#

None

image_embeds: List[torch.Tensor][source]#

'field(…)'

image_latent: Optional[torch.Tensor][source]#

None

image_path: Optional[str][source]#

None

is_prompt_processed: bool[source]#

False

latents: Optional[torch.Tensor][source]#

None

max_sequence_length: Optional[int][source]#

None

modules: Dict[str, Any][source]#

'field(…)'

n_tokens: Optional[int][source]#

None

negative_attention_mask: Optional[List[torch.Tensor]][source]#

None

negative_prompt: Optional[Union[str, List[str]]][source]#

None

negative_prompt_embeds: Optional[List[torch.Tensor]][source]#

None

noise_pred: Optional[torch.Tensor][source]#

None

num_frames: int[source]#

1

num_frames_round_down: bool[source]#

False

num_inference_steps: int[source]#

50

num_videos_per_prompt: int[source]#

1

output: Any[source]#

None

output_path: str[source]#

'outputs/'

prompt: Optional[Union[str, List[str]]][source]#

None

prompt_attention_mask: Optional[List[torch.Tensor]][source]#

None

prompt_embeds: List[torch.Tensor][source]#

'field(…)'

prompt_path: Optional[str][source]#

None

prompt_template: Optional[Dict[str, Any]][source]#

None

return_frames: bool[source]#

False

save_video: bool[source]#

True

seed: Optional[int][source]#

None

seeds: Optional[List[int]][source]#

None

sigmas: Optional[List[float]][source]#

None

step_index: Optional[int][source]#

None

teacache_params: Optional[fastvideo.v1.configs.sample.teacache.TeaCacheParams | fastvideo.v1.configs.sample.teacache.WanTeaCacheParams][source]#

None

timestep: Optional[Union[torch.Tensor, float, int]][source]#

None

timesteps: Optional[torch.Tensor][source]#

None

width: Optional[int][source]#

None

width_latents: Optional[int][source]#

None
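Several of these fields interact during denoising. Assuming the conventional classifier-free guidance formula (an assumption; the exact combination used by each FastVideo pipeline, including how `guidance_rescale` is applied, lives in the pipeline code), `do_classifier_free_guidance` gates whether the combination runs at all and `guidance_scale` weights it:

```python
from typing import List

def apply_cfg(noise_uncond: List[float],
              noise_cond: List[float],
              guidance_scale: float) -> List[float]:
    # Standard classifier-free guidance: push the prediction away from the
    # unconditional branch toward the conditional one. A sketch on plain
    # floats; the real pipeline operates on torch tensors.
    return [u + guidance_scale * (c - u)
            for u, c in zip(noise_uncond, noise_cond)]

combined = apply_cfg([0.0, 1.0], [1.0, 2.0], guidance_scale=2.0)
```

With `guidance_scale` at its default of 1.0 the combination reduces to the conditional prediction, which is consistent with `do_classifier_free_guidance` defaulting to `False`.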