fastvideo.pipelines.pipeline_batch_info#
Data structures for functional pipeline processing.
This module defines the dataclasses used to pass state between pipeline components in a functional manner, reducing the need for explicit parameter passing.
Module Contents#
Classes#
- ForwardBatch: Complete state passed through the pipeline execution.
- PipelineLoggingInfo: Simple approach using OrderedDict to track stage metrics.
- TrainingBatch
API#
- class fastvideo.pipelines.pipeline_batch_info.ForwardBatch[source]#
Complete state passed through the pipeline execution.
This dataclass contains all information needed during the diffusion pipeline execution, allowing methods to update specific components without needing to manage numerous individual parameters.
- clip_embedding_neg: list[torch.Tensor] | None[source]#
None
- clip_embedding_pos: list[torch.Tensor] | None[source]#
None
- generator: torch.Generator | list[torch.Generator] | None[source]#
None
- image_embeds: list[torch.Tensor][source]#
field(…)
- image_latent: torch.Tensor | None[source]#
None
- latents: torch.Tensor | None[source]#
None
- logging_info: fastvideo.pipelines.pipeline_batch_info.PipelineLoggingInfo[source]#
field(…)
- negative_attention_mask: list[torch.Tensor] | None[source]#
None
- negative_prompt_embeds: list[torch.Tensor] | None[source]#
None
- noise_pred: torch.Tensor | None[source]#
None
- pil_image: torch.Tensor | PIL.Image.Image | None[source]#
None
- preprocessed_image: torch.Tensor | None[source]#
None
- prompt_attention_mask: list[torch.Tensor] | None[source]#
None
- prompt_embeds: list[torch.Tensor][source]#
field(…)
- raw_latent_shape: torch.Tensor | None[source]#
None
- teacache_params: fastvideo.configs.sample.teacache.TeaCacheParams | fastvideo.configs.sample.teacache.WanTeaCacheParams | None[source]#
None
- timesteps: torch.Tensor | None[source]#
None
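The fields above are updated in place as the batch flows through pipeline stages. The sketch below illustrates that pattern with a hypothetical stand-in dataclass (the real `ForwardBatch` fields are `torch.Tensor`-typed; `Any` is used here so the example is self-contained, and `denoise_stage` is an invented stage name, not part of the fastvideo API):

```python
from dataclasses import dataclass, field
from typing import Any, Optional

# Hypothetical stand-in mirroring a few ForwardBatch fields.
@dataclass
class MiniForwardBatch:
    prompt_embeds: list[Any] = field(default_factory=list)
    latents: Optional[Any] = None
    timesteps: Optional[Any] = None

def denoise_stage(batch: MiniForwardBatch) -> MiniForwardBatch:
    # A stage mutates only the fields it owns and returns the batch,
    # so all other state flows through without explicit parameters.
    batch.latents = "denoised"  # placeholder for a tensor update
    return batch

batch = MiniForwardBatch(prompt_embeds=["emb"])
batch = denoise_stage(batch)
print(batch.latents)  # -> denoised
```

Because every stage takes and returns the same batch object, adding a new field does not change any stage signatures.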
- class fastvideo.pipelines.pipeline_batch_info.PipelineLoggingInfo[source]#
Simple approach using OrderedDict to track stage metrics.
Initialization
- add_stage_execution_time(stage_name: str, execution_time: float)[source]#
Add execution time for a stage.
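A minimal sketch of the OrderedDict-based tracking described above; the method name matches the documented API, but the internal attribute name and accumulation behavior are assumptions:

```python
from collections import OrderedDict

class MiniLoggingInfo:
    """Sketch of PipelineLoggingInfo: per-stage timings in insertion order."""

    def __init__(self) -> None:
        # OrderedDict preserves the order in which stages first report.
        self.stage_metrics: "OrderedDict[str, float]" = OrderedDict()

    def add_stage_execution_time(self, stage_name: str,
                                 execution_time: float) -> None:
        # Accumulate wall-clock seconds spent in a stage.
        self.stage_metrics[stage_name] = (
            self.stage_metrics.get(stage_name, 0.0) + execution_time)

log = MiniLoggingInfo()
log.add_stage_execution_time("text_encoding", 0.12)
log.add_stage_execution_time("denoising", 3.40)
print(list(log.stage_metrics))  # stages in the order they first ran
```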
- class fastvideo.pipelines.pipeline_batch_info.TrainingBatch[source]#
- encoder_attention_mask: torch.Tensor | None[source]#
None
- encoder_attention_mask_neg: torch.Tensor | None[source]#
None
- image_embeds: torch.Tensor | None[source]#
None
- image_latents: torch.Tensor | None[source]#
None
- latents: torch.Tensor | None[source]#
None
- loss: torch.Tensor | None[source]#
None
- mask_lat_size: torch.Tensor | None[source]#
None
- noise: torch.Tensor | None[source]#
None
- noise_latents: torch.Tensor | None[source]#
None
- noisy_model_input: torch.Tensor | None[source]#
None
- preprocessed_image: torch.Tensor | None[source]#
None
- raw_latent_shape: torch.Tensor | None[source]#
None
- sigmas: torch.Tensor | None[source]#
None
- timesteps: torch.Tensor | None[source]#
None
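`TrainingBatch` follows the same pattern: a training step fills in intermediates (`noise`, `noisy_model_input`) and finally `loss`. The sketch below uses a hypothetical stand-in with plain floats in place of tensors, and an invented `training_step` function; it shows only the field-population pattern, not fastvideo's actual training math:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for a few TrainingBatch fields
# (real types are torch.Tensor | None).
@dataclass
class MiniTrainingBatch:
    latents: Optional[float] = None
    noise: Optional[float] = None
    noisy_model_input: Optional[float] = None
    loss: Optional[float] = None

def training_step(batch: MiniTrainingBatch) -> MiniTrainingBatch:
    # Populate intermediates, then the loss, all on the one batch object.
    batch.noise = 0.5                                  # stand-in for sampled noise
    batch.noisy_model_input = (batch.latents or 0.0) + batch.noise
    batch.loss = (batch.noisy_model_input - batch.noise) ** 2
    return batch

batch = training_step(MiniTrainingBatch(latents=1.0))
print(batch.loss)  # -> 1.0
```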