basic
¶
Basic inference pipelines for fastvideo.
This package contains basic pipelines for video and image generation.
Modules¶
fastvideo.pipelines.basic.cosmos
¶
Modules¶
fastvideo.pipelines.basic.cosmos.cosmos2_5_pipeline
¶
Cosmos 2.5 pipeline entry (staged pipeline).
Classes¶
fastvideo.pipelines.basic.cosmos.cosmos2_5_pipeline.Cosmos2_5Pipeline
¶Cosmos2_5Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Cosmos 2.5 video generation pipeline.
Source code in fastvideo/pipelines/composed_pipeline_base.py
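The constructor signature above is shared by the composed pipelines in this package. A minimal instantiation sketch based only on that signature; the FastVideoArgs import path and constructor keyword are assumptions, not taken from this page:

```python
# Illustrative only: import location and FastVideoArgs fields are assumptions;
# only the Cosmos2_5Pipeline signature above is documented here.
from fastvideo.fastvideo_args import FastVideoArgs  # assumed import path
from fastvideo.pipelines.basic.cosmos.cosmos2_5_pipeline import Cosmos2_5Pipeline

model_path = "path/to/cosmos-2.5-checkpoint"   # hypothetical local checkpoint
args = FastVideoArgs(model_path=model_path)    # assumed keyword argument

pipeline = Cosmos2_5Pipeline(
    model_path=model_path,
    fastvideo_args=args,
    required_config_modules=None,  # optional, per the signature above
    loaded_modules=None,           # optional; pass pre-loaded modules to reuse them
)
```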
Functions¶
fastvideo.pipelines.basic.cosmos.cosmos_pipeline
¶
Cosmos video diffusion pipeline implementation.
This module contains an implementation of the Cosmos video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.cosmos.cosmos_pipeline.Cosmos2VideoToWorldPipeline
¶Cosmos2VideoToWorldPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.cosmos.cosmos_pipeline.Cosmos2VideoToWorldPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/cosmos/cosmos_pipeline.py
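Every pipeline in this package defines create_pipeline_stages; it is where the pipeline assembles its stage objects and hands each one the components it depends on. The fastvideo stage API is not shown on this page, so the following is a generic dependency-injection sketch with hypothetical stage classes, not the actual ComposedPipelineBase internals:

```python
# Generic illustration of stage dependency injection; all names are hypothetical.
class TextEncodingStage:
    def __init__(self, text_encoder, tokenizer):
        self.text_encoder = text_encoder
        self.tokenizer = tokenizer

class DenoisingStage:
    def __init__(self, transformer, scheduler):
        self.transformer = transformer
        self.scheduler = scheduler

class ToyPipeline:
    def __init__(self, modules: dict):
        self.modules = modules
        self.stages = []

    def create_pipeline_stages(self) -> None:
        # Each stage receives only the modules it needs (dependency injection),
        # so stages stay testable and reusable across pipelines.
        self.stages.append(
            TextEncodingStage(self.modules["text_encoder"], self.modules["tokenizer"])
        )
        self.stages.append(
            DenoisingStage(self.modules["transformer"], self.modules["scheduler"])
        )

modules = {"text_encoder": object(), "tokenizer": object(),
           "transformer": object(), "scheduler": object()}
toy = ToyPipeline(modules)
toy.create_pipeline_stages()
print(len(toy.stages))  # 2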
Functions¶
fastvideo.pipelines.basic.hunyuan
¶
Modules¶
fastvideo.pipelines.basic.hunyuan.hunyuan_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan.hunyuan_pipeline.HunyuanVideoPipeline
¶HunyuanVideoPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan.hunyuan_pipeline.HunyuanVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan/hunyuan_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan15
¶
Modules¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_pipeline
¶
HunyuanVideo 1.5 video diffusion pipeline implementation.
This module contains an implementation of the HunyuanVideo 1.5 video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_pipeline.HunyuanVideo15Pipeline
¶HunyuanVideo15Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_pipeline.HunyuanVideo15Pipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_pipeline.py
Functions¶
fastvideo.pipelines.basic.hyworld
¶
Modules¶
fastvideo.pipelines.basic.hyworld.hyworld_pipeline
¶
HYWorld video diffusion pipeline implementation.
This module contains an implementation of the HYWorld video diffusion pipeline using the modular pipeline architecture with HYWorld-specific denoising stage for chunk-based video generation with context frame selection.
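Chunk-based generation with context frame selection means the video is denoised one chunk of frames at a time, with a few already-generated frames carried forward as context for the next chunk. A schematic sketch of that control flow (frame counts and the context rule are illustrative assumptions, not the HYWorld implementation):

```python
import torch

def generate_in_chunks(total_frames: int, chunk_size: int = 16, context_frames: int = 4):
    """Toy illustration of chunked generation with context-frame carry-over."""
    video_chunks = []
    context = None

    for start in range(0, total_frames, chunk_size):
        frames = min(chunk_size, total_frames - start)
        # Placeholder for the real denoising call; here we just sample noise.
        chunk = torch.randn(frames, 3, 64, 64)
        if context is not None:
            # The real pipeline conditions the denoiser on the selected context
            # frames; this line only marks where that conditioning would enter.
            chunk[0] = context[-1]
        video_chunks.append(chunk)
        # Keep the last few frames of this chunk as context for the next one.
        context = chunk[-context_frames:]

    return torch.cat(video_chunks, dim=0)

video = generate_in_chunks(total_frames=48)
print(video.shape)  # torch.Size([48, 3, 64, 64])
```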
Classes¶
fastvideo.pipelines.basic.hyworld.hyworld_pipeline.HYWorldPipeline
¶HYWorldPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
HYWorld video diffusion pipeline.
This pipeline implements chunk-based video generation with context frame selection for 3D-aware generation using HYWorldDenoisingStage.
Note: HYWorld only uses a single LLM-based text encoder, unlike SDXL-style dual encoder setups. The text_encoder_2/tokenizer_2 are not used.
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hyworld.hyworld_pipeline.HYWorldPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with HYWorld-specific denoising stage.
Source code in fastvideo/pipelines/basic/hyworld/hyworld_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat
¶
LongCat pipeline module.
Classes¶
fastvideo.pipelines.basic.longcat.LongCatImageToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Image-to-Video pipeline.
Generates video from a single input image using Tier 3 I2V conditioning:
- Per-frame timestep masking (timestep[:, 0] = 0)
- num_cond_latents parameter to transformer
- RoPE skipping for conditioning frames
- Selective denoising (skip first frame in scheduler)
Source code in fastvideo/pipelines/lora_pipeline.py
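The per-frame timestep masking listed above (timestep[:, 0] = 0) is the core Tier 3 trick: the conditioning frame gets timestep 0, so it is effectively treated as already denoised while the remaining frames are still noisy. A minimal tensor sketch of that masking (shapes are illustrative):

```python
import torch

batch, num_frames = 2, 21
t = torch.full((batch, num_frames), 999.0)  # one timestep value per frame

# Tier 3 I2V conditioning: zero the timestep of the conditioning (first) frame
# so it is treated as clean, while the rest keep the current noise level.
t[:, 0] = 0

print(t[0, :4])  # tensor([  0., 999., 999., 999.])
```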
Functions¶
fastvideo.pipelines.basic.longcat.LongCatImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up I2V-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatImageToVideoPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.LongCatPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatVideoContinuationPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Video Continuation pipeline.
Generates video continuation from multiple conditioning frames using optional KV cache for 2-3x speedup.
Key features:
- Takes video input (13+ frames typically)
- Encodes conditioning frames via VAE
- Optionally pre-computes KV cache for conditioning
- Uses cached K/V during denoising for speedup
- Concatenates conditioning back after denoising
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.LongCatVideoContinuationPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up VC-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatVideoContinuationPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
Modules¶
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline
¶
LongCat Image-to-Video pipeline implementation.
This module implements I2V (Image-to-Video) generation for LongCat using Tier 3 conditioning with timestep masking, num_cond_latents support, and RoPE skipping.
Supports:
- Basic I2V (50 steps, guidance_scale=4.0)
- Distilled I2V with LoRA (16 steps, guidance_scale=1.0)
- Refinement I2V for 720p upscaling (with refinement LoRA + BSA)
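The supported modes above differ mainly in step count and guidance scale. A hedged configuration sketch using the documented numbers (the dictionary keys are illustrative, not a fastvideo config schema):

```python
# Illustrative presets built from the list above; keys are not a real schema.
LONGCAT_I2V_PRESETS = {
    "basic":     {"num_inference_steps": 50, "guidance_scale": 4.0, "lora": None},
    "distilled": {"num_inference_steps": 16, "guidance_scale": 1.0, "lora": "distill"},
    # Refinement mode uses a refinement LoRA + BSA; its step/guidance values
    # are not documented on this page, so they are left unset here.
    "refine_720p": {"num_inference_steps": None, "guidance_scale": None, "lora": "refinement"},
}

preset = LONGCAT_I2V_PRESETS["distilled"]
print(f"steps={preset['num_inference_steps']}, cfg={preset['guidance_scale']}")
```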
Classes¶
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline.LongCatImageToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Image-to-Video pipeline.
Generates video from a single input image using Tier 3 I2V conditioning:
- Per-frame timestep masking (timestep[:, 0] = 0)
- num_cond_latents parameter to transformer
- RoPE skipping for conditioning frames
- Selective denoising (skip first frame in scheduler)
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline.LongCatImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up I2V-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline.LongCatImageToVideoPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.longcat_pipeline
¶
LongCat video diffusion pipeline implementation.
This module implements the LongCat video diffusion pipeline using FastVideo's modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.longcat.longcat_pipeline.LongCatPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_pipeline.LongCatPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_pipeline.LongCatPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline
¶
LongCat Video Continuation (VC) pipeline implementation.
This module implements VC (Video Continuation) generation for LongCat with KV cache optimization for 2-3x speedup.
Supports:
- Basic VC (50 steps, guidance_scale=4.0)
- Distilled VC with LoRA (16 steps, guidance_scale=1.0)
- KV cache for conditioning frames
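The KV cache listed above amounts to running the conditioning tokens through the attention projections once, storing their keys and values, and reusing them at every denoising step instead of re-encoding them. A generic attention-level sketch of that idea (this is not the LongCat implementation; the projection shapes and names are assumptions):

```python
import torch
import torch.nn.functional as F

d = 64
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))

cond_tokens = torch.randn(1, 128, d)   # conditioning-frame tokens, fixed across steps
# Pre-compute K/V for the conditioning tokens once, outside the denoising loop.
cached_k = cond_tokens @ w_k
cached_v = cond_tokens @ w_v

for step in range(4):  # toy denoising loop
    new_tokens = torch.randn(1, 256, d)            # tokens for frames being denoised
    q = new_tokens @ w_q
    k = torch.cat([cached_k, new_tokens @ w_k], dim=1)  # reuse cached keys
    v = torch.cat([cached_v, new_tokens @ w_v], dim=1)  # reuse cached values
    out = F.scaled_dot_product_attention(q, k, v)
    print(f"step {step}: output shape {tuple(out.shape)}")
```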
Classes¶
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVCLatentPreparationStage
¶LongCatVCLatentPreparationStage(scheduler, transformer, use_btchw_layout: bool = False)
Bases: LongCatI2VLatentPreparationStage
Prepare latents with video conditioning for first N frames.
Extends I2V latent preparation to handle video_latent (multiple frames) instead of image_latent (single frame).
Source code in fastvideo/pipelines/stages/latent_preparation.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVCLatentPreparationStage.forward
¶Prepare latents with VC conditioning.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVideoContinuationPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Video Continuation pipeline.
Generates video continuation from multiple conditioning frames using optional KV cache for 2-3x speedup.
Key features:
- Takes video input (13+ frames typically)
- Encodes conditioning frames via VAE
- Optionally pre-computes KV cache for conditioning
- Uses cached K/V during denoising for speedup
- Concatenates conditioning back after denoising
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVideoContinuationPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up VC-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVideoContinuationPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
Functions¶
fastvideo.pipelines.basic.ltx2
¶
fastvideo.pipelines.basic.matrixgame
¶
fastvideo.pipelines.basic.stepvideo
¶
Modules¶
fastvideo.pipelines.basic.stepvideo.stepvideo_pipeline
¶
StepVideo video diffusion pipeline implementation.
This module contains an implementation of the StepVideo video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.stepvideo.stepvideo_pipeline.StepVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.stepvideo.stepvideo_pipeline.StepVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/stepvideo/stepvideo_pipeline.py
fastvideo.pipelines.basic.stepvideo.stepvideo_pipeline.StepVideoPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize the pipeline.
Source code in fastvideo/pipelines/basic/stepvideo/stepvideo_pipeline.py
fastvideo.pipelines.basic.stepvideo.stepvideo_pipeline.StepVideoPipeline.load_modules
¶load_modules(fastvideo_args: FastVideoArgs) -> dict[str, Any]
Load the modules from the config.
Source code in fastvideo/pipelines/basic/stepvideo/stepvideo_pipeline.py
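load_modules returns a plain dict mapping module names to loaded components, which the composed pipeline then presumably wires into its stages. A hedged toy sketch of that return shape; the module names and placeholder objects are illustrative, not the StepVideo implementation:

```python
from typing import Any

import torch.nn as nn

class ToyStepVideoPipeline:
    """Illustrative only: shows the dict-of-modules shape load_modules returns."""

    def load_modules(self, fastvideo_args) -> dict[str, Any]:
        # A real pipeline loads weights based on its config; this toy just
        # returns placeholder modules keyed by (hypothetical) module names.
        return {
            "text_encoder": nn.Identity(),
            "transformer": nn.Identity(),
            "vae": nn.Identity(),
            "scheduler": object(),
        }

modules = ToyStepVideoPipeline().load_modules(fastvideo_args=None)
print(sorted(modules))  # ['scheduler', 'text_encoder', 'transformer', 'vae']
```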
Functions¶
fastvideo.pipelines.basic.turbodiffusion
¶
Classes¶
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionI2VPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion I2V pipeline for 1-4 step image-to-video generation.
Uses RCM scheduler, SLA attention, and dual model switching for high-quality I2V generation.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionI2VPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_i2v_pipeline.py
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion video pipeline for 1-4 step generation.
Uses RCM scheduler and SLA attention for fast, high-quality video generation.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_pipeline.py
Modules¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_i2v_pipeline
¶
TurboDiffusion I2V (Image-to-Video) Pipeline Implementation.
This module contains an implementation of the TurboDiffusion I2V pipeline for 1-4 step image-to-video generation using rCM (recurrent Consistency Model) sampling with SLA (Sparse-Linear Attention).
Key differences from T2V:
- Uses dual models (high/low noise) with boundary switching
- sigma_max=200 (vs 80 for T2V)
- Mask conditioning with encoded first frame
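Dual-model boundary switching means the sampler runs the high-noise model for the early, high-sigma steps and hands off to the low-noise model once sigma falls below a boundary value. A hedged sketch of that switch; only sigma_max=200 comes from the list above, the boundary value and model names are illustrative:

```python
import torch

sigma_max = 200.0       # documented above for I2V (vs 80 for T2V)
boundary_sigma = 20.0   # illustrative switching threshold, not from this page

def pick_model(sigma: float, high_noise_model, low_noise_model):
    """Route a denoising step to the high- or low-noise model by sigma."""
    return high_noise_model if sigma >= boundary_sigma else low_noise_model

# Toy few-step sigma schedule from sigma_max down toward 0.
sigmas = torch.tensor([sigma_max, 60.0, 10.0, 1.0])
for s in sigmas:
    model = pick_model(float(s), high_noise_model="high", low_noise_model="low")
    print(f"sigma={float(s):6.1f} -> {model}-noise model")
```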
Classes¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_i2v_pipeline.TurboDiffusionI2VPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion I2V pipeline for 1-4 step image-to-video generation.
Uses RCM scheduler, SLA attention, and dual model switching for high-quality I2V generation.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_i2v_pipeline.TurboDiffusionI2VPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_pipeline
¶
TurboDiffusion Video Pipeline Implementation.
This module contains an implementation of the TurboDiffusion video diffusion pipeline for 1-4 step video generation using rCM (recurrent Consistency Model) sampling with SLA (Sparse-Linear Attention).
Classes¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_pipeline.TurboDiffusionPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion video pipeline for 1-4 step generation.
Uses RCM scheduler and SLA attention for fast, high-quality video generation.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_pipeline.TurboDiffusionPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan
¶
Modules¶
fastvideo.pipelines.basic.wan.wan_causal_dmd_pipeline
¶
Wan causal DMD pipeline implementation.
This module wires the causal DMD denoising stage into the modular pipeline.
Classes¶
fastvideo.pipelines.basic.wan.wan_causal_dmd_pipeline.WanCausalDMDPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_causal_dmd_pipeline.WanCausalDMDPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_causal_dmd_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_dmd_pipeline
¶
Wan DMD pipeline implementation.
This module contains a DMD variant of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_dmd_pipeline.WanDMDPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Wan video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_dmd_pipeline.WanDMDPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_dmd_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_i2v_dmd_pipeline
¶
Wan image-to-video DMD pipeline implementation.
This module contains an image-to-video (I2V) DMD implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_i2v_dmd_pipeline.WanImageToVideoDmdPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_i2v_dmd_pipeline.WanImageToVideoDmdPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_i2v_dmd_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_i2v_pipeline
¶
Wan image-to-video pipeline implementation.
This module contains an image-to-video (I2V) implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_i2v_pipeline.WanImageToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_i2v_pipeline.WanImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_pipeline
¶
Wan video diffusion pipeline implementation.
This module contains an implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_pipeline.WanPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Wan video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_pipeline.WanPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_v2v_pipeline
¶
Wan video-to-video diffusion pipeline implementation.
This module contains an implementation of the Wan video-to-video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_v2v_pipeline.WanVideoToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_v2v_pipeline.WanVideoToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.