fastvideo.v1.pipelines.lora_pipeline#

Module Contents#

Classes#

LoRAPipeline

Pipeline that supports injecting LoRA adapters into the diffusion transformer. TODO: support training.

Data#

API#

class fastvideo.v1.pipelines.lora_pipeline.LoRAPipeline(*args, **kwargs)[source]#

Bases: fastvideo.v1.pipelines.composed_pipeline_base.ComposedPipelineBase

Pipeline that supports injecting LoRA adapters into the diffusion transformer. TODO: support training.

Initialization

Initialize the pipeline. After init, the pipeline should be ready to use. The pipeline should be stateless and not hold any batch state.

convert_to_lora_layers() → None[source]#

Unified method to convert the transformer to a LoRA transformer.
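The conversion step can be pictured as walking the transformer's named modules and swapping each targeted linear layer for a LoRA-wrapped version, skipping anything in `exclude_lora_layers`. The sketch below is a pure-Python illustration of that idea only; the class and function names inside it are hypothetical stand-ins, not the actual fastvideo implementation.

```python
# Illustrative sketch of LoRA layer conversion (hypothetical names,
# not the real fastvideo internals).

class Linear:
    """Stand-in for a plain linear layer."""
    def __init__(self, name):
        self.name = name

class LinearWithLoRA:
    """Wraps a base layer; lora_A/lora_B are filled when an adapter loads."""
    def __init__(self, base):
        self.base = base
        self.lora_A = None
        self.lora_B = None

def convert_to_lora_layers(modules, target_modules, exclude_lora_layers):
    """Replace every targeted linear module with a LoRA-wrapped version."""
    lora_layers = {}
    for name, mod in modules.items():
        targeted = any(t in name for t in target_modules)
        excluded = any(ex in name for ex in exclude_lora_layers)
        if targeted and not excluded:
            wrapped = LinearWithLoRA(mod)
            modules[name] = wrapped      # swap in place
            lora_layers[name] = wrapped  # remember for later adapter loads
    return lora_layers

modules = {
    "blocks.0.attn.to_q": Linear("to_q"),
    "blocks.0.attn.to_k": Linear("to_k"),
    "blocks.0.mlp.fc1": Linear("fc1"),
}
lora = convert_to_lora_layers(modules, ["to_q", "to_k"], ["fc1"])
print(sorted(lora))  # ['blocks.0.attn.to_k', 'blocks.0.attn.to_q']
```

Keeping the wrapped layers in a dict mirrors the `lora_layers` attribute documented below, so a later adapter load can reach each wrapper by module name.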

cur_adapter_name: str = <Multiline-String>[source]#
cur_adapter_path: str = <Multiline-String>[source]#
device: torch.device[source]#

‘get_local_torch_device(…)’

exclude_lora_layers: list[str][source]#

[]

fastvideo_args: fastvideo.v1.fastvideo_args.FastVideoArgs | fastvideo.v1.fastvideo_args.TrainingArgs[source]#

None

is_target_layer(module_name: str) → bool[source]#
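The matching semantics of `is_target_layer` are not documented here; a plausible minimal sketch, assuming substring matching against `lora_target_modules` with `exclude_lora_layers` taking precedence (both assumptions, not confirmed by the source):

```python
# Hypothetical predicate: exclusions win, then target substrings match.
def is_target_layer(module_name, target_modules, exclude_lora_layers):
    if any(ex in module_name for ex in exclude_lora_layers):
        return False
    return any(t in module_name for t in target_modules)

print(is_target_layer("blocks.3.attn.to_q", ["to_q", "to_v"], ["proj_out"]))  # True
print(is_target_layer("proj_out.to_q", ["to_q"], ["proj_out"]))               # False
```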
lora_adapters: dict[str, dict[str, torch.Tensor]][source]#

‘defaultdict(…)’

lora_alpha: int | None[source]#

None

lora_initialized: bool[source]#

False

lora_layers: dict[str, fastvideo.v1.layers.lora.linear.BaseLayerWithLoRA][source]#

None

lora_nickname: str[source]#

‘default’

lora_path: str | None[source]#

None

lora_rank: int | None[source]#

None

lora_target_modules: list[str] | None[source]#

None

set_lora_adapter(lora_nickname: str, lora_path: str | None = None)[source]#

Load a LoRA adapter into the pipeline and merge it into the transformer.

Parameters:
  • lora_nickname – The nickname used to reference the adapter in the pipeline.

  • lora_path – The path to the adapter, either a local path or a Hugging Face repo id.
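Merging an adapter conventionally means adding the scaled low-rank product to the base weight, W' = W + (alpha / rank) · (B @ A). The pure-Python sketch below works that arithmetic through on a tiny example; `merge_lora` and `matmul` are hypothetical helpers for illustration, not fastvideo APIs, and the real pipeline additionally caches the loaded tensors per nickname (cf. the `lora_adapters` attribute above).

```python
# Conceptual sketch of the LoRA merge arithmetic (hypothetical helpers).

def matmul(X, Y):
    """Tiny list-of-lists matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def merge_lora(W, A, B, rank, alpha):
    """Return W' = W + (alpha / rank) * (B @ A)."""
    scale = alpha / rank
    delta = matmul(B, A)  # (out_features x rank) @ (rank x in_features)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 base weight
A = [[1.0, 2.0]]              # rank x in_features (rank 1)
B = [[1.0], [0.0]]            # out_features x rank
W_merged = merge_lora(W, A, B, rank=1, alpha=2)
print(W_merged)  # [[3.0, 4.0], [0.0, 1.0]]
```

Because the delta is merged into the base weight, inference after `set_lora_adapter` runs at full speed with no extra matmuls per forward pass; swapping adapters requires unmerging or restoring the original weights first.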

fastvideo.v1.pipelines.lora_pipeline.logger[source]#

‘init_logger(…)’