models

Modules

fastvideo.models.hf_transformer_utils

Utilities for Hugging Face Transformers.

Functions

fastvideo.models.hf_transformer_utils.check_gguf_file

Check if the file is a GGUF model.
Source code in fastvideo/models/hf_transformer_utils.py
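GGUF files begin with the 4-byte magic `b"GGUF"`, so such a check can be as simple as reading the file header. A minimal sketch (the function name and exact handling here are illustrative, not FastVideo's actual implementation):

```python
from pathlib import Path

def check_gguf_file_sketch(model: str) -> bool:
    """Heuristic GGUF check: real GGUF files begin with the magic bytes b'GGUF'."""
    path = Path(model)
    if not path.is_file():
        return False
    with path.open("rb") as f:
        return f.read(4) == b"GGUF"
```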
fastvideo.models.hf_transformer_utils.get_diffusers_config

Gets a configuration for the given diffusers model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `str` | The model name or path. | required |
| `fastvideo_args` | | Optional inference arguments to override in the config. | required |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | The loaded configuration. |
Source code in fastvideo/models/hf_transformer_utils.py
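In a local diffusers-style checkout, each component directory carries a config.json (and the pipeline root a model_index.json). A simplified local-only sketch of reading such a config (the helper name is hypothetical; the real function also resolves Hub model names and applies fastvideo_args overrides):

```python
import json
from pathlib import Path

def load_diffusers_config_sketch(model: str) -> dict:
    """Sketch: read a diffusers-style config.json from a local model directory.

    The real get_diffusers_config also handles remote models and argument
    overrides; this covers only the local-file case.
    """
    config_path = Path(model) / "config.json"
    with config_path.open() as f:
        return json.load(f)
```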
fastvideo.models.loader

Modules

fastvideo.models.loader.component_loader

Classes

fastvideo.models.loader.component_loader.ComponentLoader

Bases: ABC
Base class for loading a specific type of model component.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.ComponentLoader.for_module_type
classmethod
for_module_type(module_type: str, transformers_or_diffusers: str) -> ComponentLoader
Factory method to create a component loader for a specific module type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `module_type` | `str` | Type of module (e.g., "vae", "text_encoder", "transformer", "scheduler") | required |
| `transformers_or_diffusers` | `str` | Whether the module is from transformers or diffusers | required |

Returns:

| Type | Description |
|---|---|
| `ComponentLoader` | A component loader for the specified module type |
Source code in fastvideo/models/loader/component_loader.py
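The factory pattern behind for_module_type can be sketched as a registry from module-type strings to loader classes, with unknown types falling back to GenericComponentLoader. The registry contents and constructor signature below are assumptions for illustration, not FastVideo's exact implementation:

```python
# Minimal stand-ins mirroring the class names documented on this page.
class ComponentLoader:
    def __init__(self, library: str) -> None:
        self.library = library  # "transformers" or "diffusers"

class VAELoader(ComponentLoader): pass
class TextEncoderLoader(ComponentLoader): pass
class TransformerLoader(ComponentLoader): pass
class SchedulerLoader(ComponentLoader): pass
class GenericComponentLoader(ComponentLoader): pass

# Registry-based dispatch: module type -> loader class.
_LOADER_REGISTRY: dict = {
    "vae": VAELoader,
    "text_encoder": TextEncoderLoader,
    "transformer": TransformerLoader,
    "scheduler": SchedulerLoader,
}

def for_module_type_sketch(module_type: str,
                           transformers_or_diffusers: str) -> ComponentLoader:
    """Pick the loader class for a module type, defaulting to the generic one."""
    loader_cls = _LOADER_REGISTRY.get(module_type, GenericComponentLoader)
    return loader_cls(transformers_or_diffusers)
```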
fastvideo.models.loader.component_loader.ComponentLoader.load
abstractmethod
load(model_path: str, fastvideo_args: FastVideoArgs)
Load the component based on the model path, architecture, and inference args.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_path` | `str` | Path to the component model | required |
| `fastvideo_args` | `FastVideoArgs` | Inference arguments | required |

Returns:

| Type | Description |
|---|---|
| | The loaded component |
Source code in fastvideo/models/loader/component_loader.py
fastvideo.models.loader.component_loader.GenericComponentLoader

Bases: ComponentLoader
Generic loader for components that don't have a specific loader.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.GenericComponentLoader.load

load(model_path: str, fastvideo_args: FastVideoArgs)
Load a generic component based on the model path and inference args.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.ImageEncoderLoader

Bases: TextEncoderLoader
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.ImageEncoderLoader.load

load(model_path: str, fastvideo_args: FastVideoArgs)
Load the image encoder based on the model path and inference args.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.ImageProcessorLoader

Bases: ComponentLoader
Loader for image processor.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.ImageProcessorLoader.load

load(model_path: str, fastvideo_args: FastVideoArgs)
Load the image processor based on the model path and inference args.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.PipelineComponentLoader

Utility class for loading pipeline components. This replaces the chain of if-else statements in load_pipeline_module.

fastvideo.models.loader.component_loader.PipelineComponentLoader.load_module
staticmethod
load_module(module_name: str, component_model_path: str, transformers_or_diffusers: str, fastvideo_args: FastVideoArgs)
Load a pipeline module.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `module_name` | `str` | Name of the module (e.g., "vae", "text_encoder", "transformer", "scheduler") | required |
| `component_model_path` | `str` | Path to the component model | required |
| `transformers_or_diffusers` | `str` | Whether the module is from transformers or diffusers | required |
| `fastvideo_args` | `FastVideoArgs` | Inference arguments | required |

Returns:

| Type | Description |
|---|---|
| | The loaded module |
Source code in fastvideo/models/loader/component_loader.py
fastvideo.models.loader.component_loader.SchedulerLoader

Bases: ComponentLoader
Loader for scheduler.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.SchedulerLoader.load

load(model_path: str, fastvideo_args: FastVideoArgs)
Load the scheduler based on the model path and inference args.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.TextEncoderLoader

Bases: ComponentLoader
Loader for text encoders.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.TextEncoderLoader.Source
dataclass
Source(model_or_path: str, prefix: str = '', fall_back_to_pt: bool = True, allow_patterns_overrides: list[str] | None = None)
A source for weights.

fastvideo.models.loader.component_loader.TextEncoderLoader.Source.allow_patterns_overrides
class-attribute
instance-attribute
If defined, weights will load exclusively using these patterns.

fastvideo.models.loader.component_loader.TextEncoderLoader.Source.fall_back_to_pt
class-attribute
instance-attribute
fall_back_to_pt: bool = True
Whether .pt weights can be used.

fastvideo.models.loader.component_loader.TextEncoderLoader.load

load(model_path: str, fastvideo_args: FastVideoArgs)
Load the text encoders based on the model path and inference args.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.TokenizerLoader

Bases: ComponentLoader
Loader for tokenizers.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.TokenizerLoader.load

load(model_path: str, fastvideo_args: FastVideoArgs)
Load the tokenizer based on the model path and inference args.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.TransformerLoader

Bases: ComponentLoader
Loader for transformer.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.TransformerLoader.load

load(model_path: str, fastvideo_args: FastVideoArgs)
Load the transformer based on the model path and inference args.
Source code in fastvideo/models/loader/component_loader.py
fastvideo.models.loader.component_loader.VAELoader

Bases: ComponentLoader
Loader for VAE.
Source code in fastvideo/models/loader/component_loader.py

fastvideo.models.loader.component_loader.VAELoader.load

load(model_path: str, fastvideo_args: FastVideoArgs)
Load the VAE based on the model path and inference args.
Source code in fastvideo/models/loader/component_loader.py
Functions

fastvideo.models.loader.fsdp_load

Functions

fastvideo.models.loader.fsdp_load.load_model_from_full_model_state_dict

load_model_from_full_model_state_dict(model: FSDPModule | Module, full_sd_iterator: Generator[tuple[str, Tensor], None, None], device: device, param_dtype: dtype, strict: bool = False, cpu_offload: bool = False, param_names_mapping: Callable[[str], tuple[str, Any, Any]] | None = None, training_mode: bool = True) -> _IncompatibleKeys
Converts a full state dict into a sharded state dict and loads it into an FSDP model (if training) or a regular Hugging Face model.
Args:
    model (Union[FSDPModule, torch.nn.Module]): model whose fully qualified names are used for the state dict
    full_sd_iterator (Generator): an iterator yielding (param_name, tensor) pairs
    device (torch.device): device used to move full state dict tensors
    param_dtype (torch.dtype): dtype used to move full state dict tensors
    strict (bool): whether to load the model in strict mode
    cpu_offload (bool): whether FSDP CPU offload is enabled
    param_names_mapping (Optional[Callable[[str], tuple[str, Any, Any]]]): a function that maps a full param name to a sharded param name
    training_mode (bool): apply FSDP only for training
Returns:
    NamedTuple with missing_keys and unexpected_keys fields:
    * missing_keys is a list of str containing the missing keys
    * unexpected_keys is a list of str containing the unexpected keys
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | If FSDP sharding with more than one dimension is used. |
Source code in fastvideo/models/loader/fsdp_load.py
fastvideo.models.loader.fsdp_load.maybe_load_fsdp_model

maybe_load_fsdp_model(model_cls: type[Module], init_params: dict[str, Any], weight_dir_list: list[str], device: device, hsdp_replicate_dim: int, hsdp_shard_dim: int, default_dtype: dtype, param_dtype: dtype, reduce_dtype: dtype, cpu_offload: bool = False, fsdp_inference: bool = False, output_dtype: dtype | None = None, training_mode: bool = True, pin_cpu_memory: bool = True, enable_torch_compile: bool = False, torch_compile_kwargs: dict[str, Any] | None = None) -> Module
Load the model with FSDP if training; otherwise load the model without FSDP.
Source code in fastvideo/models/loader/fsdp_load.py
fastvideo.models.loader.fsdp_load.set_default_dtype

set_default_dtype(dtype: dtype) -> Generator[None, None, None]
Context manager to set torch's default dtype.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dtype` | `dtype` | The desired default dtype inside the context manager. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| ContextManager | `None` | Context manager for setting the default dtype. |

Example
>>> with set_default_dtype(torch.bfloat16):
...     x = torch.tensor([1, 2, 3])
...     x.dtype
torch.bfloat16
Source code in fastvideo/models/loader/fsdp_load.py
fastvideo.models.loader.fsdp_load.shard_model

shard_model(model, *, cpu_offload: bool, reshard_after_forward: bool = True, mp_policy: MixedPrecisionPolicy | None = MixedPrecisionPolicy(), mesh: DeviceMesh | None = None, fsdp_shard_conditions: list[Callable[[str, Module], bool]] = [], pin_cpu_memory: bool = True) -> None
Utility to shard a model with FSDP using the PyTorch Distributed fully_shard API.
This method iterates over the model's named modules from the bottom up and shards modules that meet any of the criteria from shard_conditions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `TransformerDecoder` | Model to shard with FSDP. | required |
| `shard_conditions` | `List[Callable[[str, Module], bool]]` | A list of functions to determine which modules to shard with FSDP. Each function should take the module name (relative to the root) and the module itself, returning True if FSDP should shard the module and False otherwise. If any of shard_conditions returns True for a given module, it will be sharded by FSDP. | required |
| `cpu_offload` | `bool` | If set to True, FSDP will offload parameters, gradients, and optimizer states to CPU. | required |
| `reshard_after_forward` | `bool` | Whether to reshard parameters and buffers after the forward pass. Setting this to True corresponds to the FULL_SHARD sharding strategy from FSDP1, while setting it to False corresponds to the SHARD_GRAD_OP sharding strategy. | True |
| `mesh` | `Optional[DeviceMesh]` | Device mesh to use for FSDP sharding under multiple parallelism. Defaults to None. | None |
| `fsdp_shard_conditions` | `List[Callable[[str, Module], bool]]` | A list of functions to determine which modules to shard with FSDP. | [] |
| `pin_cpu_memory` | `bool` | If set to True, FSDP will pin the CPU memory of the offloaded parameters. | True |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If no layer modules were sharded, indicating that no shard_condition was triggered. |
Source code in fastvideo/models/loader/fsdp_load.py
fastvideo.models.loader.utils

Utilities for selecting and loading models.

Functions

fastvideo.models.loader.utils.get_param_names_mapping

Creates a mapping function that transforms parameter names using regex patterns.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `mapping_dict` | `Dict[str, str]` | Dictionary mapping regex patterns to replacement patterns | required |
| `param_name` | `str` | The parameter name to be transformed | required |

Returns:

| Type | Description |
|---|---|
| `Callable[[str], tuple[str, Any, Any]]` | A function that maps parameter names from source to target format |
Source code in fastvideo/models/loader/utils.py
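A possible shape for such a regex-driven mapper, reduced to pure renaming (the real function also returns merge/shard metadata in its tuple result; this sketch returns only the new name):

```python
import re
from typing import Callable, Dict

def get_param_names_mapping_sketch(mapping_dict: Dict[str, str]) -> Callable[[str], str]:
    """Build a name-mapping function from regex -> replacement pairs.

    The first pattern that matches wins; unmatched names pass through unchanged.
    """
    def mapper(param_name: str) -> str:
        for pattern, replacement in mapping_dict.items():
            new_name, n_subs = re.subn(pattern, replacement, param_name)
            if n_subs > 0:
                return new_name
        return param_name  # no pattern matched; keep the original name
    return mapper
```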
fastvideo.models.loader.utils.hf_to_custom_state_dict

hf_to_custom_state_dict(hf_param_sd: dict[str, Tensor] | Iterator[tuple[str, Tensor]], param_names_mapping: Callable[[str], tuple[str, Any, Any]]) -> tuple[dict[str, Tensor], dict[str, tuple[str, Any, Any]]]
Converts a Hugging Face parameter state dictionary to a custom parameter state dictionary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `hf_param_sd` | `Dict[str, Tensor]` | The Hugging Face parameter state dictionary | required |
| `param_names_mapping` | `Callable[[str], tuple[str, Any, Any]]` | A function that maps parameter names from source to target format | required |

Returns:

| Name | Type | Description |
|---|---|---|
| custom_param_sd | `Dict[str, Tensor]` | The custom formatted parameter state dict |
| reverse_param_names_mapping | `Dict[str, Tuple[str, Any, Any]]` | Maps back from custom to hf |
Source code in fastvideo/models/loader/utils.py
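The renaming-plus-reverse-map bookkeeping can be sketched with plain values standing in for tensors (the real function also threads merge/shard metadata from param_names_mapping through both outputs):

```python
from typing import Any, Callable, Dict, Tuple

def hf_to_custom_state_dict_sketch(
    hf_param_sd: Dict[str, Any],
    param_names_mapping: Callable[[str], str],
) -> Tuple[Dict[str, Any], Dict[str, str]]:
    """Rename every key of an HF state dict and record the reverse mapping."""
    custom_param_sd: Dict[str, Any] = {}
    reverse_param_names_mapping: Dict[str, str] = {}
    for hf_name, value in hf_param_sd.items():
        custom_name = param_names_mapping(hf_name)
        custom_param_sd[custom_name] = value
        reverse_param_names_mapping[custom_name] = hf_name  # custom -> hf
    return custom_param_sd, reverse_param_names_mapping
```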
fastvideo.models.loader.utils.set_default_torch_dtype

Sets the default torch dtype to the given dtype.
Source code in fastvideo/models/loader/utils.py

fastvideo.models.loader.weight_utils

Utilities for downloading and initializing model weights.

Functions

fastvideo.models.loader.weight_utils.default_weight_loader

Default weight loader.
Source code in fastvideo/models/loader/weight_utils.py

fastvideo.models.loader.weight_utils.enable_hf_transfer

Automatically activates hf_transfer for faster Hugging Face Hub downloads.
Source code in fastvideo/models/loader/weight_utils.py

fastvideo.models.loader.weight_utils.filter_files_not_needed_for_inference

Exclude files that are not needed for inference.
See https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/trainer.py#L227-L233
Source code in fastvideo/models/loader/weight_utils.py

fastvideo.models.loader.weight_utils.maybe_remap_kv_scale_name

Remap the name of FP8 k/v_scale parameters.
This function handles the remapping of FP8 k/v_scale parameter names. It detects if the given name ends with a known suffix and attempts to remap it to the expected name format in the model. If the remapped name is not found in params_dict, a warning is printed and None is returned.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | The original loaded checkpoint parameter name. | required |
| `params_dict` | `dict` | Dictionary containing the model's named parameters. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| str | `str \| None` | The remapped parameter name if successful, or the original name if no remapping is needed. |
| None | `str \| None` | If the remapped name is not found in params_dict. |
Source code in fastvideo/models/loader/weight_utils.py
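A sketch of the remapping logic described above; the `.attn.k_scale` target layout is an assumption for illustration, not necessarily the exact format the real function produces:

```python
from typing import Optional

def maybe_remap_kv_scale_name_sketch(name: str, params_dict: dict) -> Optional[str]:
    """Rewrite '.k_scale'/'.v_scale' checkpoint names into an assumed
    attention-submodule layout, or return None when the model lacks
    the remapped parameter."""
    for suffix in (".k_scale", ".v_scale"):
        if name.endswith(suffix):
            remapped = name.replace(suffix, ".attn" + suffix)
            if remapped not in params_dict:
                return None  # model has no such parameter; skip loading it
            return remapped
    return name  # no remapping needed
```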
fastvideo.models.loader.weight_utils.pt_weights_iterator

pt_weights_iterator(hf_weights_files: list[str], to_cpu: bool = True) -> Generator[tuple[str, Tensor], None, None]
Iterate over the weights in the model bin/pt files.
Source code in fastvideo/models/loader/weight_utils.py

fastvideo.models.loader.weight_utils.safetensors_weights_iterator

safetensors_weights_iterator(hf_weights_files: list[str], to_cpu: bool = True) -> Generator[tuple[str, Tensor], None, None]
Iterate over the weights in the model safetensors files.
Source code in fastvideo/models/loader/weight_utils.py
fastvideo.models.parameter

Classes

fastvideo.models.parameter.BasevLLMParameter

BasevLLMParameter(data: Tensor, weight_loader: Callable)
Bases: Parameter
Base parameter for vLLM linear layers. Extends torch.nn.Parameter by taking in a linear weight loader. Will copy the loaded weight into the parameter when the provided weight loader is called.
Initialize the BasevLLMParameter

:param data: torch tensor with the parameter data
:param weight_loader: weight loader callable
:returns: a torch.nn.parameter
Source code in fastvideo/models/parameter.py
Functions

fastvideo.models.parameter.BlockQuantScaleParameter

BlockQuantScaleParameter(output_dim: int, **kwargs)
Bases: _ColumnvLLMParameter, RowvLLMParameter
Parameter class for weight scales loaded for weights with block-wise quantization. Uses both column and row parallelism.
Source code in fastvideo/models/parameter.py

fastvideo.models.parameter.ChannelQuantScaleParameter

ChannelQuantScaleParameter(output_dim: int, **kwargs)
Bases: _ColumnvLLMParameter
Parameter class for weight scales loaded for weights with channel-wise quantization. Equivalent to _ColumnvLLMParameter.
Source code in fastvideo/models/parameter.py

fastvideo.models.parameter.GroupQuantScaleParameter

GroupQuantScaleParameter(output_dim: int, **kwargs)
Bases: _ColumnvLLMParameter, RowvLLMParameter
Parameter class for weight scales loaded for weights with grouped quantization. Uses both column and row parallelism.
Source code in fastvideo/models/parameter.py

fastvideo.models.parameter.ModelWeightParameter

ModelWeightParameter(output_dim: int, **kwargs)
Bases: _ColumnvLLMParameter, RowvLLMParameter
Parameter class for linear layer weights. Uses both column and row parallelism.
Source code in fastvideo/models/parameter.py

fastvideo.models.parameter.PackedColumnParameter

Bases: _ColumnvLLMParameter
Parameter for model parameters which are packed on disk and support column parallelism only. See PackedvLLMParameter for more details on the packed properties.
Source code in fastvideo/models/parameter.py

fastvideo.models.parameter.PackedvLLMParameter

Bases: ModelWeightParameter
Parameter for model weights which are packed on disk. Example: GPTQ Marlin weights are int4 or int8, packed into int32. Extends ModelWeightParameter to take in the packed factor, the packed dimension, and, optionally, the Marlin tile size for Marlin kernels. Adjusts the shard_size and shard_offset for fused linear layer weight loading by accounting for packing and, optionally, the Marlin tile size.
Source code in fastvideo/models/parameter.py

fastvideo.models.parameter.PerTensorScaleParameter

Bases: BasevLLMParameter
Parameter class for scales where the number of scales is equivalent to the number of logical matrices in fused linear layers (e.g. for QKV, there are 3 scales loaded from disk). This is relevant to weights with per-tensor quantization. Adds functionality to map the scales to a shard during weight loading.
Note: additional parameter manipulation may be handled per quantization config, within process_weights_after_loading
Source code in fastvideo/models/parameter.py

fastvideo.models.parameter.RowvLLMParameter

RowvLLMParameter(input_dim: int, **kwargs)
Bases: BasevLLMParameter
Parameter class defining weight-loading functionality (load_row_parallel_weight) for parameters loaded into linear layers with row-parallel functionality. Requires an input_dim to be defined.
Source code in fastvideo/models/parameter.py

Functions

fastvideo.models.parameter.permute_param_layout_

permute_param_layout_(param: BasevLLMParameter, input_dim: int, output_dim: int, **kwargs) -> BasevLLMParameter
Permute a parameter's layout to the specified input and output dimensions. This is useful for forcing the parameter into a known layout; for example, if a packed (quantized) weight matrix must be in the layout {input_dim = 0, output_dim = 1, packed_dim = 0}, calling permute_param_layout_(x, input_dim=0, output_dim=1, packed_dim=0) ensures x is in the correct layout (permuting it if required, and asserting if it cannot reach that layout).
Source code in fastvideo/models/parameter.py
fastvideo.models.utils

Utils for model executor.

Functions

fastvideo.models.utils.auto_attributes

Decorator that automatically adds all initialization arguments as object attributes.
Example
@auto_attributes
def __init__(self, a=1, b=2):
    pass

This will automatically set:
- self.a = 1 and self.b = 2
- self.config.a = 1 and self.config.b = 2
Source code in fastvideo/models/utils.py
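A working sketch of such a decorator, using inspect to bind all arguments (including defaults) and mirror them onto both self and self.config; the real implementation may differ in details:

```python
import functools
import inspect
from types import SimpleNamespace

def auto_attributes_sketch(init_fn):
    """Bind every __init__ argument and set it on self and self.config."""
    @functools.wraps(init_fn)
    def wrapper(self, *args, **kwargs):
        bound = inspect.signature(init_fn).bind(self, *args, **kwargs)
        bound.apply_defaults()  # fill in unspecified defaults, e.g. a=1
        self.config = SimpleNamespace()
        for name, value in bound.arguments.items():
            if name == "self":
                continue
            setattr(self, name, value)
            setattr(self.config, name, value)
        return init_fn(self, *args, **kwargs)
    return wrapper

class Model:
    @auto_attributes_sketch
    def __init__(self, a=1, b=2):
        pass
```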
fastvideo.models.utils.extract_layer_index

Extract the layer index from the module name.
Examples:
- "encoder.layers.0" -> 0
- "encoder.layers.1.self_attn" -> 1
- "2.self_attn" -> 2
- "model.encoder.layers.0.sub.1" -> ValueError
Source code in fastvideo/models/utils.py
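A sketch that matches the documented examples: exactly one dotted component may be an integer, and that integer is the layer index; otherwise a ValueError is raised:

```python
def extract_layer_index_sketch(module_name: str) -> int:
    """Return the single integer component of a dotted module name."""
    indices = [int(part) for part in module_name.split(".") if part.isdigit()]
    if len(indices) != 1:
        raise ValueError(f"Expected exactly one layer index in {module_name!r}")
    return indices[0]
```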
fastvideo.models.utils.modulate

Modulate by shift and scale.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | input tensor. | required |
| `shift` | `Tensor` | shift tensor. Defaults to None. | None |
| `scale` | `Tensor` | scale tensor. Defaults to None. | None |

Returns:

| Type | Description |
|---|---|
| `Tensor` | the output tensor after modulation. |
Source code in fastvideo/models/utils.py
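In DiT-style models this modulation is typically computed as x * (1 + scale) + shift. A scalar sketch with plain floats standing in for tensors (FastVideo's broadcasting details may differ):

```python
def modulate_sketch(x, shift=None, scale=None):
    """Elementwise x * (1 + scale) + shift; None shift/scale act as no-ops."""
    scale = 0.0 if scale is None else scale
    shift = 0.0 if shift is None else shift
    return x * (1.0 + scale) + shift
```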
fastvideo.models.utils.pred_noise_to_pred_video

pred_noise_to_pred_video(pred_noise: Tensor, noise_input_latent: Tensor, timestep: Tensor, scheduler: Any) -> Tensor
Convert predicted noise to clean latent.
Args:
    pred_noise: the predicted noise with shape [B, C, H, W], where B is batch_size or batch_size * num_frames
    noise_input_latent: the noisy latent with shape [B, C, H, W]
    timestep: the timestep with shape [1] or [bs * num_frames] or [bs, num_frames]
    scheduler: the scheduler
Returns:

| Type | Description |
|---|---|
| `Tensor` | the predicted video with shape [B, C, H, W] |
Source code in fastvideo/models/utils.py
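The conversion depends on the scheduler. As one concrete illustration, a DDPM-style epsilon-prediction scheduler inverts the forward process via x0 = (x_t - sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_bar_t). A scalar sketch under that assumption (FastVideo's scheduler may use a different parameterization, e.g. flow matching):

```python
import math

def pred_noise_to_pred_video_sketch(pred_noise, noisy_latent, alpha_bar_t):
    """DDPM-style inversion: recover x0 from a noisy latent and predicted noise.

    Assumes the forward process x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps;
    plain floats stand in for tensors.
    """
    return (noisy_latent - math.sqrt(1.0 - alpha_bar_t) * pred_noise) / math.sqrt(alpha_bar_t)
```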
fastvideo.models.utils.set_weight_attrs

Set attributes on a weight tensor.
This method is used to set attributes on a weight tensor. This method will not overwrite existing attributes.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `weight` | `Tensor` | The weight tensor. | required |
| `weight_attrs` | `dict[str, Any] \| None` | A dictionary of attributes to set on the weight tensor. | required |
Source code in fastvideo/models/utils.py
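The no-overwrite rule can be sketched as follows, with any attribute-bearing object standing in for a weight tensor:

```python
from typing import Any, Dict, Optional

def set_weight_attrs_sketch(weight, weight_attrs: Optional[Dict[str, Any]]) -> None:
    """Copy attributes onto a weight object, refusing to overwrite existing ones."""
    if weight_attrs is None:
        return
    for key, value in weight_attrs.items():
        assert not hasattr(weight, key), f"Overwriting existing attr {key}"
        setattr(weight, key, value)
```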
fastvideo.models.vision_utils

Functions

fastvideo.models.vision_utils.create_default_image

create_default_image(width: int = 512, height: int = 512, color: tuple[int, int, int] = (0, 0, 0)) -> Image
Create a default black PIL image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `width` | `int` | Image width in pixels | 512 |
| `height` | `int` | Image height in pixels | 512 |
| `color` | `tuple[int, int, int]` | RGB color tuple | (0, 0, 0) |

Returns:

| Type | Description |
|---|---|
| `Image` | PIL.Image.Image: A new PIL image with the specified dimensions and color |
Source code in fastvideo/models/vision_utils.py
fastvideo.models.vision_utils.get_default_height_width

get_default_height_width(image: Image | ndarray | Tensor, vae_scale_factor: int, height: int | None = None, width: int | None = None) -> tuple[int, int]
Returns the height and width of the image, downscaled to the next integer multiple of vae_scale_factor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `image` | `Union[PIL.Image.Image, np.ndarray, torch.Tensor]` | The image input, which can be a PIL image, NumPy array, or PyTorch tensor. If it is a NumPy array, it should have shape [batch, height, width] or [batch, height, width, num_channels]. | required |
| `height` | `Optional[int]` | The height of the preprocessed image. If None, the height of the image input is used. | None |
| `width` | `Optional[int]` | The width of the preprocessed image. If None, the width of the image input is used. | None |

Returns:

| Type | Description |
|---|---|
| `tuple[int, int]` | The (height, width) pair, each rounded down to a multiple of vae_scale_factor. |
Source code in fastvideo/models/vision_utils.py
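The core rounding rule is flooring each dimension to the nearest integer multiple of vae_scale_factor. A sketch of just that arithmetic (the helper name is hypothetical):

```python
def snap_to_vae_multiple_sketch(height: int, width: int, vae_scale_factor: int):
    """Floor each dimension down to the nearest multiple of vae_scale_factor."""
    return (
        height - height % vae_scale_factor,
        width - width % vae_scale_factor,
    )
```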
fastvideo.models.vision_utils.load_image

Loads an image to a PIL Image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `image` | `str` or `PIL.Image.Image` | The image to convert to the PIL Image format. | required |
| `convert_method` | `Callable[[PIL.Image.Image], PIL.Image.Image]`, *optional* | A conversion method to apply to the image after loading it. When set to None the image will be converted to "RGB". | None |

Returns:

| Type | Description |
|---|---|
| `Image` | A PIL Image. |
Source code in fastvideo/models/vision_utils.py
fastvideo.models.vision_utils.load_video

load_video(video: str, convert_method: Callable[[list[Image]], list[Image]] | None = None, return_fps: bool = False) -> tuple[list[Image], float | Any] | list[Image]
Loads a video to a list of PIL Images.
Args:
video (str):
A URL or Path to a video to convert to a list of PIL Image format.
convert_method (Callable[[List[PIL.Image.Image]], List[PIL.Image.Image]], optional):
A conversion method to apply to the video after loading it. When set to None the images will be converted
to "RGB".
return_fps (bool, optional, defaults to False):
Whether to return the FPS of the video. If True, returns a tuple of (images, fps).
If False, returns only the list of images.
Returns:
List[PIL.Image.Image] or Tuple[List[PIL.Image.Image], float | None]:
The video as a list of PIL images. If return_fps is True, also returns the original FPS.
Source code in fastvideo/models/vision_utils.py
fastvideo.models.vision_utils.normalize

Normalize an image array to [-1, 1].
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `images` | `np.ndarray` or `torch.Tensor` | The image array to normalize. | required |

Returns:

| Type | Description |
|---|---|
| `ndarray \| Tensor` | The normalized image array. |
Source code in fastvideo/models/vision_utils.py
fastvideo.models.vision_utils.numpy_to_pt

Convert a NumPy image to a PyTorch tensor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `images` | `np.ndarray` | The NumPy image array to convert to PyTorch format. | required |

Returns:

| Type | Description |
|---|---|
| `Tensor` | The converted PyTorch tensor. |
Source code in fastvideo/models/vision_utils.py
fastvideo.models.vision_utils.pil_to_numpy

pil_to_numpy(images: list[Image] | Image) -> ndarray
Convert a PIL image or a list of PIL images to NumPy arrays.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `images` | `PIL.Image.Image` or `List[PIL.Image.Image]` | The PIL image or list of images to convert to NumPy format. | required |

Returns:

| Type | Description |
|---|---|
| `ndarray` | The images as a NumPy array. |
Source code in fastvideo/models/vision_utils.py
fastvideo.models.vision_utils.preprocess_reference_image_for_clip

Preprocess a reference image to match CLIP encoder requirements.
Applies normalization, resizing to 224x224, and denormalization to ensure the image is in the correct format for CLIP processing.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `image` | `Image` | Input PIL image | required |
| `device` | `device` | Target device for tensor operations | required |

Returns:

| Type | Description |
|---|---|
| `Image` | Preprocessed PIL image ready for the CLIP encoder |
Source code in fastvideo/models/vision_utils.py
fastvideo.models.vision_utils.resize

resize(image: Image | ndarray | Tensor, height: int, width: int, resize_mode: str = 'default', resample: str = 'lanczos') -> Image | ndarray | Tensor
Resize image.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `image` | `PIL.Image.Image`, `np.ndarray` or `torch.Tensor` | The image input, can be a PIL image, NumPy array or PyTorch tensor. | required |
| `height` | `int` | The height to resize to. | required |
| `width` | `int` | The width to resize to. | required |
| `resize_mode` | `str`, *optional*, defaults to `default` | The resize mode to use. | 'default' |

Returns:

| Type | Description |
|---|---|
| `Image \| ndarray \| Tensor` | The resized image. |