platforms¶
Classes¶
fastvideo.platforms.Platform¶
Functions¶
fastvideo.platforms.Platform.get_attn_backend_cls classmethod¶
get_attn_backend_cls(selected_backend: AttentionBackendEnum | None, head_size: int, dtype: dtype) -> str
Get the attention backend class of a device.
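A minimal sketch of resolving the backend class path, assuming fastvideo.platforms exports a resolved current_platform instance; the head size and dtype are arbitrary:

```python
# Sketch only: `current_platform` is assumed to be the resolved Platform
# instance exported by fastvideo.platforms.
import torch

from fastvideo.platforms import current_platform

# selected_backend=None lets the platform pick its default backend.
backend_cls_path = current_platform.get_attn_backend_cls(
    selected_backend=None,
    head_size=128,
    dtype=torch.bfloat16,
)
print(backend_cls_path)  # fully qualified class name, returned as a string
```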
fastvideo.platforms.Platform.get_cpu_architecture classmethod¶
fastvideo.platforms.Platform.get_current_memory_usage classmethod¶
get_current_memory_usage(device: Device | None = None) -> float
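A hedged usage sketch; the unit of the returned float is not documented here, so bytes is an assumption:

```python
# Sketch: query memory usage on the default device. `current_platform` and
# the byte unit of the return value are assumptions.
from fastvideo.platforms import current_platform

used = current_platform.get_current_memory_usage()
print(f"current memory usage: {used / 1024**3:.2f} GiB (assuming bytes)")
```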
fastvideo.platforms.Platform.get_device_capability classmethod¶
get_device_capability(device_id: int = 0) -> DeviceCapability | None
Stateless version of :func:torch.cuda.get_device_capability.
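A sketch of a capability query; the major/minor field names are assumed from the (major, minor) tuple form documented for has_device_capability below:

```python
# Sketch: check the compute capability of device 0 without initializing CUDA.
# DeviceCapability is a NamedTuple; the major/minor field names are assumed.
from fastvideo.platforms import current_platform  # assumed export

cap = current_platform.get_device_capability(device_id=0)
if cap is None:
    print("capability unknown on this platform")
else:
    print(f"compute capability {cap.major}.{cap.minor}")
```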
fastvideo.platforms.Platform.get_device_communicator_cls classmethod¶
get_device_communicator_cls() -> str
Get device specific communicator class for distributed communication.
Source code in fastvideo/platforms/interface.py
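Because the method returns a dotted class path rather than the class itself, callers typically import it dynamically; a sketch:

```python
# Sketch: resolve the communicator class from the dotted path string.
from importlib import import_module

from fastvideo.platforms import current_platform  # assumed export

path = current_platform.get_device_communicator_cls()
module_name, _, class_name = path.rpartition(".")
communicator_cls = getattr(import_module(module_name), class_name)
```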
fastvideo.platforms.Platform.get_device_name classmethod¶
fastvideo.platforms.Platform.get_device_total_memory classmethod¶
fastvideo.platforms.Platform.get_device_uuid classmethod¶
fastvideo.platforms.Platform.get_torch_device classmethod¶
fastvideo.platforms.Platform.has_device_capability classmethod¶
Test whether this platform is compatible with a device capability.
The capability argument can either be:
- A tuple (major, minor).
- An integer <major><minor>. (See :meth:DeviceCapability.to_int)
Source code in fastvideo/platforms/interface.py
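A sketch showing the two equivalent argument forms listed above; the SM 8.0 threshold is illustrative:

```python
# Sketch: both calls check for compute capability >= 8.0 on the default device.
from fastvideo.platforms import current_platform  # assumed export

as_tuple = current_platform.has_device_capability((8, 0))
as_int = current_platform.has_device_capability(80)  # packed <major><minor>
assert as_tuple == as_int
```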
fastvideo.platforms.Platform.inference_mode classmethod¶
A device-specific wrapper of torch.inference_mode.
This wrapper is recommended because some hardware backends such as TPU
do not support torch.inference_mode. In such a case, they will fall
back to torch.no_grad by overriding this method.
Source code in fastvideo/platforms/interface.py
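A sketch of using the platform wrapper instead of calling torch.inference_mode directly, assuming it returns a context manager like the function it wraps:

```python
# Sketch: prefer the platform wrapper so TPU-like backends can substitute
# torch.no_grad transparently. `current_platform` is an assumed export.
import torch

from fastvideo.platforms import current_platform

model = torch.nn.Linear(8, 8)
x = torch.randn(1, 8)
with current_platform.inference_mode():
    y = model(x)  # runs without autograd tracking
```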
fastvideo.platforms.Platform.is_async_output_supported classmethod¶
fastvideo.platforms.Platform.seed_everything classmethod¶
seed_everything(seed: int | None = None) -> None
Set the seed of each random module.
torch.manual_seed will set seed on all devices.
Loosely based on: https://github.com/Lightning-AI/pytorch-lightning/blob/2.4.0/src/lightning/fabric/utilities/seed.py#L20
Source code in fastvideo/platforms/interface.py
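A sketch of seeding at process start; exactly which RNGs are covered beyond torch follows the Lightning utility it is based on and is an assumption:

```python
# Sketch: seed torch on all devices (and, per the Lightning-derived helper
# it is based on, likely Python's and NumPy's RNGs as well -- an assumption).
from fastvideo.platforms import current_platform  # assumed export

current_platform.seed_everything(42)
```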
fastvideo.platforms.Platform.verify_model_arch classmethod¶
verify_model_arch(model_arch: str) -> None
Verify whether the current platform supports the specified model architecture.
- This will raise an error or warning depending on whether the model is supported on the current platform.
- By default all models are considered supported.
Source code in fastvideo/platforms/interface.py
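A sketch of failing fast before loading weights; the architecture string is illustrative, not a claim about support:

```python
# Sketch: raises or warns if the architecture is unsupported on this platform;
# by default all architectures pass. "SomeVideoDiT" is an illustrative name.
from fastvideo.platforms import current_platform  # assumed export

current_platform.verify_model_arch("SomeVideoDiT")
```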
fastvideo.platforms.Platform.verify_quantization classmethod¶
verify_quantization(quant: str) -> None
Verify whether the quantization is supported by the current platform.
Source code in fastvideo/platforms/interface.py
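Similarly for quantization methods; "fp8" below is an illustrative value, not a statement of platform support:

```python
# Sketch: raises if the platform cannot run the requested quantization method.
from fastvideo.platforms import current_platform  # assumed export

current_platform.verify_quantization("fp8")  # illustrative method name
```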
Functions¶
fastvideo.platforms.mps_platform_plugin¶
mps_platform_plugin() -> str | None
Detect if MPS (Metal Performance Shaders) is available on macOS.
Source code in fastvideo/platforms/__init__.py
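A sketch of how the plugin's return value is typically consumed: a dotted platform class path when MPS is usable, otherwise None:

```python
# Sketch: the plugin returns the dotted path of the MPS platform class when
# available, otherwise None so other platform plugins can be tried.
from fastvideo.platforms import mps_platform_plugin

platform_cls_path = mps_platform_plugin()
if platform_cls_path is None:
    print("MPS not available; another platform plugin will be used")
else:
    print(f"MPS platform: {platform_cls_path}")
```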
Modules¶
fastvideo.platforms.cpu¶
fastvideo.platforms.cuda¶
Code inside this file can safely assume the CUDA platform, e.g. importing pynvml. However, it should not initialize the CUDA context.
Classes¶
fastvideo.platforms.cuda.CudaPlatformBase¶
fastvideo.platforms.cuda.NvmlCudaPlatform¶
Bases: CudaPlatformBase
Functions¶
fastvideo.platforms.cuda.NvmlCudaPlatform.is_full_nvlink classmethod¶
Query whether the given set of GPUs is fully connected by NVLink (1 hop).
Source code in fastvideo/platforms/cuda.py
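A hedged sketch; the argument form (a collection of physical device ids) is an assumption based on the docstring, so check fastvideo/platforms/cuda.py for the exact signature:

```python
# Sketch: check whether GPUs 0-3 form a fully connected NVLink clique.
# The list-of-device-ids argument is an assumption.
from fastvideo.platforms.cuda import NvmlCudaPlatform

fully_connected = NvmlCudaPlatform.is_full_nvlink([0, 1, 2, 3])
print("all-pairs NVLink:", fully_connected)
```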
fastvideo.platforms.interface¶
Classes¶
fastvideo.platforms.interface.DeviceCapability¶
Bases: NamedTuple
fastvideo.platforms.interface.Platform¶
Functions¶
fastvideo.platforms.interface.Platform.get_attn_backend_cls classmethod¶
get_attn_backend_cls(selected_backend: AttentionBackendEnum | None, head_size: int, dtype: dtype) -> str
Get the attention backend class of a device.
fastvideo.platforms.interface.Platform.get_cpu_architecture classmethod¶
fastvideo.platforms.interface.Platform.get_current_memory_usage classmethod¶
get_current_memory_usage(device: Device | None = None) -> float
fastvideo.platforms.interface.Platform.get_device_capability classmethod¶
get_device_capability(device_id: int = 0) -> DeviceCapability | None
Stateless version of :func:torch.cuda.get_device_capability.
fastvideo.platforms.interface.Platform.get_device_communicator_cls classmethod¶
get_device_communicator_cls() -> str
Get device specific communicator class for distributed communication.
Source code in fastvideo/platforms/interface.py
fastvideo.platforms.interface.Platform.get_device_name classmethod¶
fastvideo.platforms.interface.Platform.get_device_total_memory classmethod¶
fastvideo.platforms.interface.Platform.get_device_uuid classmethod¶
fastvideo.platforms.interface.Platform.get_torch_device classmethod¶
fastvideo.platforms.interface.Platform.has_device_capability classmethod¶
Test whether this platform is compatible with a device capability.
The capability argument can either be:
- A tuple (major, minor).
- An integer <major><minor>. (See :meth:DeviceCapability.to_int)
Source code in fastvideo/platforms/interface.py
fastvideo.platforms.interface.Platform.inference_mode classmethod¶
A device-specific wrapper of torch.inference_mode.
This wrapper is recommended because some hardware backends such as TPU
do not support torch.inference_mode. In such a case, they will fall
back to torch.no_grad by overriding this method.
Source code in fastvideo/platforms/interface.py
fastvideo.platforms.interface.Platform.is_async_output_supported classmethod¶
fastvideo.platforms.interface.Platform.seed_everything classmethod¶
seed_everything(seed: int | None = None) -> None
Set the seed of each random module.
torch.manual_seed will set seed on all devices.
Loosely based on: https://github.com/Lightning-AI/pytorch-lightning/blob/2.4.0/src/lightning/fabric/utilities/seed.py#L20
Source code in fastvideo/platforms/interface.py
fastvideo.platforms.interface.Platform.verify_model_arch classmethod¶
verify_model_arch(model_arch: str) -> None
Verify whether the current platform supports the specified model architecture.
- This will raise an error or warning depending on whether the model is supported on the current platform.
- By default all models are considered supported.
Source code in fastvideo/platforms/interface.py
fastvideo.platforms.interface.Platform.verify_quantization classmethod¶
verify_quantization(quant: str) -> None
Verify whether the quantization is supported by the current platform.