fastvideo.v1.dataset.transform#

Module Contents#

Classes#

CenterCropResizeVideo

Center-crop the video using the short side as the crop length, then resize to the specified size

Normalize255

Convert tensor data type from uint8 to float and divide values by 255.0

TemporalRandomCrop

Temporally crop the given frame indices at a random location.

Functions#

center_crop_th_tw

crop

Parameters:

clip (torch.tensor) – Video clip to be cropped. Size is (T, C, H, W)

normalize_video

Convert tensor data type from uint8 to float, divide values by 255.0, and permute the dimensions of the clip tensor

resize

API#

class fastvideo.v1.dataset.transform.CenterCropResizeVideo(size, top_crop=False, interpolation_mode='bilinear')[source]#

Center-crop the video using the short side as the crop length, then resize to the specified size

Initialization
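
The crop geometry this transform describes can be sketched in plain Python. This is an illustration of the short-side center crop only, not the library's code; the real class operates on (T, C, H, W) torch tensors and the helper name below is hypothetical:

```python
def short_side_center_crop_box(h, w):
    # Use the short side as the crop length and center the square crop.
    s = min(h, w)
    top = (h - s) // 2
    left = (w - s) // 2
    return top, left, s, s

# A 480x640 frame yields the centered 480x480 square; the transform
# would then resize that square to the requested output size.
```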

class fastvideo.v1.dataset.transform.Normalize255[source]#

Convert tensor data type from uint8 to float and divide values by 255.0

Initialization

class fastvideo.v1.dataset.transform.TemporalRandomCrop(size)[source]#

Temporally crop the given frame indices at a random location.

Parameters:

size (int) – Desired number of frames to be seen by the model.

Initialization
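
The random window selection can be sketched in plain Python. This is an illustration of the described behavior under the assumption that the crop clamps to the video length, not the library's implementation:

```python
import random

def temporal_random_crop(total_frames, size):
    # Choose a random begin index so that [begin, end) spans `size`
    # frames (or fewer, when the video is shorter than `size`).
    rand_end = max(0, total_frames - size)
    begin = random.randint(0, rand_end)
    end = min(begin + size, total_frames)
    return begin, end
```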

fastvideo.v1.dataset.transform.center_crop_th_tw(clip, th, tw, top_crop) torch.Tensor[source]#
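A hedged sketch of the index math such a crop implies, assuming th/tw is the target aspect ratio and top_crop anchors the crop at the top instead of the center (the function name and logic below are illustrative, not the library's code):

```python
def center_crop_box_th_tw(h, w, th, tw, top_crop=False):
    # Crop the largest region with aspect ratio th:tw,
    # centered horizontally and vertically unless top_crop is set.
    if h / w > th / tw:
        new_h, new_w = int(w * th / tw), w   # too tall: trim height
    else:
        new_h, new_w = h, int(h * tw / th)   # too wide: trim width
    top = 0 if top_crop else (h - new_h) // 2
    left = (w - new_w) // 2
    return top, left, new_h, new_w
```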
fastvideo.v1.dataset.transform.crop(clip, i, j, h, w) torch.Tensor[source]#
Parameters:

clip (torch.tensor) – Video clip to be cropped. Size is (T, C, H, W)
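
Spatial cropping of this kind is index slicing, presumably equivalent to `clip[..., i:i + h, j:j + w]`. A pure-Python equivalent on a single 2-D frame, for illustration:

```python
def crop_frame(frame, i, j, h, w):
    # frame is a 2-D list of pixel rows; keep rows i..i+h and
    # columns j..j+w, i.e. frame[i:i+h, j:j+w] in tensor terms.
    return [row[j:j + w] for row in frame[i:i + h]]
```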

fastvideo.v1.dataset.transform.normalize_video(clip) torch.Tensor[source]#

Convert tensor data type from uint8 to float, divide values by 255.0, and permute the dimensions of the clip tensor

Parameters:

clip (torch.tensor, dtype=torch.uint8) – Size is (T, C, H, W)

Returns:

Size is (T, C, H, W)

Return type:

clip (torch.tensor, dtype=torch.float)
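
The value conversion can be illustrated on plain Python numbers (the actual function operates on a uint8 torch tensor of shape (T, C, H, W)):

```python
def normalize255(values):
    # Map uint8 intensities in [0, 255] to floats in [0.0, 1.0].
    return [v / 255.0 for v in values]
```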

fastvideo.v1.dataset.transform.resize(clip, target_size, interpolation_mode) torch.Tensor[source]#
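Resizing a clip to `target_size` presumably delegates to a torch interpolation routine selected by `interpolation_mode`; a self-contained nearest-neighbor sketch of the idea on a single 2-D frame (illustrative only, not the library's implementation):

```python
def resize_nearest(frame, target_h, target_w):
    # Nearest-neighbor resize of a 2-D list of pixel rows: each output
    # pixel copies the source pixel its coordinates map back onto.
    h, w = len(frame), len(frame[0])
    return [[frame[y * h // target_h][x * w // target_w]
             for x in range(target_w)]
            for y in range(target_h)]
```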