longcat

Classes

fastvideo.configs.pipelines.longcat.LongCatDiTArchConfig dataclass

LongCatDiTArchConfig(
    stacked_params_mapping: list[tuple[str, str, str]] = list(),
    _fsdp_shard_conditions: list = list(),
    _compile_conditions: list = list(),
    param_names_mapping: dict = dict(),
    reverse_param_names_mapping: dict = dict(),
    lora_param_names_mapping: dict = dict(),
    _supported_attention_backends: tuple[AttentionBackendEnum, ...] = (SLIDING_TILE_ATTN, SAGE_ATTN, FLASH_ATTN, TORCH_SDPA, VIDEO_SPARSE_ATTN, VMOBA_ATTN, SAGE_ATTN_THREE),
    hidden_size: int = 0,
    num_attention_heads: int = 0,
    num_channels_latents: int = 0,
    in_channels: int = 16,
    out_channels: int = 16,
    exclude_lora_layers: list[str] = list(),
    boundary_ratio: float | None = None,
    adaln_tembed_dim: int = 512,
    caption_channels: int = 4096,
    depth: int = 48,
    enable_bsa: bool = False,
    enable_flashattn3: bool = False,
    enable_flashattn2: bool = True,
    enable_xformers: bool = False,
    frequency_embedding_size: int = 256,
    mlp_ratio: int = 4,
    num_heads: int = 32,
    text_tokens_zero_pad: bool = True,
    patch_size: list[int] = (lambda: [1, 2, 2])(),
    cp_split_hw: list[int] | None = None,
    bsa_params: dict | None = None,
)

Bases: DiTArchConfig

Extended DiTArchConfig with LongCat-specific fields.

NOTE: This is for Phase 1 wrapper compatibility. For native model (Phase 2), use LongCatVideoConfig from fastvideo.configs.models.dits.longcat instead.
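
A minimal instantiation sketch, assuming only that the dataclass is importable from the module path shown above. The overridden values are illustrative, and the "chunk_3d_shape" key inside bsa_params is an assumption taken from the 704p notes further down, not a verified schema:

from fastvideo.configs.pipelines.longcat import LongCatDiTArchConfig

# Defaults match the signature above; overrides here are purely illustrative.
arch = LongCatDiTArchConfig(
    enable_bsa=True,                           # opt in to block-sparse attention
    bsa_params={"chunk_3d_shape": [4, 4, 4]},  # assumed key name (see 704p notes below)
)
print(arch.depth, arch.num_heads, arch.patch_size)   # 48 32 [1, 2, 2]
print(arch.caption_channels, arch.adaln_tembed_dim)  # 4096 512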

fastvideo.configs.pipelines.longcat.LongCatT2V480PConfig dataclass

LongCatT2V480PConfig(
    model_path: str = '',
    pipeline_config_path: str | None = None,
    embedded_cfg_scale: float = 6.0,
    flow_shift: float | None = None,
    disable_autocast: bool = False,
    is_causal: bool = False,
    dit_config: DiTConfig = (lambda: DiTConfig(arch_config=LongCatDiTArchConfig()))(),
    dit_precision: str = 'bf16',
    vae_config: VAEConfig = WanVAEConfig(),
    vae_precision: str = 'bf16',
    vae_tiling: bool = False,
    vae_sp: bool = False,
    image_encoder_config: EncoderConfig = EncoderConfig(),
    image_encoder_precision: str = 'fp32',
    text_encoder_configs: tuple[T5Config, ...] = (lambda: (T5Config(),))(),
    text_encoder_precisions: tuple[str, ...] = (lambda: ('bf16',))(),
    preprocess_text_funcs: tuple[Callable[[str], str], ...] = (lambda: (longcat_preprocess_text,))(),
    postprocess_text_funcs: tuple[Callable[[BaseEncoderOutput], Tensor], ...] = (lambda: (umt5_postprocess_text,))(),
    pos_magic: str | None = None,
    neg_magic: str | None = None,
    timesteps_scale: bool | None = None,
    mask_strategy_file_path: str | None = None,
    STA_mode: STA_Mode = STA_INFERENCE,
    skip_time_steps: int = 15,
    dmd_denoising_steps: list[int] | None = None,
    ti2v_task: bool = False,
    boundary_ratio: float | None = None,
    enable_kv_cache: bool = True,
    offload_kv_cache: bool = False,
    enable_bsa: bool = False,
    use_distill: bool = False,
    enhance_hf: bool = False,
    bsa_params: dict | None = None,
    bsa_sparsity: float | None = None,
    bsa_cdf_threshold: float | None = None,
    bsa_chunk_q: list[int] | None = None,
    bsa_chunk_k: list[int] | None = None,
    t_thresh: float | None = None,
)

Bases: PipelineConfig

Configuration for LongCat pipeline (480p) aligned to LongCat-Video modules.

Components expected by loaders (a usage sketch follows this list):
  • tokenizer: AutoTokenizer
  • text_encoder: UMT5EncoderModel
  • transformer: LongCatVideoTransformer3DModel (Phase 1 wrapper) OR LongCatTransformer3DModel (Phase 2 native)
  • vae: AutoencoderKLWan (Wan VAE, 4x8 compression)
  • scheduler: FlowMatchEulerDiscreteScheduler
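
A hedged sketch of building the 480p config; the model path string is a placeholder rather than a verified checkpoint id, and the printed values come straight from the defaults in the signature above:

from fastvideo.configs.pipelines.longcat import LongCatT2V480PConfig

# "path/to/LongCat-Video" is a placeholder, not a real checkpoint location.
config = LongCatT2V480PConfig(model_path="path/to/LongCat-Video")
print(config.dit_precision, config.vae_precision)  # bf16 bf16
print(config.enable_kv_cache, config.enable_bsa)   # True False (480p leaves BSA off)
print(type(config.vae_config).__name__)            # WanVAEConfig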

fastvideo.configs.pipelines.longcat.LongCatT2V704PConfig dataclass

LongCatT2V704PConfig(
    model_path: str = '',
    pipeline_config_path: str | None = None,
    embedded_cfg_scale: float = 6.0,
    flow_shift: float | None = None,
    disable_autocast: bool = False,
    is_causal: bool = False,
    dit_config: DiTConfig = (lambda: DiTConfig(arch_config=LongCatDiTArchConfig()))(),
    dit_precision: str = 'bf16',
    vae_config: VAEConfig = WanVAEConfig(),
    vae_precision: str = 'bf16',
    vae_tiling: bool = False,
    vae_sp: bool = False,
    image_encoder_config: EncoderConfig = EncoderConfig(),
    image_encoder_precision: str = 'fp32',
    text_encoder_configs: tuple[T5Config, ...] = (lambda: (T5Config(),))(),
    text_encoder_precisions: tuple[str, ...] = (lambda: ('bf16',))(),
    preprocess_text_funcs: tuple[Callable[[str], str], ...] = (lambda: (longcat_preprocess_text,))(),
    postprocess_text_funcs: tuple[Callable[[BaseEncoderOutput], Tensor], ...] = (lambda: (umt5_postprocess_text,))(),
    pos_magic: str | None = None,
    neg_magic: str | None = None,
    timesteps_scale: bool | None = None,
    mask_strategy_file_path: str | None = None,
    STA_mode: STA_Mode = STA_INFERENCE,
    skip_time_steps: int = 15,
    dmd_denoising_steps: list[int] | None = None,
    ti2v_task: bool = False,
    boundary_ratio: float | None = None,
    enable_kv_cache: bool = True,
    offload_kv_cache: bool = False,
    enable_bsa: bool = True,
    use_distill: bool = False,
    enhance_hf: bool = False,
    bsa_params: dict | None = None,
    bsa_sparsity: float | None = None,
    bsa_cdf_threshold: float | None = None,
    bsa_chunk_q: list[int] | None = None,
    bsa_chunk_k: list[int] | None = None,
    t_thresh: float | None = None,
)

Bases: LongCatT2V480PConfig

Configuration for LongCat pipeline (704p) with BSA enabled by default.

Uses the same resolution and BSA parameters as the original LongCat refinement stage. BSA parameters are configured in the transformer config.json with chunk_3d_shape=[4,4,4]; the shape arithmetic works out as follows (replayed in the sketch after this list):

  • Input: 704×1280×96
  • VAE (8x): 88×160×96
  • Patch [1,2,2]: 44×80×96
  • Chunk [4,4,4]: 96%4=0, 44%4=0, 80%4=0 ✅
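
The divisibility argument above can be checked in a few lines. This is a sketch of the arithmetic only, assuming the 8x spatial VAE compression and the [1, 2, 2] patch size shown in the signatures:

# 704p shape arithmetic from the notes above (illustrative check).
H, W, T = 704, 1280, 96       # input height, width, and frame count
lh, lw = H // 8, W // 8       # 8x spatial VAE compression -> 88 x 160
ph, pw = lh // 2, lw // 2     # patch_size [1, 2, 2] halves H and W -> 44 x 80
ct, ch, cw = 4, 4, 4          # chunk_3d_shape from the transformer config.json
assert T % ct == 0 and ph % ch == 0 and pw % cw == 0  # 96, 44, 80 all divisible by 4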

Functions

fastvideo.configs.pipelines.longcat.longcat_preprocess_text

longcat_preprocess_text(prompt: str) -> str

Clean and preprocess text like the original LongCat implementation.

This function applies the same text cleaning pipeline as the original LongCat-Video implementation to ensure identical tokenization results.

Steps:
1. basic_clean: Fix unicode issues and unescape HTML entities
2. whitespace_clean: Normalize whitespace to single spaces

Parameters:

  Name    Type  Description            Default
  prompt  str   Raw input text prompt  required

Returns:

  Type  Description
  str   Cleaned and normalized text prompt

Source code in fastvideo/configs/pipelines/longcat.py
# Imports this function relies on (defined at module level):
import html
import re

import ftfy

def longcat_preprocess_text(prompt: str) -> str:
    """Clean and preprocess text like the original LongCat implementation.

    This function applies the same text cleaning pipeline as the original
    LongCat-Video implementation to ensure identical tokenization results.

    Steps:
    1. basic_clean: Fix unicode issues and unescape HTML entities
    2. whitespace_clean: Normalize whitespace to single spaces

    Args:
        prompt: Raw input text prompt

    Returns:
        Cleaned and normalized text prompt
    """
    # basic_clean: fix unicode and HTML entities
    text = ftfy.fix_text(prompt)
    text = html.unescape(html.unescape(text))
    text = text.strip()

    # whitespace_clean: normalize whitespace
    text = re.sub(r"\s+", " ", text)
    text = text.strip()

    return text
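
A short usage example; the raw string is fabricated to exercise both cleaning steps (HTML-entity unescaping, then whitespace collapsing):

from fastvideo.configs.pipelines.longcat import longcat_preprocess_text

raw = "A cat &amp; a dog\n   jumping\tover the   fence  "
print(repr(longcat_preprocess_text(raw)))
# 'A cat & a dog jumping over the fence'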

fastvideo.configs.pipelines.longcat.umt5_postprocess_text

umt5_postprocess_text(outputs: BaseEncoderOutput) -> Tensor

Postprocess UMT5/T5 encoder outputs into fixed-length 512-token embeddings.

Source code in fastvideo/configs/pipelines/longcat.py
import torch

def umt5_postprocess_text(outputs: BaseEncoderOutput) -> torch.Tensor:
    """Postprocess UMT5/T5 encoder outputs to fixed-length 512 embeddings."""
    mask: torch.Tensor = outputs.attention_mask
    hidden_state: torch.Tensor = outputs.last_hidden_state
    # Number of non-padding tokens in each prompt.
    seq_lens = mask.gt(0).sum(dim=1).long()
    assert torch.isnan(hidden_state).sum() == 0
    # Trim each sequence to its true length, then zero-pad back to 512 tokens.
    prompt_embeds = [u[:v] for u, v in zip(hidden_state, seq_lens, strict=True)]
    prompt_embeds_tensor: torch.Tensor = torch.stack(
        [torch.cat([u, u.new_zeros(512 - u.size(0), u.size(1))]) for u in prompt_embeds],
        dim=0,
    )
    return prompt_embeds_tensor
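
A hedged sketch of the padding behavior, using SimpleNamespace as a stand-in for BaseEncoderOutput since the function only reads attention_mask and last_hidden_state; the shapes are illustrative:

import torch
from types import SimpleNamespace

from fastvideo.configs.pipelines.longcat import umt5_postprocess_text

# Stand-in for BaseEncoderOutput: only the two attributes the function reads.
outputs = SimpleNamespace(
    attention_mask=torch.tensor([[1, 1, 1, 0, 0]]),  # 3 valid tokens out of 5
    last_hidden_state=torch.randn(1, 5, 4096),       # (batch, seq_len, caption_channels)
)
embeds = umt5_postprocess_text(outputs)  # type: ignore[arg-type]
print(embeds.shape)                      # torch.Size([1, 512, 4096])
print(embeds[0, 3:].abs().sum().item())  # 0.0 -- rows past the true length are zero padding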