fastvideo.v1.layers.vocab_parallel_embedding#
Module Contents#
Classes#
| `UnquantizedEmbeddingMethod` | Unquantized method for embeddings. |
| `VocabParallelEmbedding` | Embedding parallelized in the vocabulary dimension. |
| `VocabParallelEmbeddingShardIndices` | Indices for a shard of a vocab parallel embedding. |
Functions#
| `pad_vocab_size` | Pad the vocab size to the given value. |
| `get_masked_input_and_mask` | |
Data#
API#
- class fastvideo.v1.layers.vocab_parallel_embedding.UnquantizedEmbeddingMethod[source]#
Bases: fastvideo.v1.layers.quantization.base_config.QuantizeMethodBase
Unquantized method for embeddings.
- apply(layer: torch.nn.Module, x: torch.Tensor, bias: Optional[torch.Tensor] = None) → torch.Tensor [source]#
- create_weights(layer: torch.nn.Module, input_size_per_partition: int, output_partition_sizes: List[int], input_size: int, output_size: int, params_dtype: torch.dtype, **extra_weight_attrs)[source]#
Create weights for embedding layer.
- embedding(layer: torch.nn.Module, input_: torch.Tensor) → torch.Tensor [source]#
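Taken together, the three methods mirror the `QuantizeMethodBase` contract: `create_weights` registers a plain, full-precision weight on the layer, `embedding` performs the table lookup, and `apply` typically reuses the same weight as a linear projection (e.g. for a tied LM head). The sketch below illustrates that behavior under those assumptions; the helper names (`create_weights_sketch`, `embedding_sketch`, `apply_sketch`) are hypothetical and not fastvideo's actual implementation.

```python
from typing import List, Optional

import torch
import torch.nn.functional as F
from torch import nn


def create_weights_sketch(layer: nn.Module,
                          input_size_per_partition: int,
                          output_partition_sizes: List[int],
                          params_dtype: torch.dtype) -> None:
    # One dense weight covering all output partitions of this rank's shard.
    weight = nn.Parameter(torch.empty(sum(output_partition_sizes),
                                      input_size_per_partition,
                                      dtype=params_dtype))
    layer.register_parameter("weight", weight)


def embedding_sketch(layer: nn.Module, input_: torch.Tensor) -> torch.Tensor:
    # Plain (unquantized) table lookup; no dequantization step needed.
    return F.embedding(input_, layer.weight)


def apply_sketch(layer: nn.Module,
                 x: torch.Tensor,
                 bias: Optional[torch.Tensor] = None) -> torch.Tensor:
    # Standard linear transform when the embedding weight doubles as an
    # output projection.
    return F.linear(x, layer.weight, bias)
```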
- class fastvideo.v1.layers.vocab_parallel_embedding.VocabParallelEmbedding(num_embeddings: int, embedding_dim: int, params_dtype: Optional[torch.dtype] = None, org_num_embeddings: Optional[int] = None, padding_size: int = DEFAULT_VOCAB_PADDING_SIZE, quant_config: Optional[fastvideo.v1.layers.quantization.base_config.QuantizationConfig] = None, prefix: str = '')[source]#
Bases: torch.nn.Module
Embedding parallelized in the vocabulary dimension.
Adapted from torch.nn.Embedding. Note that we pad the vocabulary size to make sure it is divisible by the number of model-parallel GPUs.
In order to support various loading methods, we ensure that LoRA-added embeddings are always at the end of TP-sharded tensors. In other words, we shard base embeddings and LoRA embeddings separately (both padded) and place them in the same tensor.
In this example, the original vocab size is 1010, the added vocab size is 16, and we pad to multiples of 64. The total vocab size with padding is therefore 1088 (we first pad 1010 to 1024, add 16, and then pad to 1088). The tensor layout looks like the following:
TP1, rank 0 (no sharding):
|< --------BASE-------- >|< -BASE PADDING- >|< -----LORA----- >|< -LORA PADDING- >|
corresponding token_id: | 0 | 1 | ... | 1009 | -1 | ... | -1 | 1010 | ... | 1025 | -1 | ... | -1 |
index: | 0 | 1 | ... | 1009 | 1010 | ... | 1023 | 1024 | ... | 1039 | 1040 | ... | 1087 |
TP2, rank 0:
|< ------------------BASE------------------ >|< -----LORA----- >|< -LORA PADDING- >|
corresponding token_id: | 0 | 1 | 2 | ... | 497 | 498 | ... | 511 | 1010 | ... | 1025 | -1 | ... | -1 |
index: | 0 | 1 | 2 | ... | 497 | 498 | ... | 511 | 512 | ... | 527 | 528 | ... | 543 |
TP2, rank 1:
|< ----------BASE---------- >|< -BASE PADDING- >|< --------LORA PADDING-------- >|
corresponding token_id: | 512 | 513 | 514 | ... | 1009 | -1 | ... | -1 | -1 | ... | -1 | -1 | ... | -1 |
index: | 0 | 1 | 2 | ... | 497 | 498 | ... | 511 | 512 | ... | 519 | 520 | ... | 543 |
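The sizes above follow from rounding each section up to a multiple of the padding size. A small worked sketch (plain Python, independent of fastvideo; `pad_to_multiple` is a hypothetical helper) reproduces the 1088-entry padded vocab and the 544-entry per-rank shard for TP size 2:

```python
def pad_to_multiple(n: int, multiple: int) -> int:
    # Round n up to the nearest multiple (the same rule pad_vocab_size applies).
    return ((n + multiple - 1) // multiple) * multiple


org_vocab, added_vocab, padding, tp_size = 1010, 16, 64, 2

padded_base = pad_to_multiple(org_vocab, padding)                   # 1010 -> 1024
padded_total = pad_to_multiple(padded_base + added_vocab, padding)  # 1040 -> 1088
shard_size = padded_total // tp_size                                # 544 per rank

assert (padded_base, padded_total, shard_size) == (1024, 1088, 544)
```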
- Parameters:
num_embeddings – vocabulary size.
embedding_dim – size of the hidden state.
params_dtype – type of the parameters.
org_num_embeddings – original vocabulary size (without LoRA).
padding_size – padding size for the vocabulary.
quant_config – quantization config for the layer.
prefix – full name of the layer in the state dict.
Initialization
Initialize internal Module state, shared by both nn.Module and ScriptModule.
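A hedged usage sketch, assuming the distributed/tensor-parallel state that fastvideo expects has already been initialized elsewhere in the process and that the module's forward performs the sharded lookup and cross-rank reduction; the vocabulary and hidden sizes are illustrative:

```python
import torch

from fastvideo.v1.layers.vocab_parallel_embedding import VocabParallelEmbedding

# Illustrative sizes: a 32000-token vocab embedded into a 4096-dim hidden state.
# The layer pads the vocab so it divides evenly across tensor-parallel ranks.
embed_tokens = VocabParallelEmbedding(
    num_embeddings=32000,
    embedding_dim=4096,
    params_dtype=torch.bfloat16,
    prefix="model.embed_tokens",  # hypothetical state-dict prefix
)

token_ids = torch.tensor([[1, 5, 42]])  # [batch, seq_len]
hidden = embed_tokens(token_ids)        # [batch, seq_len, 4096]
```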
- get_sharded_to_full_mapping() → Optional[List[int]] [source]#
Get a mapping that can be used to reindex the gathered logits for sampling.
During sampling, we gather logits from all ranks. The relationship of index->token_id follows the same format as outlined in the class docstring. However, after the gather, we want to reindex the final logits tensor so that index->token_id maps one-to-one (i.e., the index is always equal to the token_id it corresponds to). The indices returned by this method allow us to do that.
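A hedged sketch of how the returned mapping might be applied to an all-gathered logits tensor so that column i corresponds to token_id i; the function name `reindex_gathered_logits` and the shape of `full_logits` are assumptions for illustration, not fastvideo API:

```python
from typing import List, Optional

import torch


def reindex_gathered_logits(full_logits: torch.Tensor,
                            mapping: Optional[List[int]]) -> torch.Tensor:
    # `full_logits` is assumed to be the [..., padded_vocab] tensor obtained by
    # gathering every rank's logits shard, laid out in the sharded order from
    # the class docstring; `mapping` is the value returned by
    # get_sharded_to_full_mapping().
    if mapping is None:
        return full_logits
    index = torch.tensor(mapping, device=full_logits.device)
    # After this, full_logits[..., i] is the logit for token_id i.
    return full_logits.index_select(-1, index)
```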
- weight_loader(param: torch.nn.parameter.Parameter, loaded_weight: torch.Tensor)[source]#
- class fastvideo.v1.layers.vocab_parallel_embedding.VocabParallelEmbeddingShardIndices[source]#
Indices for a shard of a vocab parallel embedding.
- fastvideo.v1.layers.vocab_parallel_embedding.get_masked_input_and_mask(input_: torch.Tensor, org_vocab_start_index: int, org_vocab_end_index: int, num_org_vocab_padding: int, added_vocab_start_index: int, added_vocab_end_index: int) → Tuple[torch.Tensor, torch.Tensor] [source]#
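A hedged usage sketch with toy shard boundaries, assuming vLLM-style semantics where the first return value holds token ids remapped to this rank's local row indices and the second flags positions that fall outside the rank's base/added ranges; all boundary values below are illustrative:

```python
import torch

from fastvideo.v1.layers.vocab_parallel_embedding import get_masked_input_and_mask

# Toy TP2 rank-0 shard: base token ids [0, 512), no base padding in this toy
# case, added (LoRA) token ids [1010, 1018).
token_ids = torch.tensor([3, 511, 700, 1010])

masked_ids, invalid_mask = get_masked_input_and_mask(
    token_ids,
    org_vocab_start_index=0,
    org_vocab_end_index=512,
    num_org_vocab_padding=0,
    added_vocab_start_index=1010,
    added_vocab_end_index=1018,
)

# In-shard tokens map to local row indices; out-of-shard tokens (700 here) are
# flagged so their embedding rows can be zeroed out before the all-reduce.
```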
- fastvideo.v1.layers.vocab_parallel_embedding.pad_vocab_size(vocab_size: int, pad_to: int = DEFAULT_VOCAB_PADDING_SIZE) → int [source]#
Pad the vocab size to the given value.
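The padding rule is ceiling-to-multiple arithmetic. A one-line equivalent sketch follows; `pad_vocab_size_sketch` is a hypothetical name, and the default of 64 is assumed from the padding used in the class docstring example rather than read from DEFAULT_VOCAB_PADDING_SIZE:

```python
def pad_vocab_size_sketch(vocab_size: int, pad_to: int = 64) -> int:
    # Round vocab_size up to the nearest multiple of pad_to.
    return ((vocab_size + pad_to - 1) // pad_to) * pad_to


assert pad_vocab_size_sketch(1010) == 1024
assert pad_vocab_size_sketch(1024) == 1024
assert pad_vocab_size_sketch(1040) == 1088
```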