vocab_parallel_embedding
Classes
fastvideo.layers.vocab_parallel_embedding.UnquantizedEmbeddingMethod
Bases: QuantizeMethodBase
Unquantized method for embeddings.
Functions
fastvideo.layers.vocab_parallel_embedding.UnquantizedEmbeddingMethod.create_weights
create_weights(layer: Module, input_size_per_partition: int, output_partition_sizes: list[int], input_size: int, output_size: int, params_dtype: dtype, **extra_weight_attrs)
Create weights for embedding layer.
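The rendered page collapses the implementation; the sketch below is a minimal, illustrative version of what an unquantized `create_weights` typically does for an embedding layer (allocate a dense weight of shape `[sum(output_partition_sizes), input_size_per_partition]`, register it on the layer, and attach the extra weight attributes). It is not copied from FastVideo's source.

```python
# Illustrative sketch only, not FastVideo's actual implementation.
import torch
from torch import nn


def create_weights_sketch(layer: nn.Module,
                          input_size_per_partition: int,
                          output_partition_sizes: list[int],
                          input_size: int,
                          output_size: int,
                          params_dtype: torch.dtype,
                          **extra_weight_attrs) -> None:
    # One row per vocab entry in this shard, one column per hidden dimension.
    weight = nn.Parameter(
        torch.empty(sum(output_partition_sizes),
                    input_size_per_partition,
                    dtype=params_dtype),
        requires_grad=False,
    )
    layer.register_parameter("weight", weight)
    # Attach loader metadata (e.g. a weight_loader callback) to the parameter
    # so the checkpoint loader knows how to fill this shard.
    for key, value in extra_weight_attrs.items():
        setattr(weight, key, value)
```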
fastvideo.layers.vocab_parallel_embedding.VocabParallelEmbedding
VocabParallelEmbedding(num_embeddings: int, embedding_dim: int, params_dtype: dtype | None = None, org_num_embeddings: int | None = None, padding_size: int = DEFAULT_VOCAB_PADDING_SIZE, quant_config: QuantizationConfig | None = None, prefix: str = '')
Bases: Module
Embedding parallelized in the vocabulary dimension.
Adapted from torch.nn.Embedding. Note that we pad the vocabulary size to make sure it is divisible by the number of model-parallel GPUs.
In order to support various loading methods, we ensure that LoRA-added embeddings are always at the end of TP-sharded tensors. In other words, we shard base embeddings and LoRA embeddings separately (both padded), and place them in the same tensor. In this example, the original vocab size is 1010, the added vocab size is 16, and the padding size is 64. The total vocab size with padding is therefore 1088 (we first pad 1010 to 1024, add 16, and then pad to 1088). The tensor layout then looks like the following:

    TP1, rank 0 (no sharding):
                            |< --------BASE-------- >|< -BASE PADDING-- >|< -----LORA------ >|< -LORA PADDING-- >|
    corresponding token_id: |  0  |  1  | ... | 1009 |  -1  | ... |  -1  | 1010 | ... | 1025 |  -1  | ... |  -1  |
                     index: |  0  |  1  | ... | 1009 | 1010 | ... | 1023 | 1024 | ... | 1039 | 1040 | ... | 1087 |

    TP2, rank 0:
                            |< -----------------BASE----------------- >|< -----LORA------ >|< -LORA PADDING- >|
    corresponding token_id: |  0  |  1  |  2  | ... | 497 | 498 | ... | 511 | 1010 | ... | 1025 |  -1 | ... |  -1 |
                     index: |  0  |  1  |  2  | ... | 497 | 498 | ... | 511 |  512 | ... |  527 | 528 | ... | 543 |

    TP2, rank 1:
                            |< ----------BASE---------- >|< -BASE PADDING- >|< ---------LORA PADDING--------- >|
    corresponding token_id: | 512 | 513 | 514 | ... | 1009 |  -1 | ... |  -1 |  -1 | ... |  -1 |  -1 | ... |  -1 |
                     index: |  0  |  1  |  2  | ... |  497 | 498 | ... | 511 | 512 | ... | 519 | 520 | ... | 543 |
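The padding arithmetic in this example can be checked directly; the snippet below is illustrative only and simply mirrors the calculation described above (pad the base vocab to a multiple of 64, append the 16 added tokens, pad again, then split the padded total across ranks).

```python
# Reproduces the numbers from the example above (illustrative only).
def pad_to_multiple(size: int, multiple: int = 64) -> int:
    return ((size + multiple - 1) // multiple) * multiple

org_vocab, added_vocab, padding_size = 1010, 16, 64
padded_org_vocab = pad_to_multiple(org_vocab, padding_size)                    # 1024
padded_total = pad_to_multiple(padded_org_vocab + added_vocab, padding_size)   # 1088

print(padded_org_vocab, padded_total)  # 1024 1088
print(padded_total // 2)               # 544 rows per rank for TP=2 (indices 0..543)
```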
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `num_embeddings` | `int` | vocabulary size. | *required* |
| `embedding_dim` | `int` | size of hidden state. | *required* |
| `params_dtype` | `dtype \| None` | type of the parameters. | `None` |
| `org_num_embeddings` | `int \| None` | original vocabulary size (without LoRA). | `None` |
| `padding_size` | `int` | padding size for the vocabulary. | `DEFAULT_VOCAB_PADDING_SIZE` |
| `quant_config` | `QuantizationConfig \| None` | quant config for the layer. | `None` |
| `prefix` | `str` | full name of the layer in the state dict. | `''` |
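The following is a hypothetical usage sketch rather than an excerpt from FastVideo's docs: it assumes the tensor-parallel process groups have already been initialized by the engine, and that the layer is called like `torch.nn.Embedding` (token ids in, hidden states out). The parameter values echo the example in the class docstring; the embedding width of 4096 is arbitrary.

```python
import torch
from fastvideo.layers.vocab_parallel_embedding import VocabParallelEmbedding

# Assumes FastVideo's tensor-parallel groups are already initialized.
embed_tokens = VocabParallelEmbedding(
    num_embeddings=1026,          # 1010 base tokens + 16 LoRA-added tokens
    embedding_dim=4096,           # arbitrary hidden size for illustration
    params_dtype=torch.bfloat16,
    org_num_embeddings=1010,      # base vocab size, without the LoRA additions
    padding_size=64,              # shard boundaries padded to multiples of 64
    prefix="model.embed_tokens",
)

token_ids = torch.tensor([[1, 5, 1009]])
hidden_states = embed_tokens(token_ids)  # expected shape: [1, 3, 4096]
```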
Functions
fastvideo.layers.vocab_parallel_embedding.VocabParallelEmbedding.get_sharded_to_full_mapping
Get a mapping that can be used to reindex the gathered logits for sampling.
During sampling, we gather logits from all ranks. The relationship of index->token_id in the gathered tensor follows the sharded layout outlined in the class docstring. After the gather, however, we want to reindex the final logits tensor so that index->token_id is one-to-one (each index is equal to the token_id it corresponds to). The indices returned by this method allow us to do that.
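As a toy illustration of the idea (not the method's actual output, which also accounts for the per-shard base/LoRA/padding regions): suppose two ranks hold four logit slots each, with the last slot of each shard being padding. After an all-gather the logits are in shard order, and a sharded-to-full index recovers token-id order.

```python
import torch

# Toy example: 2 ranks, 4 slots each; the last slot of each shard is padding.
# Gathered layout: [rank 0: tokens 0, 1, 2, pad | rank 1: tokens 3, 4, 5, pad]
gathered_logits = torch.tensor([10., 11., 12., float("-inf"),
                                13., 14., 15., float("-inf")])

# Positions of the real tokens inside the gathered tensor, in token-id order.
sharded_to_full = torch.tensor([0, 1, 2, 4, 5, 6])

full_logits = gathered_logits[sharded_to_full]
print(full_logits)  # tensor([10., 11., 12., 13., 14., 15.]) -> index == token_id
```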
fastvideo.layers.vocab_parallel_embedding.VocabParallelEmbeddingShardIndices
dataclass
VocabParallelEmbeddingShardIndices(padded_org_vocab_start_index: int, padded_org_vocab_end_index: int, padded_added_vocab_start_index: int, padded_added_vocab_end_index: int, org_vocab_start_index: int, org_vocab_end_index: int, added_vocab_start_index: int, added_vocab_end_index: int)
Indices for a shard of a vocab parallel embedding.
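The field values below are illustrative only, loosely derived from the TP2, rank 0 example in the `VocabParallelEmbedding` docstring (1010 base tokens plus 16 LoRA-added tokens, padded to multiples of 64, split over two ranks). In practice these indices are computed by `VocabParallelEmbedding` itself, so the exact numbers may differ.

```python
from fastvideo.layers.vocab_parallel_embedding import (
    VocabParallelEmbeddingShardIndices)

# Illustrative values, loosely matching TP2, rank 0 from the class docstring.
shard = VocabParallelEmbeddingShardIndices(
    padded_org_vocab_start_index=0,
    padded_org_vocab_end_index=512,      # this rank's slice of the padded base vocab
    padded_added_vocab_start_index=1010,
    padded_added_vocab_end_index=1042,   # this rank's slice of the padded added vocab
    org_vocab_start_index=0,
    org_vocab_end_index=512,             # real base token ids owned by this rank
    added_vocab_start_index=1010,
    added_vocab_end_index=1026,          # real LoRA token ids owned by this rank
)

# Real (non-padding) rows stored by this shard: 512 base + 16 added = 528,
# which matches local indices 0..527 in the docstring diagram.
real_rows = ((shard.org_vocab_end_index - shard.org_vocab_start_index) +
             (shard.added_vocab_end_index - shard.added_vocab_start_index))
print(real_rows)  # 528
```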