FIX Correctly determine word embeddings on Deberta (#2257)
After a recent change in transformers (huggingface/transformers#22105),
PEFT could no longer determine the word embeddings of Deberta models. This PR
provides a minimal fix that correctly determines the word embeddings again.

Details

Previously, the word embeddings were determined in the following manner (sketched in code after the list):

1. Find the transformers backbone by checking the base model's children
for PreTrainedModel instances.
2. If none is found, the model itself is considered the transformers
backbone.
3. On the backbone, look for a module whose weight has the same first dimension
as the vocab size. This module is assumed to be the word embeddings.
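For illustration, here is a condensed sketch of that old lookup. It is a simplified sketch of the described steps, not the exact PEFT implementation, and bert-base-uncased is only an example checkpoint:

from transformers import AutoModel, PreTrainedModel

base_model = AutoModel.from_pretrained("bert-base-uncased")  # example model, not from the PR

# Steps 1 and 2: use the first child that is a PreTrainedModel, else the model itself.
transformer_backbone = next(
    (module for module in base_model.children() if isinstance(module, PreTrainedModel)),
    base_model,
)

# Step 3: guess the word embeddings as the first parameter whose first dimension equals the vocab size.
word_embeddings = None
for named_param, value in transformer_backbone.named_parameters():
    if value.shape[0] == base_model.config.vocab_size:
        word_embeddings = transformer_backbone.get_submodule(named_param.replace(".weight", ""))
        break

print(word_embeddings)  # e.g. Embedding(30522, 768, padding_idx=0) for bert-base-uncased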

Before the mentioned transformers PR, step 1 did not find anything, so step 2
was applied. After the PR, however, the DebertaEncoder is an
instance of PreTrainedModel (I asked internally; this is intended).
Therefore, the encoder is now considered the transformers backbone. But
the encoder does not have the word embeddings attribute, so step 3 fails.
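
The breakage can be observed directly on a Deberta model. The checkpoint below is only an example, and the isinstance result depends on the installed transformers version:

from transformers import AutoModel, PreTrainedModel

model = AutoModel.from_pretrained("microsoft/deberta-v3-base")  # example checkpoint

# With the transformers PR applied, the encoder itself is a PreTrainedModel and is picked as the backbone.
print(isinstance(model.encoder, PreTrainedModel))

# But none of the encoder's parameters has vocab_size as its first dimension, so step 3 finds nothing.
print(any(p.shape[0] == model.config.vocab_size for p in model.encoder.parameters()))  # expected: False

# The word embeddings live one level up, on the model itself.
print(model.embeddings.word_embeddings)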

The fix in this PR is to first explicitly check for
model.embeddings.word_embeddings and, if this attribute is found, use it
as the word embeddings. Only when it is not found do we fall back to the other
method described above. This way, we can successfully determine the word
embeddings on models like Deberta.
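
The try/except in the fix leans on the fact that nn.Module.get_submodule raises AttributeError when the dotted path does not resolve. A tiny standalone illustration with a toy module (not PEFT code):

import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.embeddings = nn.Module()
        self.embeddings.word_embeddings = nn.Embedding(10, 4)

print(Toy().get_submodule("embeddings.word_embeddings"))  # Embedding(10, 4)

try:
    nn.Linear(4, 4).get_submodule("embeddings.word_embeddings")
except AttributeError:
    print("no embeddings.word_embeddings here, fall back to the vocab-size heuristic")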

This whole code is a bit messy and could probably be improved. However,
changing the logic too much could inadvertently break existing
models that are not covered by the tests. Therefore, I chose this
approach, which leaves the existing logic mostly intact.
BenjaminBossan authored Dec 4, 2024
1 parent c057589 commit f86522e
41 changes: 27 additions & 14 deletions src/peft/peft_model.py
@@ -630,20 +630,33 @@ def _setup_prompt_encoder(self, adapter_name: str):
         if config.num_transformer_submodules is None:
             config.num_transformer_submodules = 2 if config.task_type == TaskType.SEQ_2_SEQ_LM else 1
 
-        for named_param, value in list(transformer_backbone.named_parameters()):
-            # for ZeRO-3, the tensor is sharded across accelerators and deepspeed modifies it to a tensor with shape [0]
-            # the actual unsharded shape is stored in "ds_shape" attribute
-            # special handling is needed in case the model is initialized in deepspeed.zero.Init() context or HfDeepSpeedConfig
-            # has been called before
-            # For reference refer to issue: https://github.com/huggingface/peft/issues/996
-            deepspeed_distributed_tensor_shape = getattr(value, "ds_shape", None)
-
-            if value.shape[0] == self.base_model.config.vocab_size or (
-                deepspeed_distributed_tensor_shape is not None
-                and deepspeed_distributed_tensor_shape[0] == self.base_model.config.vocab_size
-            ):
-                self.word_embeddings = transformer_backbone.get_submodule(named_param.replace(".weight", ""))
-                break
+        # determine the word embeddings
+        word_embeddings = None
+        try:
+            # First try to find the word embeddings based on the module name, this should work for models like Bert,
+            # Roberta, Deberta, etc.
+            word_embeddings = self.base_model.get_submodule("embeddings.word_embeddings")
+        except AttributeError:
+            pass
+
+        if word_embeddings is None:
+            # Word embeddings could not be determined. Next try to guess them by checking which parameter has the size
+            # of the vocab.
+            for named_param, value in list(transformer_backbone.named_parameters()):
+                # for ZeRO-3, the tensor is sharded across accelerators and deepspeed modifies it to a tensor with shape
+                # [0] the actual unsharded shape is stored in "ds_shape" attribute special handling is needed in case
+                # the model is initialized in deepspeed.zero.Init() context or HfDeepSpeedConfig has been called before
+                # For reference refer to issue: https://github.com/huggingface/peft/issues/996
+                deepspeed_distributed_tensor_shape = getattr(value, "ds_shape", None)
+
+                if value.shape[0] == self.base_model.config.vocab_size or (
+                    deepspeed_distributed_tensor_shape is not None
+                    and deepspeed_distributed_tensor_shape[0] == self.base_model.config.vocab_size
+                ):
+                    word_embeddings = transformer_backbone.get_submodule(named_param.replace(".weight", ""))
+                    break
+
+        self.word_embeddings = word_embeddings
 
         if config.peft_type == PeftType.PROMPT_TUNING:
             prompt_encoder = PromptEmbedding(config, self.word_embeddings)
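
For context, a hedged end-to-end example of the code path this hunk affects; the checkpoint, task type, and hyperparameters are illustrative and not taken from the PR or its tests:

from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModel

base = AutoModel.from_pretrained("microsoft/deberta-v3-base")  # example checkpoint
peft_config = PromptTuningConfig(task_type=TaskType.FEATURE_EXTRACTION, num_virtual_tokens=10)

# _setup_prompt_encoder runs inside get_peft_model; with this fix it resolves
# base.embeddings.word_embeddings directly instead of searching the encoder backbone.
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()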
