FIX Correctly determine word embeddings on Deberta (#2257)
After a recent change in transformers (huggingface/transformers#22105), PEFT could no longer determine the word embeddings from Deberta. This PR provides a very minimal fix that correctly determines the word embeddings again.

Details

Previously, the word embeddings were determined in the following manner:

1. Find the `transformers_backbone` by checking the base model's children for `PreTrainedModel` instances.
2. If none is found, the model itself is considered the transformers backbone.
3. On the backbone, check for modules whose weight has the same size as the vocab size. This module is assumed to be the word embeddings.

Before the mentioned transformers PR, step 1 did not find anything, so step 2 was applied. After the PR, however, `DebertaEncoder` is an instance of `PreTrainedModel` (I asked internally; this is intended). Therefore, the encoder is now considered the transformers backbone. But the encoder does not have the word embeddings attribute, so step 3 fails.

The fix in this PR is to first explicitly check for `model.embeddings.word_embeddings` and, if this attribute is found, use it as the word embeddings. Only when it is not found do we fall back to the method described above. This way, we can successfully determine the word embeddings on models like Deberta (see the sketch below).

This whole code is a bit messy and could probably be improved. However, changing the logic too much could inadvertently break existing models that are not covered by the tests. Therefore, I chose this approach, which leaves the existing logic mostly intact.
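A minimal sketch of the fixed lookup order, assuming a hypothetical helper name (`find_word_embeddings`); the actual PEFT code differs in structure and naming:

```python
import torch.nn as nn
from transformers import PreTrainedModel


def find_word_embeddings(model, vocab_size):
    """Hypothetical helper illustrating the lookup order after the fix."""
    # New explicit check: models like Deberta expose the word embeddings
    # directly at model.embeddings.word_embeddings.
    embeddings = getattr(model, "embeddings", None)
    word_embeddings = getattr(embeddings, "word_embeddings", None)
    if isinstance(word_embeddings, nn.Embedding):
        return word_embeddings

    # Fallback to the previous logic:
    # 1./2. Find the transformers backbone by looking for a PreTrainedModel
    # child; if there is none, use the model itself as the backbone.
    transformers_backbone = model
    for module in model.children():
        if isinstance(module, PreTrainedModel):
            transformers_backbone = module
            break

    # 3. On the backbone, pick the module whose weight matches the vocab size.
    for module in transformers_backbone.modules():
        weight = getattr(module, "weight", None)
        if weight is not None and weight.shape[0] == vocab_size:
            return module
    return None
```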