
Upgrade Transformers to v4.42.x #719

Merged
8 commits merged into adapter-hub:main on Jul 27, 2024

Conversation

@calpt (Member) commented Jul 13, 2024

Changes needed for sync:

  • remove setting `_hf_peft_config_loaded` for the HF Trainer
  • fix BEiT `interpolate_pos_encoding`
  • add SDPA support to GPT-2 (see the sketch after this list)
  • add `LlamaForTokenClassification` head conversion
  • copy the changes over to the Mistral implementation
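As a rough illustration of the SDPA item above: once the sync is in, GPT-2 can be loaded with PyTorch scaled dot-product attention through the standard Transformers `attn_implementation` argument. This is a minimal sketch, not code from this PR, and the checkpoint name is only an example:

```python
from transformers import AutoModelForCausalLM

# Request PyTorch scaled dot-product attention (SDPA) when loading GPT-2;
# the "gpt2" checkpoint name is only an example.
model = AutoModelForCausalLM.from_pretrained(
    "gpt2",
    attn_implementation="sdpa",
)
```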

@calpt added the sync label Jul 13, 2024
lenglaender and others added 5 commits July 16, 2024 17:21
If this should result in any new errors, put the line back in and set `self._hf_peft_config_loaded = False` in the `save_pretrained` function (sketched below).
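A minimal sketch of that fallback, assuming a model class that overrides `save_pretrained`; the subclass name here is hypothetical and not taken from this PR:

```python
from transformers import GPT2LMHeadModel

# Hypothetical subclass for illustration only; the class actually touched
# by this PR lives in the adapters library.
class PatchedGPT2Model(GPT2LMHeadModel):
    def save_pretrained(self, save_directory, **kwargs):
        # Fallback described above: clear the PEFT flag so save_pretrained
        # does not take the PEFT-specific saving path.
        self._hf_peft_config_loaded = False
        return super().save_pretrained(save_directory, **kwargs)
```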
@calpt marked this pull request as ready for review July 20, 2024 18:12
@calpt merged commit 1a7d24e into adapter-hub:main Jul 27, 2024
4 checks passed
@calpt deleted the sync/v4.42.x branch August 4, 2024 17:25
dainis-boumber added a commit to ReDASers/adapters that referenced this pull request Aug 30, 2024
Co-authored-by: Leon Engländer <leon.englaender@gmail.com>