Import missing setup_chat_format #1862
Merged

Conversation

No description provided.
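The PR title indicates the change simply adds a missing `setup_chat_format` import where the function is already used; the diff itself is not reproduced on this page. As a minimal sketch of the call that import enables, assuming the standard trl API and an illustrative checkpoint (not taken from the PR):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import setup_chat_format  # the import this PR adds

# Illustrative base checkpoint; any causal LM is handled the same way.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# setup_chat_format attaches a ChatML chat template to the tokenizer,
# adds the special tokens it needs, and resizes the model embeddings
# accordingly, returning the patched (model, tokenizer) pair.
model, tokenizer = setup_chat_format(model, tokenizer)
```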
kashif approved these changes on Jul 23, 2024
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
qgallouedec added a commit that referenced this pull request on Jul 28, 2024
commit 8bd2ab8 (Quentin Gallouédec, Sun Jul 28 2024): Refactor judges (#1856). BaseJudge -> BasePairwiseJudge; async HF judge; refactor judges; docs; judge TLDR with judge class; fix rank in multithread; formatting; update judge_tldr.py. Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
commit 82b07d6 (Quentin Gallouédec, Fri Jul 26 2024): Llama in modelling value head tests (#1878)
commit 72bf6c2 (Quentin Gallouédec, Fri Jul 26 2024): Skip BigBird save and load test until next transformers version (#1874)
commit 74e54b5 (Edward Beeching, Fri Jul 26 2024): fix online dpo example (#1879)
commit 3930973 (Rishav Dash, Thu Jul 25 2024): Bug fix while training using SFTTrainer with DataCollatorForCompletionOnlyLM (#1861). Adds ```dataset_text_field``` to the SFTConfig; updates docs/source/sft_trainer.mdx. Co-authored-by: Kashif Rasul
commit db8e09e (Rishav Dash, Thu Jul 25 2024): Import missing ```setup_chat_format``` (#1862)
commit 1dae55f (elie, Thu Jul 25 2024): add fsdp_qlora config and bnb_4bit_quant_storage (#1863)
commit c8cef79 (Quentin Gallouédec, Wed Jul 24 2024): arXiv to HF Papers (#1870)
commit 7dcf437 (Kashif Rasul, Wed Jul 24 2024): [online-DPO] online dpo cleanups (#1864). Remove unused self.policy; add OnlineDPOTrainer and config to __init__.py; import from trainer; online dpo test; rename policy to model and ref_policy to ref_model; formatting
commit 4e85bd7 (Costa Huang, Thu Jul 18 2024): Online DPO and Online trainer refactor (#1809). Online DPO trainer based on the RLOO trainer; use the `batch_generation` method; use the config name as the experiment name; fix logging; increment global step so TensorBoard works again; add online DPO docs; remove unused common online trainer. Co-authored-by: Michael Noukhovitch, Quentin Gallouédec
commit c9d5636 (Quentin Gallouédec, Thu Jul 18 2024): rm token (#1852)
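One of the referenced commits, #1861, fixes an SFTTrainer example that uses DataCollatorForCompletionOnlyLM by adding `dataset_text_field` to the `SFTConfig`. A hedged sketch of that configuration, assuming the standard trl classes; the checkpoint, dataset, and response template below are placeholders, not taken from the commit:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DataCollatorForCompletionOnlyLM

model_name = "facebook/opt-350m"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Placeholder instruction dataset with "### Human:" / "### Assistant:" turns.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# The collator masks everything before the response template so the loss
# is computed on the completion only.
collator = DataCollatorForCompletionOnlyLM("### Assistant:", tokenizer=tokenizer)

# dataset_text_field tells the trainer which dataset column holds the text;
# omitting it was the issue the referenced commit fixes in the docs example.
args = SFTConfig(output_dir="sft-completion-only", dataset_text_field="text")

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    data_collator=collator,
)
trainer.train()
```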
kashif pushed a commit to claralp/trl that referenced this pull request on Jul 28, 2024
qgallouedec added a commit that referenced this pull request on Jul 30, 2024
commit 890232f (Quentin Gallouédec, Tue Jul 30 2024): update example overview (#1883). Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
commit 9929370 (Clara Pohland, Sun Jul 28 2024): Move BCO to separate BCOTrainer with fixes (#1869). kto_trainer: skip KL data for BCO, allow batches with no positives or no negatives, make the RunningMoments object serializable, remove the unused UDM part; add BCOTrainer; fix BCO UDM for non-interleaved data and for bfloat16; bco_trainer: add tests and docs, minor fixes; code style fixes; fix multi-GPU serialization of RunningMoments; fix tests. Co-authored-by: Clara Luise Pohland, Kashif Rasul, Seungjae Jung
commit 6171cdd (Quentin Gallouédec, Sun Jul 28 2024): Re-add BigBird Pegasus save/load test (#1882). Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
commit 33d2151 (Quentin Gallouédec, Sun Jul 28 2024): Re-add BigBird Pegasus save/load test (#1876). Skip BigBird in CI; re-add the BigBird test; pytest parametrize; don't check the version; remove the model name; merge main into readd-bigbird-save-load-test. Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>
commit 8bd2ab8 through commit c9d5636: the same squashed history already listed under the Jul 28, 2024 commit above (Refactor judges #1856; Llama in modelling value head tests #1878; Skip BigBird save and load test #1874; fix online dpo example #1879; SFTTrainer with DataCollatorForCompletionOnlyLM bug fix #1861; Import missing setup_chat_format #1862; fsdp_qlora config and bnb_4bit_quant_storage #1863; arXiv to HF Papers #1870; online dpo cleanups #1864; Online DPO and Online trainer refactor #1809; rm token #1852)
qgallouedec added a commit that referenced this pull request on Aug 2, 2024
* fix vsft example commands
* fix use_cache and get tokenizer from processor
* rm unused AutoTokenizer
* Squashed commit of the following: the 8bd2ab8 through c9d5636 history already listed under the Jul 28, 2024 commit above
* add section in doc
* Squashed commit of the following: the 890232f through c9d5636 history already listed under the Jul 30, 2024 commit above
* simplify script
* doc
* use training args
* args instead of training args
* fix doc
* drop eval
* rm eval section
* re-add BigBird
Co-authored-by: Quentin Gallouédec <quentin.gallouedec@huggingface.co>