
Minor MPT-7B fixes and creation script update #6982

Merged
merged 3 commits into main on Jul 11, 2023

Conversation

@trias702 (Collaborator) commented Jul 5, 2023

What does this PR do?

Updates the script used to create a Megatron MPT-7B checkpoint with fixes, and adds support to MegatronBaseModel for special tokens to be passed at runtime.

Collection: NLP

Changelog

  • Updates the Megatron MPT-7B creation script with some bug fixes and a config file baseline
  • Adds support for HuggingFace special tokens to be declared in Hydra/Yaml and passed to MegatronBaseModel

Usage

You can specify the new tokens in a config YAML like so:

...
model.tokenizer.special_tokens.pad_token="<PAD>"
model.tokenizer.special_tokens.mask_token="<MASK>"
...
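For reference, a sketch of how these overrides might look as a nested block in the model config YAML (the exact nesting under `model.tokenizer` is assumed from the dotted override paths above, not taken from the actual NeMo config files):

```yaml
model:
  tokenizer:
    special_tokens:
      pad_token: "<PAD>"
      mask_token: "<MASK>"
```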

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

The MegatronBaseModel changes are necessary because the HuggingFace tokenizer used by MPT-7B (like many other HF tokenizers) does not have special tokens such as PAD and MASK built in. This causes a crash when prompt-learning with MPT-7B, because there is no pad token and no way to add one at runtime. With this fix to MegatronBaseModel, you can now pass a pad token at runtime, which enables prompt-learning with MPT-7B (or any model that uses a HF tokenizer).
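The mechanism can be illustrated with a minimal sketch. This is not the actual NeMo code: `DummyHFTokenizer` and `apply_special_tokens_from_cfg` are hypothetical names standing in for a HuggingFace-style tokenizer and the config plumbing, chosen only to show how config-declared special tokens could be registered at runtime.

```python
# Hypothetical sketch of runtime special-token injection from a config dict.
# DummyHFTokenizer mimics the HF add_special_tokens API shape; it is not
# the real transformers class.

class DummyHFTokenizer:
    """Stand-in for a HF tokenizer that ships with no pad/mask token."""

    def __init__(self):
        self.special_tokens_map = {}

    def add_special_tokens(self, tokens: dict) -> int:
        # Like the HF API, returns how many tokens were newly registered.
        added = 0
        for name, token in tokens.items():
            if name not in self.special_tokens_map:
                self.special_tokens_map[name] = token
                added += 1
        return added


def apply_special_tokens_from_cfg(tokenizer, special_tokens_cfg: dict) -> None:
    """Pass special tokens declared in a config (e.g. Hydra/YAML) to the tokenizer."""
    if special_tokens_cfg:
        tokenizer.add_special_tokens(dict(special_tokens_cfg))


tokenizer = DummyHFTokenizer()
# Mirrors a model.tokenizer.special_tokens section from a config file.
cfg = {"pad_token": "<PAD>", "mask_token": "<MASK>"}
apply_special_tokens_from_cfg(tokenizer, cfg)
print(tokenizer.special_tokens_map["pad_token"])  # → <PAD>
```

With a real HF tokenizer that lacks a pad token, the same idea applies: register the config-declared token before prompt-learning so padding no longer crashes.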

Signed-off-by: Daniel Egert <degert@nvidia.com>
@github-actions github-actions bot added the NLP label Jul 5, 2023
@trias702 trias702 requested a review from ericharper July 5, 2023 23:12
@ericharper (Collaborator) left a comment

LGTM. Thanks!

@ericharper ericharper merged commit 0f79a9f into NVIDIA:main Jul 11, 2023
gshennvm pushed a commit that referenced this pull request Jul 12, 2023
* Initial commit of minor MPT-7B fixes

Signed-off-by: Daniel Egert <degert@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Daniel Egert <degert@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>
ericharper added a commit that referenced this pull request Jul 13, 2023
* Add end_strings to SamplingParams

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Add end_strings to megatron_gpt_inference.yaml

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Add end_strings to sampling params

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Remove extra_id_1 from default end_strings

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Fix require_grad typos (#6930)

Signed-off-by: Sergii Dymchenko <sdym@fb.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* fix syntax error

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* fix the mpt chatbot (#6957) (#6968)

Signed-off-by: Yi Dong <yidong@nvidia.com>
Co-authored-by: Yi Dong <43824965+yidong72@users.noreply.github.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* add support for max_total_length=4096 for 43b (#6763)

* add support for max_total_length=4096 for 43b

Signed-off-by: Zhilin Wang <wangzhilin12061996@hotmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Zhilin Wang <wangzhilin12061996@hotmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* rnnt_greedy_decoding.py: typos? auto-repressively -> auto-regressively (#6989)

Signed-off-by: Vadim Kantorov <vadimkantorov@gmail.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Cache handling without input tensors mutation (#6980) (#6996)

* Cache handling without input tensors mutation



* Cleanup



* Cleanup#2



* Cleanup#3



---------

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>
Co-authored-by: Boris Fomitchev <borisfom@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Hybrid conformer export (#6983) (#6995)

* Implemented generic kv-pair setting of export_config from args



* Hybrid conformer export



* Hybrid decoder export



* Cleanup



* Changed from **kwargs



* Docstring



* Docs added



* Stringify args



* Added docs for ASR export configs



* lowercase ctc



---------

Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>
Co-authored-by: Boris Fomitchev <borisfom@users.noreply.github.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Fixing an issue with confidence ensembles (#6987) (#7004)

* Bug fix for the confidence ensembles



* Relax constraints for the test



---------

Signed-off-by: Igor Gitman <igitman@nvidia.com>
Co-authored-by: Igor Gitman <igitman@nvidia.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* [TTS] Add cosine distance option to TTS aligner (#6806)

* [TTS] Add cosine distance option to TTS aligner

Signed-off-by: Ryan <rlangman@nvidia.com>

* [TTS] Update aligner comments

Signed-off-by: Ryan <rlangman@nvidia.com>

---------

Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Minor MPT-7B fixes and creation script update (#6982)

* Initial commit of minor MPT-7B fixes

Signed-off-by: Daniel Egert <degert@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Daniel Egert <degert@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Change Jenkins timeout (#6997)

* change timeout

Signed-off-by: ericharper <complex451@gmail.com>

* change to 8 hours

Signed-off-by: ericharper <complex451@gmail.com>

---------

Signed-off-by: ericharper <complex451@gmail.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* remove hard coded input and output fields (#7008)

* remove hard coded input and output fields

Signed-off-by: arendu <adithya.r@gmail.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: arendu <adithya.r@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* RoPE length extrapolation with interpolation (#7005)

* Push changes

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fixes

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* add continue training script

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* [WIP] nonlinear interp

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Fix

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* override encoder_seq_len

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Remove nonlinear

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* sft with pi (#7006)

* sft with pi

Signed-off-by: Evelina <ebakhturina@nvidia.com>

* update values only if not None"

Signed-off-by: Evelina <ebakhturina@nvidia.com>

---------

Signed-off-by: Evelina <ebakhturina@nvidia.com>

* Address comments

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add info

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

* Empty

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>

---------

Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
Signed-off-by: Evelina <ebakhturina@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Evelina <10428420+ekmb@users.noreply.github.com>
Signed-off-by: Gerald Shen <geshen@nvidia.com>

* use proper config

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Add end_strings to SamplingParams

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Add end_strings to megatron_gpt_inference.yaml

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Add end_strings to sampling params

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* Remove extra_id_1 from default end_strings

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* fix syntax error

Signed-off-by: Gerald Shen <geshen@nvidia.com>

* use proper config

Signed-off-by: Gerald Shen <geshen@nvidia.com>

---------

Signed-off-by: Gerald Shen <geshen@nvidia.com>
Signed-off-by: Sergii Dymchenko <sdym@fb.com>
Signed-off-by: Yi Dong <yidong@nvidia.com>
Signed-off-by: Zhilin Wang <wangzhilin12061996@hotmail.com>
Signed-off-by: Vadim Kantorov <vadimkantorov@gmail.com>
Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>
Signed-off-by: Igor Gitman <igitman@nvidia.com>
Signed-off-by: Ryan <rlangman@nvidia.com>
Signed-off-by: Daniel Egert <degert@nvidia.com>
Signed-off-by: ericharper <complex451@gmail.com>
Signed-off-by: arendu <adithya.r@gmail.com>
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
Signed-off-by: Evelina <ebakhturina@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Sergii Dymchenko <kit1980@gmail.com>
Co-authored-by: Gerald Shen <geshen@nvidia.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Yi Dong <43824965+yidong72@users.noreply.github.com>
Co-authored-by: Zhilin Wang <wangzhilin12061996@hotmail.com>
Co-authored-by: Vadim Kantorov <vadimkantorov@gmail.com>
Co-authored-by: Boris Fomitchev <borisfom@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Co-authored-by: Igor Gitman <igitman@nvidia.com>
Co-authored-by: Ryan Langman <rlangman@nvidia.com>
Co-authored-by: trias702 <25867060+trias702@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Co-authored-by: Adi Renduchintala <adithyare@nvidia.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
Co-authored-by: Evelina <10428420+ekmb@users.noreply.github.com>