Use GPTModel from mcore (NVIDIA#7093)
* start adding gpt from megatron core path (Signed-off-by: ericharper <complex451@gmail.com>)
* set model parallel config (Signed-off-by: ericharper <complex451@gmail.com>)
* use model parallel config object (Signed-off-by: ericharper <complex451@gmail.com>)
* update args (Signed-off-by: ericharper <complex451@gmail.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* set vp size to none if it is 1 (Signed-off-by: ericharper <complex451@gmail.com>)
* set vp size to none if it is 1 (Signed-off-by: ericharper <complex451@gmail.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* add TransformerConfig (Signed-off-by: ericharper <complex451@gmail.com>)
* start updating to TransformerConfig (Signed-off-by: ericharper <complex451@gmail.com>)
* add todo (Signed-off-by: ericharper <complex451@gmail.com>)
* revert to model parallel config (Signed-off-by: ericharper <complex451@gmail.com>)
* add hidden_size to model_parallel_config (Signed-off-by: ericharper <complex451@gmail.com>)
* remove imports (Signed-off-by: ericharper <complex451@gmail.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* remove import (Signed-off-by: ericharper <complex451@gmail.com>)
* small clean up (Signed-off-by: ericharper <complex451@gmail.com>)
* update hidden size in peft base model, add mcore commit to jenkins (Signed-off-by: ericharper <complex451@gmail.com>)
* update module args (Signed-off-by: ericharper <complex451@gmail.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* add config obj to flash attention tests (Signed-off-by: ericharper <complex451@gmail.com>)
* remove args (Signed-off-by: ericharper <complex451@gmail.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* remove sequence parallel arg (Signed-off-by: ericharper <complex451@gmail.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* update args (Signed-off-by: ericharper <complex451@gmail.com>)
* add config to self (Signed-off-by: ericharper <complex451@gmail.com>)
* update args (Signed-off-by: ericharper <complex451@gmail.com>)
* update args (Signed-off-by: ericharper <complex451@gmail.com>)
* update args (Signed-off-by: ericharper <complex451@gmail.com>)
* add config to test (Signed-off-by: ericharper <complex451@gmail.com>)
* get hidden_size from config (Signed-off-by: ericharper <complex451@gmail.com>)
* add try except (Signed-off-by: ericharper <complex451@gmail.com>)
* use default (Signed-off-by: ericharper <complex451@gmail.com>)
* update config with hidden size (Signed-off-by: ericharper <complex451@gmail.com>)
* remove arg (Signed-off-by: ericharper <complex451@gmail.com>)
* comment out jenkins test (Signed-off-by: ericharper <complex451@gmail.com>)
* revert import (Signed-off-by: ericharper <complex451@gmail.com>)
* remove optimizer_idx (Signed-off-by: eharper <eharper@nvidia.com>)
* prefetch num microbatches (Signed-off-by: eharper <eharper@nvidia.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* start adding gpt from megatron core path (Signed-off-by: ericharper <complex451@gmail.com>)
* set model parallel config (Signed-off-by: ericharper <complex451@gmail.com>)
* use model parallel config object (Signed-off-by: ericharper <complex451@gmail.com>)
* update args (Signed-off-by: ericharper <complex451@gmail.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* start updating to TransformerConfig (Signed-off-by: ericharper <complex451@gmail.com>)
* revert to model parallel config (Signed-off-by: ericharper <complex451@gmail.com>)
* add hidden_size to model_parallel_config (Signed-off-by: ericharper <complex451@gmail.com>)
* remove imports (Signed-off-by: ericharper <complex451@gmail.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* update module args (Signed-off-by: ericharper <complex451@gmail.com>)
* add config to self (Signed-off-by: ericharper <complex451@gmail.com>)
* build transformer config (Signed-off-by: ericharper <complex451@gmail.com>)
* add model to provider func (Signed-off-by: ericharper <complex451@gmail.com>)
* update forward and float16 wrapper (Signed-off-by: ericharper <complex451@gmail.com>)
* instantiate model parallel config after init model parallel (Signed-off-by: ericharper <complex451@gmail.com>)
* set virtual rank (Signed-off-by: ericharper <complex451@gmail.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* Add GQA config to megatron gpt model (NVIDIA#7096)
  * Add GQA config in gpt config file (Signed-off-by: jasonwan <jasonwan@nvidia.com>)
  * Verify mcore is enabled when using GQA (Signed-off-by: jasonwan <jasonwan@nvidia.com>)
* revert (Signed-off-by: ericharper <complex451@gmail.com>)
* remove import (Signed-off-by: eharper <eharper@nvidia.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* update for dist adam (Signed-off-by: eharper <eharper@nvidia.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* use get_gpt_module_list (Signed-off-by: eharper <eharper@nvidia.com>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks. For more information, see https://pre-commit.ci
* update megatron core commit (Signed-off-by: eharper <eharper@nvidia.com>)
* revert change (Signed-off-by: eharper <eharper@nvidia.com>)
* remove import (Signed-off-by: eharper <eharper@nvidia.com>)
* remove import (Signed-off-by: eharper <eharper@nvidia.com>)
* remove import (Signed-off-by: eharper <eharper@nvidia.com>)

---------

Signed-off-by: ericharper <complex451@gmail.com>
Signed-off-by: eharper <eharper@nvidia.com>
Signed-off-by: jasonwan <jasonwan@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jason Wang <jasonwan@nvidia.com>
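Several commits above normalize parallelism settings before the model is built ("set vp size to none if it is 1", "add hidden_size to model_parallel_config"). A minimal sketch of that normalization, using an illustrative dataclass rather than Megatron-Core's actual ModelParallelConfig (all names here are assumptions):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ParallelConfig:
    # Illustrative stand-in for a model-parallel config; the field names
    # are assumptions, not Megatron-Core's real API.
    hidden_size: int
    virtual_pipeline_model_parallel_size: Optional[int] = None


def build_parallel_config(hidden_size: int, vp_size: Optional[int]) -> ParallelConfig:
    # A virtual pipeline size of 1 means no interleaving at all, so treat
    # it as None (mirrors the "set vp size to none if it is 1" commit).
    if vp_size is not None and vp_size <= 1:
        vp_size = None
    return ParallelConfig(
        hidden_size=hidden_size,
        virtual_pipeline_model_parallel_size=vp_size,
    )
```

Treating a virtual pipeline size of 1 as None keeps downstream code on the non-interleaved path without a special case.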
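The GQA change folded in above (NVIDIA#7096) configures grouped-query attention, where several query heads share one key/value head. A rough, self-contained sketch of the head-to-group mapping such a config implies (function and parameter names are illustrative, not Megatron-Core's API):

```python
def kv_group_for_query_head(num_attention_heads: int,
                            num_query_groups: int,
                            head_idx: int) -> int:
    """Return the index of the KV group that a given query head reads from."""
    # The query heads must split evenly across the KV groups.
    if num_attention_heads % num_query_groups != 0:
        raise ValueError("num_attention_heads must be divisible by num_query_groups")
    # Each group of consecutive query heads shares one key/value head.
    heads_per_group = num_attention_heads // num_query_groups
    return head_idx // heads_per_group
```

With 32 attention heads and 8 query groups, query heads 0 through 3 all read KV head 0. Setting the group count equal to the head count recovers standard multi-head attention, and a single group recovers multi-query attention.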