any plans to add improved PP in llama-3? #483
The batch-constraint-removed Pipeline Parallelism (PP) schedule described in the Llama 3 paper seems to have been added in torch 2.4. Do you have any plans to add this feature to torchtitan?!

Comments
Hi @SeunghyunSEO,
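For readers landing here: "batch constraint removed" refers to the requirement in the classic interleaved (looped) 1F1B schedule that the number of microbatches be a multiple of the pipeline degree, which the flexible variant tracked by the commits below drops. A minimal sketch of the difference, assuming torch >= 2.4's `torch.distributed.pipelining` with an initialized pipeline process group; the `build_local_stages()` helper and the flexible schedule's exact class name are assumptions, not confirmed API:

```python
# Minimal sketch only; assumes torch >= 2.4 with torch.distributed.pipelining,
# a 4-rank pipeline process group already initialized, and a hypothetical
# build_local_stages() returning this rank's PipelineStage objects.
import torch.distributed as dist
from torch.distributed.pipelining import ScheduleInterleaved1F1B

pp_size = dist.get_world_size()   # e.g. 4 pipeline ranks
stages = build_local_stages()     # hypothetical helper, not a real API

# Classic interleaved 1F1B: n_microbatches must divide evenly by pp_size,
# so 8 microbatches on 4 ranks is legal...
schedule = ScheduleInterleaved1F1B(stages, n_microbatches=8)

# ...but 6 is not. The "flexible" looped 1F1B schedule tracked by this issue
# removes that divisibility requirement (the class name below is an
# assumption based on the test name in the commits):
# schedule = ScheduleFlexibleInterleaved1F1B(stages, n_microbatches=6)
```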
H-Huang added a commit that referenced this issue on Jul 30, 2024: fixes #483 `python test_runner.py ./out --test pp_looped_flexible_1f1b` [ghstack-poisoned]
H-Huang added a commit that referenced this issue on Jul 30, 2024: Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #490 fixes #483 `python test_runner.py ./out --test pp_looped_flexible_1f1b`. Merged in 9cf4b2f.
tianyu-l pushed a commit that referenced this issue on Aug 16, 2024: fixes #483 `python test_runner.py ./out --test pp_looped_flexible_1f1b` [ghstack-poisoned]
tianyu-l pushed a commit that referenced this issue on Aug 16, 2024: Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #490 fixes #483 `python test_runner.py ./out --test pp_looped_flexible_1f1b`
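As an aside on the test name `pp_looped_flexible_1f1b`: "looped" means each pipeline rank hosts several virtual stages and loops over them within a step. A rough sketch of that placement under assumed numbers (4 virtual stages over 2 ranks); `split_model_into_chunks()` is a hypothetical helper, and `model` stands for your `nn.Module`:

```python
# Sketch of the "looped" (interleaved) stage placement: each pipeline rank
# hosts several model chunks and loops over them every step.
n_virtual_stages = 4
pp_size = 2
rank = 0  # this rank's pipeline-parallel index

chunks = split_model_into_chunks(model, n_virtual_stages)  # hypothetical

# Round-robin placement: rank 0 holds virtual stages 0 and 2, rank 1 holds
# 1 and 3, so activations travel rank 0 -> 1 -> 0 -> 1 across a microbatch.
local_stage_ids = [s for s in range(n_virtual_stages) if s % pp_size == rank]
local_chunks = [chunks[s] for s in local_stage_ids]
```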