Optimizing data parallel Fuse-Allreduce-Overlapping #48092
Merged
JZ-LIANG
merged 58 commits into
PaddlePaddle:develop
from
JZ-LIANG:AutoParallel/newexe-dp
Nov 29, 2022
Conversation
JZ-LIANG
changed the title
[Auto Parallel-Optimization] Adapt Data Parallel for Graph executor
[Auto Parallel-Optimization] Optimizing data parallel Fuse-Allreduce-Overlapping when uses Graph executor
Nov 28, 2022
JZ-LIANG
changed the title
[Auto Parallel-Optimization] Optimizing data parallel Fuse-Allreduce-Overlapping when uses Graph executor
[Auto Parallel Optimization] Optimizing data parallel Fuse-Allreduce-Overlapping
Nov 28, 2022
JZ-LIANG
changed the title
[Auto Parallel Optimization] Optimizing data parallel Fuse-Allreduce-Overlapping
[Auto Parallel Perf] Optimizing data parallel Fuse-Allreduce-Overlapping
Nov 28, 2022
JZ-LIANG
changed the title
[Auto Parallel Perf] Optimizing data parallel Fuse-Allreduce-Overlapping
[Auto Parallel Performance] Optimizing data parallel Fuse-Allreduce-Overlapping
Nov 28, 2022
aoyulong
approved these changes
Nov 29, 2022
LGTM
JZ-LIANG
changed the title
[Auto Parallel Performance] Optimizing data parallel Fuse-Allreduce-Overlapping
Optimizing data parallel Fuse-Allreduce-Overlapping
Jun 25, 2024
PR types
Performance optimization
PR changes
Others
Describe
Update 1: Change the synchronization in DP-Overlapping from stream synchronization to event record/wait, which may reduce the synchronization overhead in scheduling.
Update 2: Improve the after-allreduce sync to allow better full overlapping.
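To illustrate why event record/wait is cheaper than a stream synchronize, here is a minimal host-side model in plain Python. All names (`Stream`, `event_record`, `event_wait`) are illustrative, not Paddle or CUDA APIs: the point is only that a stream synchronize blocks the host scheduler, while an event wait is enqueued on the consumer stream and the host keeps scheduling.

```python
# Host-side model of the two synchronization styles. Names are illustrative,
# not real Paddle/CUDA APIs.

class Stream:
    def __init__(self, name):
        self.name = name
        self.ops = []          # ops enqueued on this stream, in order

    def launch(self, op):
        self.ops.append(op)

# Style 1: stream synchronization. The host (scheduler) blocks until the
# comm stream drains before it can enqueue anything else anywhere.
def stream_synchronize(host_log, comm_stream):
    host_log.append(f"host blocked until {comm_stream.name} drains")

# Style 2: event record/wait. The dependency is enqueued on the consumer
# stream; the host returns immediately and keeps scheduling.
def event_record(comm_stream):
    return ("event", len(comm_stream.ops))   # event marks a point in the stream

def event_wait(consumer_stream, event):
    consumer_stream.launch(("wait", event))  # device-side wait; host not blocked

host_log = []
comm, compute = Stream("comm"), Stream("compute")

comm.launch("allreduce(grad)")
ev = event_record(comm)            # host returns immediately
event_wait(compute, ev)            # the wait lives on the compute stream
compute.launch("optimizer_step")   # host kept scheduling the whole time

assert host_log == []              # no host-side blocking with events
```

With stream synchronization, `host_log` would record a blocking entry before `optimizer_step` could even be launched; with events, the ordering constraint is enforced on the device instead.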
Two synchronizations are needed by DP-Overlapping. But when overlapping is combined with fusing, the data-flow dependencies of the allreduced gradients are lost, because fusing coalesces the gradients into a single buffer.
The common solution is to perform the after-allreduce sync right after the fused allreduce (as before this PR, and as in the Paddle Parallel Executor). But this leads to insufficient overlapping:
The CPU timeline: there is a "wait" right after the NCCL SumArray.
The GPU timeline: overlapping is insufficient; the allreduce can only overlap with the SumArray computation, not with the LayerNorm backward.
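The dependency loss from coalescing, and the mapping that must be rebuilt, can be sketched as follows. This is a simplified model with hypothetical op names (`fused_0`, `sgd_0`, etc.), not the actual pass implementation: once gradients are coalesced, each consumer of an original gradient must be made to depend on the allreduce of the fused buffer that now contains it.

```python
# Sketch of the dependency problem after fusing. When gradients g0..g2 are
# coalesced into fused buffers, the graph edge "consumer of g_i depends on
# allreduce(g_i)" disappears; we rebuild it by mapping every original
# gradient to the fused allreduce it now lives in. All names are illustrative.

fused_groups = {
    "fused_0": ["g0", "g1"],   # coalesced into one buffer, one allreduce
    "fused_1": ["g2"],
}

# consumers of the *original* gradients (e.g. optimizer update ops)
consumers = {"g0": "sgd_0", "g1": "sgd_1", "g2": "sgd_2"}

# invert the fusing map: original grad -> fused buffer containing it
grad_to_fused = {g: fused for fused, grads in fused_groups.items()
                 for g in grads}

# the rebuilt dependencies: each consumer waits on the fused allreduce
deps = {op: f"allreduce({grad_to_fused[g]})" for g, op in consumers.items()}

assert deps == {
    "sgd_0": "allreduce(fused_0)",
    "sgd_1": "allreduce(fused_0)",
    "sgd_2": "allreduce(fused_1)",
}
```

With this mapping in hand, the sync no longer has to sit right after the fused allreduce: each consumer carries its own dependency edge.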
In this PR, we resolve the exact data-flow dependencies after fusing and, instead of performing the after-allreduce sync immediately after the allreduce, place the wait where it is actually needed (as late as possible), which enables better full overlapping.
The CPU timeline: the after-allreduce sync is moved from right after the allreduce to where it is actually needed.
The GPU timeline: better overlapping of the allreduce with the later LayerNorm backward kernel.
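The "as late as possible" placement can be sketched as a simple program transformation. This is a toy model with hypothetical op names, not the actual executor pass: the wait is inserted just before the first op that actually reads the allreduced gradients, so independent backward kernels in between overlap with the communication.

```python
# Sketch of "put the wait as late as possible": instead of inserting the
# after-allreduce sync right after the fused allreduce, insert it just
# before the first op that reads the allreduced gradients. Op names are
# illustrative.

program = [
    "allreduce(fused_grads)",    # on the comm stream
    "layernorm_backward",        # independent compute: can overlap
    "embedding_backward",        # also independent
    "sgd_update(fused_grads)",   # first real consumer of the gradients
]

reads_grads = {"sgd_update(fused_grads)"}

# find the first consumer and insert the wait immediately before it
idx = next(i for i, op in enumerate(program) if op in reads_grads)
program.insert(idx, "wait(comm_event)")

assert program == [
    "allreduce(fused_grads)",
    "layernorm_backward",        # now overlaps with the allreduce
    "embedding_backward",
    "wait(comm_event)",
    "sgd_update(fused_grads)",
]
```

Placing the wait right after the allreduce instead would force `layernorm_backward` and `embedding_backward` to run after the communication completes, which is exactly the insufficiency described above.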
To reproduce the performance results, two other PRs are needed:
support exe ctx in Comm op (#48308)
disable redundant dependency and prior comm op in standalone exe (#48454)