perf(shuffles): Incrementally retrieve metadata in reduce #3545
base: main
Conversation
CodSpeed Performance Report
Merging #3545 will degrade performance by 38.37%.
Benchmarks breakdown
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files:

@@ Coverage Diff @@
## main #3545 +/- ##
==========================================
+ Coverage 77.54% 77.70% +0.15%
==========================================
  Files 709 710 +1
  Lines 86286 86909 +623
==========================================
+ Hits 66912 67531 +619
- Misses 19374 19378 +4
Interesting. One suggestion I have is to maybe see if we can use

I wonder if this might simplify the logic here, and it might also fix some interesting OOM issues I was observing w.r.t. the workers that are holding the metadata ObjectRefs dying due to OOM. Could we try that and see if it gives us the same effect? We have to figure out where it is appropriate to call this on the metadata ObjectRefs though. I would look into
Another idea... Looks like these metadatas are being retrieved using a Ray remote function:

My guess is that if the

I wonder if there's a better way of dealing with this.

Edit: I think this might only be triggered from certain codepaths, and in most cases the metadata is returned as an ObjectRef after execution. This code is so messy...
Doing some experiments in #3557, but it seems like

Doing another round of tests now using an explicit
So this PR looks good to me, but I wonder if we should take a more holistic approach here and maybe just fix how metadata is being passed around the system rather than patch it here specifically for the map/reduce shuffle.
Maybe something really dumb like hooking into a stats/metadata actor? Would love to get thoughts from @samster25
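The "stats/metadata actor" idea above could look something like the following. This is a hypothetical sketch, not Daft's or Ray's actual API: a plain thread-safe store that map tasks report their partition metadata into, so the driver never has to hold (or block on) per-partition metadata ObjectRefs. In Ray this class would be decorated with `@ray.remote` and called via `store.put.remote(...)`.

```python
import threading

class MetadataStore:
    """Hypothetical centralized store for partition metadata.

    In Ray this would be a remote actor; here it is a plain
    thread-safe class so the sketch is self-contained.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._metadata = {}

    def put(self, partition_id, meta):
        # Called by each map/fanout task as soon as its metadata is known.
        with self._lock:
            self._metadata[partition_id] = meta

    def get_all(self):
        # Called by the scheduler/driver; returns a snapshot.
        with self._lock:
            return dict(self._metadata)

store = MetadataStore()
store.put(0, {"rows": 1000, "size_bytes": 100 * 1024 * 1024})
store.put(1, {"rows": 950, "size_bytes": 98 * 1024 * 1024})
print(len(store.get_all()))  # 2
```

One design consideration: a single actor serializes all metadata updates, which is fine for small per-partition dicts but becomes a bottleneck if tasks push large payloads through it.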
Incrementally retrieve partitions and metadata as fanouts are completed, instead of retrieving only after all have completed. This drastically speeds up large partition shuffles.
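The core idea, sketched with the standard library rather than Daft's actual shuffle code: consume each task's result the moment it completes instead of blocking until every task finishes. In Ray, `ray.wait()` provides the equivalent ready/not-ready split over ObjectRefs; `map_task` and its return shape here are illustrative stand-ins.

```python
import concurrent.futures
import time

def map_task(i):
    # Stand-in for a shuffle map/fanout task: returns a partition
    # handle plus its metadata. Shapes and names are hypothetical.
    time.sleep(0.01 * (i % 3))
    return f"partition-{i}", {"rows": 100 + i}

metadata = {}
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(map_task, i): i for i in range(8)}
    # Incremental retrieval: handle each result as it completes,
    # rather than joining on the full batch (the old behavior).
    for fut in concurrent.futures.as_completed(futures):
        _partition, meta = fut.result()
        metadata[futures[fut]] = meta

print(len(metadata))  # 8
```

This is why the pause between the map and reduce phases disappears: reduce-side bookkeeping overlaps with still-running map tasks instead of starting after the slowest one.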
Before, we would see pauses between the map and reduce phases (the example below is a 1000x 1000x 100MB shuffle):
Now: