chore: Reserve memory for native shuffle writer per partition #988
Conversation
I've copied the tests from my branch to this PR and the test hangs.
It is possibly caused by deadlocking on
Thanks. I know the cause of the deadlocks. I'm going to revamp some of the code.
Codecov Report: all modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##               main     #988      +/-   ##
============================================
- Coverage     34.03%   33.97%    -0.07%
+ Complexity      875      857       -18
============================================
  Files           112      112
  Lines         43289    43426      +137
  Branches       9572     9622       +50
============================================
+ Hits          14734    14752       +18
- Misses        25521    25630      +109
- Partials       3034     3044       +10
Hmm, these large-partition-number shuffle tests fail on MacOS runners only, and with no stack trace. But I cannot reproduce it locally.
Okay, it is the error I expected before.
But I had already increased it by
#[test]
#[cfg_attr(miri, ignore)] // miri can't call foreign function `ZSTD_createCCtx`
#[cfg(not(target_os = "macos"))] // Github MacOS runner fails with "Too many open files".
These tests fail on MacOS runners with a "Too many open files" error. Raising ulimit does not help either.
I skip them on MacOS runners; we have Ubuntu runners to test them.
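For reference, a minimal shell sketch of how one would inspect and attempt to raise the descriptor limit on a runner (the specific limit values here are illustrative assumptions, not the actual CI configuration):

```shell
# Show the current soft limit on open file descriptors for this shell.
ulimit -n

# Try to raise the soft limit. It can only go up to the hard limit
# (ulimit -Hn), which is one reason raising it may not help on a CI runner.
ulimit -S -n 4096 2>/dev/null || echo "could not raise limit"
ulimit -n
```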
The test shuffle_write_test(10000, 10, 200, Some(10 * 1024 * 1024))
spilled 1700 times, which is far too frequent for data of this size. The excessive spilling seems inevitable if we reserve the full batch capacity for the Arrow builder.
This PR seems like an important improvement because it now uses the memory pool features. Perhaps we can follow up with optimizations to reduce spilling. wdyt @Kontinuation?
Sure. Let's merge this.
I'm also considering adding a native sort-based shuffle writer that works better under constrained resources.
We discussed supporting sort-based shuffle in the native shuffle writer, similar to Spark's shuffle, early in development. So I think it is on our roadmap, though it was not urgent at that moment.
I'm testing this PR out now, in conjunction with some other PRs, because I currently have a reproducible deadlock that, as far as I can tell, is caused by memory pool issues.
Thanks @viirya
Thanks @andygrove @Kontinuation
…apache#988)" This reverts commit e146cfa.
…artition (apache#988)"" This reverts commit 481127d.
…er per partition (apache#988)""" This reverts commit 9469d16.
…fle writer per partition (apache#988)"""" This reverts commit 6002726.
…r per partition (apache#988)" (apache#1020)" This reverts commit 8d097d5.
…HashJoin (#1007)
* experiment
* fix and add credit
* disable by default and make internal
* remove sort
* minor optimization
* minor optimization
* remove unused import
* disable feature by default
* fix dockerfile
* Add section to tuning guide
* update benchmarking guide
* Revert "chore: Reserve memory for native shuffle writer per partition (#988)" — this reverts commit e146cfa
* mark feature as experimental and explain risks
* workaround for TPC-DS q14 hanging on a RightSemi join
* revert a change
* remove debug logging
* format
* add link to tuning guide
Which issue does this PR close?
Closes #887.
Rationale for this change
What changes are included in this PR?
How are these changes tested?