
executor: trace the memory usage of Projection executors #13914

Merged
merged 20 commits into master on Dec 16, 2019

Conversation

qw4990
Contributor

@qw4990 qw4990 commented Dec 5, 2019

What problem does this PR solve?

Trace the memory usage of Projection executors and display the result in Explain Analyze.

Check List

Tests

  • Unit test

@qw4990 qw4990 added the sig/execution SIG execution label Dec 5, 2019
@qw4990
Contributor Author

qw4990 commented Dec 5, 2019

/rebuild

@codecov

codecov bot commented Dec 5, 2019

Codecov Report

Merging #13914 into master will not change coverage.
The diff coverage is n/a.

@@             Coverage Diff             @@
##             master     #13914   +/-   ##
===========================================
  Coverage   80.5325%   80.5325%           
===========================================
  Files           483        483           
  Lines        123288     123288           
===========================================
  Hits          99287      99287           
  Misses        16245      16245           
  Partials       7756       7756

@qw4990
Contributor Author

qw4990 commented Dec 5, 2019

/run-all-tests

@qw4990
Contributor Author

qw4990 commented Dec 5, 2019

/run-integration-common-test

Contributor

@ichn-hu ichn-hu left a comment


Would you please add a test that shows after the execution the memTracker will be 0? Just like

c.Assert(response.memTracker.BytesConsumed(), Equals, int64(0))

Rest LGTM.

@qw4990
Contributor Author

qw4990 commented Dec 9, 2019

> Would you please add a test that shows after the execution the memTracker will be 0? Just like
>
> c.Assert(response.memTracker.BytesConsumed(), Equals, int64(0))
>
> Rest LGTM.

Updated, PTAL~

@qw4990
Contributor Author

qw4990 commented Dec 9, 2019

> Would you please add a test that shows after the execution the memTracker will be 0? Just like
>
> c.Assert(response.memTracker.BytesConsumed(), Equals, int64(0))
>
> Rest LGTM.

Updated, PTAL~

After testing, I find it is hard to guarantee the value is zero: when the main goroutine exits, other worker goroutines may still be running, and the chunks held by them are hard to trace at that point.

@SunRunAway
Contributor

> Would you please add a test that shows after the execution the memTracker will be 0? Just like
>
> c.Assert(response.memTracker.BytesConsumed(), Equals, int64(0))
>
> Rest LGTM.
>
> Updated, PTAL~
>
> After testing, I find it is hard to guarantee the value is zero: when the main goroutine exits, other worker goroutines may still be running, and the chunks held by them are hard to trace at that point.

Could you explain in more detail?
It would be better to have zero goroutines running after executor.Close is called.

@qw4990
Contributor Author

qw4990 commented Dec 9, 2019

> Would you please add a test that shows after the execution the memTracker will be 0? Just like
>
> c.Assert(response.memTracker.BytesConsumed(), Equals, int64(0))
>
> Rest LGTM.
>
> Updated, PTAL~
>
> After testing, I find it is hard to guarantee the value is zero: when the main goroutine exits, other worker goroutines may still be running, and the chunks held by them are hard to trace at that point.
>
> Could you explain in more detail?
> It would be better to have zero goroutines running after executor.Close is called.

Projection uses some channels to transfer chunks between its internal goroutines.
When closing it, we should close these channels and drain them to trace all the chunks they hold. However, some other goroutines may still be reading or writing these channels because they have not yet received the exit signal, which causes a data race on the channels (the main goroutine wants to close them while others are still reading or writing them).

We could use a WaitGroup to synchronize the exit signal and let the main goroutine close these channels only after all worker goroutines have exited. But is it worth it?
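The WaitGroup-based shutdown discussed above can be sketched as follows. This is a simplified illustration, not the actual Projection executor code: workers send "chunks" on a shared channel until they observe the exit signal; the main goroutine waits for all workers via the WaitGroup, and only then closes and drains the channel, so every in-flight chunk can be accounted for without a data race.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const workers = 4
	chunkCh := make(chan int, workers) // carries "chunks" between goroutines
	exitCh := make(chan struct{})      // closed to signal workers to stop
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for {
				select {
				case <-exitCh:
					return // exit signal received
				case chunkCh <- id: // produce a chunk
				}
			}
		}(i)
	}

	close(exitCh) // broadcast the exit signal
	wg.Wait()     // no goroutine touches chunkCh after this point
	close(chunkCh)

	// Safe to drain now: the remaining chunks' memory can be traced here.
	drained := 0
	for range chunkCh {
		drained++
	}
	fmt.Println("drained chunks:", drained)
}
```

Without the `wg.Wait()` barrier, `close(chunkCh)` could race with a worker's send and panic, which is exactly the hazard described in the comment above.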

@qw4990
Contributor Author

qw4990 commented Dec 10, 2019

/run-all-tests

@qw4990
Contributor Author

qw4990 commented Dec 10, 2019

All comments are addressed, PTAL @SunRunAway @ichn-hu

ichn-hu
ichn-hu previously approved these changes Dec 10, 2019
Contributor

@ichn-hu ichn-hu left a comment


LGTM

Review comment on executor/explain_test.go (outdated, resolved)
@ichn-hu ichn-hu added the status/LGT1 Indicates that a PR has LGTM 1. label Dec 10, 2019
Contributor

@SunRunAway SunRunAway left a comment


LGTM

@SunRunAway
Contributor

/merge

@sre-bot sre-bot added the status/can-merge Indicates a PR has been approved by a committer. label Dec 13, 2019
@sre-bot
Contributor

sre-bot commented Dec 13, 2019

/run-all-tests

@sre-bot
Contributor

sre-bot commented Dec 13, 2019

@qw4990 merge failed.

@qw4990
Contributor Author

qw4990 commented Dec 16, 2019

/run-all-tests

@qw4990 qw4990 added status/LGT2 Indicates that a PR has LGTM 2. and removed status/LGT1 Indicates that a PR has LGTM 1. labels Dec 16, 2019
@qw4990 qw4990 merged commit b8dad33 into pingcap:master Dec 16, 2019
XiaTianliang pushed a commit to XiaTianliang/tidb that referenced this pull request Dec 21, 2019