Use data inheritance from VirtualTupleTableSlot in compressed batch #6615
Conversation
Codecov Report

@@            Coverage Diff             @@
##             main    #6615      +/-  ##
=========================================
+ Coverage   80.06%   81.47%    +1.41%
=========================================
  Files         190      191        +1
  Lines       37181    36418      -763
  Branches     9450     9464       +14
=========================================
- Hits        29770    29673       -97
+ Misses       2997     2979       -18
+ Partials     4414     3766      -648

View full report in Codecov by Sentry.
@mahipv, @jnidzwetzki: please review this pull request.
Overall it looks good. Could you add comments to the functions used (e.g., compressed_batch_lazy_init, compressed_batch_current_tuple) before merging? In addition, it would be good to add a comment documenting how the base pointer of the VirtualTupleTableSlot is used, so this is understandable without knowing this PR.
batch_state->compressed_slot =
    MakeSingleTupleTableSlot(dcontext->compressed_slot_tdesc, compressed_slot->tts_ops);

/* Get a reference the the output TupleTableSlot */
typo "the the"
@@ -700,7 +750,7 @@ compressed_batch_set_compressed_tuple(DecompressContext *dcontext,
  * columns. This can be improved by only decompressing the columns
  * needed for sorting.
  */
- batch_state->next_batch_row = batch_state->total_batch_rows;
+ compressed_batch_discard_tuples(batch_state);
Does this mean that there was a memory leak here earlier?
I think not: the memory context and tuple slots would be reset when the next compressed tuple arrived.
LGTM.
This simplifies passing the columnar data from the DecompressChunk node to the Vectorized Aggregation node that we plan to implement. It should also improve memory locality and bring us closer to the architecture used in the TAM for ArrowTupleSlot.
Disable-check: force-changelog-file