Add vector support to System.Numerics.Tensors.TensorPrimitives.LeadingZeroCount for Byte and Int16 #110333
Is this cheaper than:
There is overhead when widening-unwidening. For this case, the widening gives a bimodal performance result. To verify, the same microbenchmark can be modified to stress this path specifically by using `BufferLength=16`. Some runs look like this:
Other runs look like this:
I chose this version because it was more consistent.
Is this cheaper than:
Widening-unwidening has slightly worse performance here:
This comment (and the related ones below) are still pending a response.
The current pattern looks significantly more expensive (in size, instruction count, and micro-ops) than the more naive method of `widen + lzcnt + narrow + subtract`.
I agree that it does look more expensive because of the increased instruction count; however, the benchmark shows that the current pattern is more performant because it avoids the overhead of widening+narrowing.
Another example, this one using `BufferSize=3079` like in the original benchmark:

Looking at the codegen for this case, the current PR actually generates a smaller function than the widen+narrow suggestion. This function is where the hot loop of the benchmark is: the current PR generates a function with 1915 bytes of code, while widen+narrow generates a function with 2356 bytes of code:
Codegen - Current PR
Codegen - Widen+narrow
One more thing to highlight in the above codegen is that the current PR produces 9 instructions for each invocation of LeadingZeroCount. Using the widen+narrow approach produces 11 instructions.
The original loop is unrolled, so I'm copying/pasting the codegen of just one invocation of LeadingZeroCount:
Current PR -- 9 total instructions
Widen+Narrow -- 11 total instructions
I'm still proposing the current PR as it produces faster and smaller codegen.
Similar question as previous, but doing WidenLower/WidenUpper since it's 1024 bits total.
Similar comment as the `Vector128<byte>` case. There is overhead when widening-unwidening. It isn't as bad here; both versions perform very similarly. Can be verified with `BufferLength=32` to stress this path:
Similar question as previous, widening to Vector512
Widening-unwidening has similar performance in this case. Can be verified with `BufferLength=16`:
I think this is the only one that shouldn't be simply Widen+Lzcnt. But it does warrant a comment elaborating on how the lookup works.

In particular, it isn't immediately obvious how `PermuteVar64x8x2` operates, so elaborating that `x` is being used as an index, where bit 6 selects the table, bits 5:0 select an index in the table, and anything where bit 7 is set is zeroed, is goodness.
Makes sense. I've added a comment to better explain how `x` is being used as an index and how the intrinsic is choosing between the two lookup vectors.
Same question, doing WidenLower/WidenUpper
The widening-unwidening performance difference is most obvious for this case.