Added flatten and flatten2 BF16/FP32 FWD/BWD kernels #35892
Conversation
Thanks for your contribution!
@tsocha Please review this PR
LGTM
FYI:
- If you commented out the part that calls the mkldnn kernels, how did the mkldnn kernel code coverage pass? It looks like the mkldnn UTs will now always call only the native kernels — is that right?
- When will the memory descriptor be added to Tensor? What is the reason for adding a memory descriptor to Tensor?
Yes, for now only the native kernels are called in the mkldnn UTs. I will be adding the memory descriptor into the Tensor class at the beginning of Q4, so this is only a temporary solution. We need to add a memory descriptor to the Tensor class because it is impossible to support every kind of data layout through memory format tags alone.
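To illustrate the layout limitation mentioned above (a hypothetical numpy sketch, not project code): a fixed vocabulary of format tags can only name a predefined set of layouts, while a memory descriptor records dims and strides and can therefore describe arbitrary layouts, such as the view produced by a transpose:

```python
import numpy as np

# A contiguous 4-D float32 buffer, "nchw"-style layout.
x = np.zeros((2, 3, 4, 5), dtype=np.float32)

# A transposed view: same buffer, no copy, but a permuted-stride
# layout. A fixed set of named format tags may have no name for an
# arbitrary permutation, while a (dims, strides) descriptor always
# describes it exactly.
y = x.transpose(0, 2, 3, 1)

print(x.shape, x.strides)  # contiguous strides
print(y.shape, y.strides)  # same memory, different descriptor
```

This is only an analogy for the Tensor-class discussion: numpy's `strides` plays the role that a oneDNN memory descriptor would play inside Tensor.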
LGTM
PR types
New features
PR changes
OPs
Describe
Added (currently disabled) flatten and flatten2 BF16/FP32 FWD/BWD kernels
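For reference, the op semantics can be sketched in numpy (a minimal illustration under my reading of the flatten op, not the actual mkldnn kernels): the forward pass merges all dimensions before `axis` into rows and the remaining dimensions into columns, and the backward pass reshapes the output gradient back to the input shape. The function names `flatten_fwd`/`flatten_bwd` here are hypothetical.

```python
import numpy as np

def flatten_fwd(x, axis=1):
    # Merge dims [0, axis) into rows and dims [axis, rank) into columns.
    # An empty product (axis == 0) yields 1, giving a (1, N) output.
    rows = int(np.prod(x.shape[:axis], dtype=np.int64))
    cols = int(np.prod(x.shape[axis:], dtype=np.int64))
    return x.reshape(rows, cols)

def flatten_bwd(dout, x_shape):
    # The gradient of a reshape is a reshape back to the input shape;
    # flatten2 enables this by additionally emitting the input shape (XShape).
    return dout.reshape(x_shape)

x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
out = flatten_fwd(x, axis=1)                   # shape (2, 12)
dx = flatten_bwd(np.ones_like(out), x.shape)   # shape (2, 3, 4)
```

The BF16/FP32 variants differ only in the element type; the shape logic above is the same for both.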