Fix mixed precision output type to original type #11142
Conversation
Hi, does this mean the output op can have different dtypes for inputs, weights, and dst after AMP? If so, I'm afraid my bfloat16 support for the DNNL BYOC (#11111) might conflict with this. I implemented bfloat16 support under the assumption that the dtypes of the inputs, weights, and dst are either all float32 or all bfloat16.
I think this PR affects only the model output dtype, so it shouldn't impact other cases. Beyond that, I'd suggest 1) exposing this behavior to users by adding a config to PassContext, and 2) adding a unit test.
OK, thanks for the clarification.
Sorry, I've been busy; I'll take a look tomorrow.
@AndrewZhaoLuo @comaniac, exposed this behavior to users by adding a config to PassContext, and added a unit test.
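To make the behavior under discussion concrete, here is a minimal NumPy sketch of the idea (not the TVM pass itself): intermediate computation runs in reduced precision, and an opt-in flag restores the model's original dtype at the output. The function name and the `keep_orig_output_dtype` parameter are illustrative assumptions, not TVM APIs.

```python
import numpy as np

# Hypothetical sketch of "keep the original output dtype" after mixed
# precision: compute in float16, optionally cast the result back to the
# original float32 dtype. Not the actual TVM ToMixedPrecision pass.
def mixed_precision_matmul(a, b, keep_orig_output_dtype=True):
    orig_dtype = a.dtype
    # Compute in reduced precision, as an AMP-style rewrite would.
    out = a.astype(np.float16) @ b.astype(np.float16)
    if keep_orig_output_dtype:
        # Cast the final output back to the model's original dtype.
        out = out.astype(orig_dtype)
    return out

a = np.ones((2, 3), dtype=np.float32)
b = np.ones((3, 4), dtype=np.float32)
print(mixed_precision_matmul(a, b).dtype)                                # float32
print(mixed_precision_matmul(a, b, keep_orig_output_dtype=False).dtype)  # float16
```

With the flag enabled, downstream consumers see the same output dtype as before the pass ran, which is why exposing it through PassContext (rather than changing the default unconditionally) was suggested above.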
LGTM. Just nits.
LGTM, thanks! Please fix CI though
Looks like CI is flaky; please push an empty commit to restart CI. @gayatripk1
Done
Waiting for more reviews?
Thanks @gayatripk1 @AndrewZhaoLuo |