
[oneDNN] disable caching oneDNN primitives in matmul v2, Reduce grad and elementwise_add grad, expand_v2 #35132

Merged
jczaja merged 9 commits into PaddlePaddle:develop on Aug 26, 2021

Conversation

jczaja (Contributor) commented Aug 24, 2021

PR types

Others

PR changes

OPs

Describe

This PR continues disabling PaddlePaddle's caching of oneDNN objects in favor of having oneDNN cache its own objects.
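As a rough illustration (not code from this PR), the change amounts to dropping the framework-side key/value cache of oneDNN primitives and recreating them on each kernel invocation, relying on oneDNN's built-in primitive cache to keep repeated creation cheap. Below is a minimal sketch of that pattern using plain oneDNN (DNNL) 2.x C++ API calls rather than PaddlePaddle's actual handler classes; the function and variable names are illustrative only.

```cpp
#include <dnnl.hpp>
#include <cstdint>

// Sketch: build and execute a matmul primitive on every call instead of
// looking it up in an application-level cache. oneDNN keeps its own primitive
// cache (see dnnl::set_primitive_cache_capacity), so re-creating a primitive
// with identical descriptors mostly hits that internal cache.
void run_matmul(dnnl::engine& eng, dnnl::stream& strm,
                float* src, float* wei, float* dst,
                int64_t M, int64_t K, int64_t N) {
  using dt = dnnl::memory::data_type;
  using tag = dnnl::memory::format_tag;

  dnnl::memory::desc src_md({M, K}, dt::f32, tag::ab);
  dnnl::memory::desc wei_md({K, N}, dt::f32, tag::ab);
  dnnl::memory::desc dst_md({M, N}, dt::f32, tag::ab);

  // Recreated on each call; no framework-side caching of the primitive.
  dnnl::matmul::desc matmul_d(src_md, wei_md, dst_md);
  dnnl::matmul::primitive_desc matmul_pd(matmul_d, eng);
  dnnl::matmul matmul_prim(matmul_pd);

  dnnl::memory src_mem(src_md, eng, src);
  dnnl::memory wei_mem(wei_md, eng, wei);
  dnnl::memory dst_mem(dst_md, eng, dst);

  matmul_prim.execute(strm, {{DNNL_ARG_SRC, src_mem},
                             {DNNL_ARG_WEIGHTS, wei_mem},
                             {DNNL_ARG_DST, dst_mem}});
  strm.wait();
}
```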

paddle-bot-old commented:

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@jczaja jczaja added the Intel label Aug 24, 2021
@jakpiase jakpiase self-requested a review August 24, 2021 14:07
jakpiase (Contributor) commented:

@jczaja Could you please update the PR title and include the other kernels that will have caching disabled after this PR?

@jczaja jczaja changed the title [oneDNN] disable caching oneDNN primitives in matmul v2 [oneDNN] disable caching oneDNN primitives in matmul v2, Reduce grad and elementwise_add grad, expand_v2 Aug 26, 2021
jakpiase (Contributor) left a comment:

LGTM

@jczaja jczaja requested a review from lidanqing-intel August 26, 2021 15:10
lidanqing-intel (Contributor) left a comment:

LGTM

@jczaja jczaja merged commit 31f0221 into PaddlePaddle:develop Aug 26, 2021