
Replace matmul with matmul_v2 during oneDNN fuse passes #49108

Merged (6 commits, Jan 3, 2023)

Conversation

Silv3S
Member

@Silv3S Silv3S commented Dec 15, 2022

PR types

Others

PR changes

OPs

Describe

During the PHI migration, the matmul_v2 kernel was migrated to PHI, while matmul was left behind in the common matmul_v2_mkldnn_op.cc file, which could lead to many misunderstandings in the future. We have also maintained support for both operators in each fuse pass, even though they share the same oneDNN kernel.

After merging this PR, it will be easier to introduce fused_matmul and remove the extra attributes from the base operator.

Changes made in this PR:

  • replace matmul with matmul(v2) during fuse passes,
  • remove fusion logic from matmul operator and kernel,
  • adjust namespaces and naming,
  • adjust / remove unit tests which are no longer applicable.
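To illustrate the first bullet, the core rewrite performed by the fuse passes can be sketched as below. This is a minimal standalone sketch, not Paddle's actual pass API: the `OpDesc` struct and `RewriteMatmulToMatmulV2` helper are hypothetical stand-ins for the real graph/IR types.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical, simplified stand-in for an operator descriptor.
struct OpDesc {
  std::string type;
  std::map<std::string, float> attrs;
};

// Sketch of the rewrite applied during the fuse passes: every matmul node
// is retyped to matmul_v2, and the fusion-only "alpha" attribute is kept
// only when it actually changes the result (alpha != 1.0f).
void RewriteMatmulToMatmulV2(std::vector<OpDesc>* graph) {
  for (auto& op : *graph) {
    if (op.type == "matmul") {
      op.type = "matmul_v2";
      auto it = op.attrs.find("alpha");
      if (it != op.attrs.end() && it->second == 1.0f) {
        op.attrs.erase(it);  // drop the no-op scaling attribute
      }
    }
  }
}
```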

@paddle-bot

paddle-bot bot commented Dec 15, 2022

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@Silv3S Silv3S added the Intel label Dec 15, 2022
@paddle-bot-old paddle-bot-old bot added the contributor External developers label Dec 15, 2022
Contributor

@jczaja jczaja left a comment


LGTM

@Silv3S Silv3S requested a review from zyfncg December 20, 2022 12:53
Contributor

@sfraczek sfraczek left a comment


LGTM!

@zyfncg zyfncg requested a review from YuanRisheng December 21, 2022 02:13
Comment on lines +86 to +88
if (matmul_alpha != 1.0f) {
matmul_op->SetAttr("alpha", matmul_alpha);
}
Contributor

@zyfncg zyfncg Dec 21, 2022


alpha is not an attribute of matmul_v2, so I'm not sure whether this alpha can work in the kernel of matmul_v2. Please do a check ~

Member Author


Yes, it works because alpha is added as an extra oneDNN-specific attribute in ops_extra_info.h. I'm aware that it's not an elegant solution, so in the next PR I will introduce fused_matmul with all necessary attributes declared in the OpMaker.
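For context on what the guarded attribute does: alpha simply scales the matmul result, i.e. out = alpha * (X x Y), which is why the pass only keeps it when alpha != 1.0f. A minimal scalar sketch of that semantics (the `MatmulAlpha` helper is hypothetical, not Paddle or oneDNN code, using plain row-major matrices):

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the extra "alpha" attribute's semantics:
// out = alpha * (X x Y), for an m-by-k matrix X and a k-by-n matrix Y.
std::vector<float> MatmulAlpha(const std::vector<float>& x,
                               const std::vector<float>& y,
                               int m, int k, int n, float alpha) {
  std::vector<float> out(m * n, 0.0f);
  for (int i = 0; i < m; ++i) {
    for (int j = 0; j < n; ++j) {
      float acc = 0.0f;
      for (int p = 0; p < k; ++p) {
        acc += x[i * k + p] * y[p * n + j];
      }
      out[i * n + j] = alpha * acc;  // apply the scaling attribute
    }
  }
  return out;
}
```

With alpha == 1.0f the scaling is a no-op, so dropping the attribute in that case changes nothing.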

Contributor

@yeliang2258 yeliang2258 left a comment


LGTM

Contributor

@zhangting2020 zhangting2020 left a comment


LGTM for skip_check_grad_ci

@Silv3S
Member Author

Silv3S commented Jan 3, 2023

@jczaja please merge

@jczaja jczaja merged commit 2c444df into PaddlePaddle:develop Jan 3, 2023
@Silv3S Silv3S deleted the clean_matmuls branch January 3, 2023 10:36
@Silv3S Silv3S restored the clean_matmuls branch January 3, 2023 20:35
Silv3S added a commit to Silv3S/Paddle that referenced this pull request Jan 3, 2023
chalsliu pushed a commit that referenced this pull request Jan 4, 2023
@Silv3S Silv3S deleted the clean_matmuls branch January 4, 2023 12:47
Labels
contributor External developers Intel
7 participants