
Transfer MultiHeadAttention's matmul to v2 op #36222

Merged: 4 commits merged into PaddlePaddle:develop on Dec 10, 2021

Conversation

@FrostML (Contributor) commented Sep 29, 2021

PR types

Others

PR changes

APIs

Describe

Transfer MultiHeadAttention to v2 op.

  • matmul -> matmul_v2 (see the sketch after the test conclusions below).

More performance information can be found in the QA report.

Test conclusions:

  • Neither static graph nor dynamic graph mode shows an abnormal performance drop of more than 5%.

  • Dynamic graph mode: the largest 8-GPU performance drop is for the case transformer big bs4096 amp fp16: -3.55%.

  • Static graph mode: the largest 8-GPU performance drop is for the case bert large seqlen512 fp32 bs10: -2.82%.
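
For context, a minimal sketch of what the matmul -> matmul_v2 migration looks like in MultiHeadAttention's scaled dot-product step. Shapes and variable names here are illustrative, not the exact diff from this PR: `paddle.matmul` dispatches to the matmul_v2 kernel in Paddle 2.x, while the legacy `paddle.fluid.layers.matmul` op exposed a fused `alpha` scaling argument.

```python
import paddle

# Illustrative shapes following MultiHeadAttention's internal layout:
# [batch_size, num_heads, seq_len, head_dim]
q = paddle.randn([2, 8, 16, 64])
k = paddle.randn([2, 8, 16, 64])
head_dim = 64

# The legacy matmul op fused the 1/sqrt(head_dim) scale via `alpha`:
#   product = paddle.fluid.layers.matmul(q, k, transpose_y=True,
#                                        alpha=head_dim ** -0.5)
# paddle.matmul (matmul_v2) has no `alpha`, so the scale is applied
# to an input tensor instead:
product = paddle.matmul(q * (head_dim ** -0.5), k, transpose_y=True)

weights = paddle.nn.functional.softmax(product)
print(weights.shape)  # [2, 8, 16, 16]
```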

@paddle-bot-old commented

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

@paddle-bot-old commented

Sorry to inform you that the CIs for 3a4c032 passed more than 7 days ago. To prevent PR conflicts, please re-run all CIs manually.

@jeff41404 (Contributor) left a comment

lgtm

@XiaoguangHu01 (Contributor) left a comment

LGTM

@raindrops2sea (Collaborator) commented

Please update the description.

@jeff41404 merged commit 6549405 into PaddlePaddle:develop on Dec 10, 2021