
[Prim][PIR] Sink layer_norm forward prim rule #58679

Merged
merged 14 commits into from
Nov 15, 2023

Conversation

cyber-pioneer
Contributor

@cyber-pioneer cyber-pioneer commented Nov 3, 2023

PR types

Others

PR changes

Others

Description

Pcard-66975
Sink layer_norm forward prim rule


paddle-bot bot commented Nov 3, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.


paddle-bot bot commented Nov 3, 2023

❌ The PR is not created using PR's template. You can refer to this Demo.
Please use PR's template, it helps save our maintainers' time so that more developers get helped.

@cyber-pioneer cyber-pioneer changed the title Move ln [Prim][PIR] Sink layer_norm forward prim rule Nov 13, 2023
Charles-hit previously approved these changes Nov 13, 2023


# The xshape output is no longer used after decomp, but None is returned to keep the output count the same as the original op
decomp_output_unused_op = ["squeeze", "unsqueeze"]
Contributor

What does this mean?

Contributor Author

After some ops are decomposed, certain intermediate outputs are no longer needed, so the composite mechanism no longer generates them.

@@ -503,6 +503,8 @@ def _get_batch_norm_none_var(op):
"unsqueeze2": ["XShape"],
}

pir_ops_contain_none = ["pd_op.squeeze", "pd_op.unsqueeze"]
Contributor

It would be better to add a comment here noting that after decomposition, some intermediate outputs of these ops may become None.

Contributor Author

ok

@@ -503,6 +503,9 @@ def _get_batch_norm_none_var(op):
"unsqueeze2": ["XShape"],
}

# Some intermediate outputs (e.g. xshape) are no longer used after decomp, but None is returned to keep the output count the same as the original op
decomp_ops_list_contain_unused_output = ["pd_op.squeeze", "pd_op.unsqueeze"]
Contributor

This could later be moved into the decomposition directory.

Contributor Author

This will be auto-generated later and placed under that directory.
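The pattern discussed in this thread can be illustrated with a small standalone sketch (hypothetical helper names, not actual Paddle code): a decomp rule for a squeeze-like op computes only the real output and fills the unused intermediate slot with None, so the decomposed op keeps the same output arity as the original.

```python
# Ops whose decomposition leaves an intermediate output slot as None
# (mirrors the list added in this PR).
decomp_ops_contain_unused_output = ["pd_op.squeeze", "pd_op.unsqueeze"]


def squeeze_decomp(x_shape, axes):
    """Hypothetical decomp rule for a squeeze-like op.

    Returns (out_shape, xshape): the squeezed shape, plus None in place of
    the xshape intermediate output, which is unused after decomposition but
    kept to match the original op's output count.
    """
    out_shape = [
        d for i, d in enumerate(x_shape) if not (d == 1 and i in axes)
    ]
    xshape = None  # intermediate output: unused after decomp
    return out_shape, xshape
```

A caller that expects two outputs (as with the original op) still unpacks both values; it simply ignores the None placeholder.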

@cyber-pioneer cyber-pioneer merged commit 4b8e702 into PaddlePaddle:develop Nov 15, 2023
SecretXV pushed a commit to SecretXV/Paddle that referenced this pull request Nov 28, 2023
* pir sink decomp support symbol overload

* fix code

* move layer_norm

* support layer_norm op

* fix code

* add type cast

* remove unused code

* fix code

* fix code