[auto parallel] complete elementwise ops spmd rules of LLaMa2 for eager semi auto parallel #58474
Conversation
Force-pushed from 32e2513 to 0e6e506
Force-pushed from bf27068 to 8f37184
Force-pushed from fa0f150 to 9382027
* fused_linear_param_grad_add_pass
* modify
* skip cpu ci
* add muti_presion modify
* modify
* modify
* modify
* modify
Force-pushed from 9382027 to 1d8a01e
@@ -30,6 +30,10 @@ SpmdInfo ElementwiseUnaryInferSpmdReverse(const DistMetaTensor& x,
SpmdInfo ElementwiseUnaryGradInferSpmd(const DistMetaTensor& x,
                                       const DistMetaTensor& out_grad);

SpmdInfo ElementwiseUnaryGradInferSpmd(const DistMetaTensor& x,
Rule-level unit tests for this module are missing; you can refer to test/auto_parallel/spmd_rules/test_elementwise_rule.py.
Will add them in a follow-up PR.
@@ -42,5 +46,11 @@ SpmdInfo ElementwiseBinaryGradInferSpmd(const DistMetaTensor& x,
                                        const DistMetaTensor& out_grad,
                                        int64_t axis = -1);

SpmdInfo ElementwiseBinaryGradInferSpmd(const DistMetaTensor& x,
Same as above.
A follow-up PR will be submitted to add them.
@@ -30,6 +30,10 @@ SpmdInfo ElementwiseUnaryInferSpmdReverse(const DistMetaTensor& x,
SpmdInfo ElementwiseUnaryGradInferSpmd(const DistMetaTensor& x,
                                       const DistMetaTensor& out_grad);

SpmdInfo ElementwiseUnaryGradInferSpmd(const DistMetaTensor& x,
Is this overload added to adapt to the API? Could you add some comments to explain it?
It adapts to different API signatures. For unary elementwise ops, for example, some operators' backward depends only on the input while others depend on both the input and the output: sin_grad(const Tensor& x, const Tensor& out_grad, Tensor* x_grad) vs. silu_grad(const Tensor& x, const Tensor& out, const Tensor& out_grad, Tensor* x_grad).
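For clarity, a minimal header-style sketch of the two unary grad overloads being discussed: the first matches the existing declaration shown in the diff, while the parameter list of the second (the extra out argument) is inferred from the reviewer's explanation rather than copied from the PaddlePaddle source.

// Existing overload: for ops whose backward only needs the input, e.g.
// sin_grad(const Tensor& x, const Tensor& out_grad, Tensor* x_grad).
SpmdInfo ElementwiseUnaryGradInferSpmd(const DistMetaTensor& x,
                                       const DistMetaTensor& out_grad);

// Added overload (sketch): for ops whose backward also needs the forward
// output, e.g. silu_grad(const Tensor& x, const Tensor& out,
//                        const Tensor& out_grad, Tensor* x_grad).
// The "out" parameter here is an assumption based on the comment above.
SpmdInfo ElementwiseUnaryGradInferSpmd(const DistMetaTensor& x,
                                       const DistMetaTensor& out,
                                       const DistMetaTensor& out_grad);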
LGTM
LGTM for yaml file
Your PR has been submitted successfully. Thank you for your contribution to the open-source project!
…er semi auto parallel (PaddlePaddle#58474)
* add elementwise spmd rules for eager semi auto parallel
* support bool dtype for s_to_r_reshared
* increase test_semi_auto_parallel_basic timeout for pass ci
* prompt rtol tolerance from 1e-5 to 1e-6
Co-authored-by: xiaoguoguo626807 <100397923+xiaoguoguo626807@users.noreply.github.com>
PR types: New features
PR changes: Others
Description
Pcard-73145
Support bool dtype for sharding
Add yaml configs and unit tests for the following ops, which are used in the LLaMa2 model.