support auto generate for prelu #51913

Merged
Changes from 2 commits
135 changes: 0 additions & 135 deletions paddle/fluid/operators/prelu_op.cc

This file was deleted.

10 changes: 10 additions & 0 deletions paddle/phi/api/yaml/backward.yaml
@@ -1083,6 +1083,16 @@
    func : pow_triple_grad
    data_type : x

- backward_op : prelu_grad
  forward : prelu(Tensor x, Tensor alpha, str data_format="NCHW", str mode="all") -> Tensor(out)
  args : (Tensor x, Tensor alpha, Tensor out_grad, str data_format, str mode)
  output : Tensor(x_grad), Tensor(alpha_grad)
  infer_meta :
    func : GeneralBinaryGradInferMeta
Contributor:
The cause of the mkldnn failure has been identified: GeneralBinaryGradInferMeta cannot be used directly here, and a new function needs to be added. Suggested changes:

1. Change this line to PreluGradInferMeta.
2. Add the following function after PixelUnshuffleGradInferMeta in paddle/phi/infermeta/backward.cc:
void PreluGradInferMeta(const MetaTensor& x,
                        const MetaTensor& y,
                        MetaTensor* dx,
                        MetaTensor* dy) {
  if (dx) {
    dx->share_dims(x);
  }
  if (dy) {
    dy->share_dims(y);
  }
}
3. Add the matching declaration after PixelUnshuffleGradInferMeta in the header file paddle/phi/infermeta/backward.h:
void PreluGradInferMeta(const MetaTensor& x,
                        const MetaTensor& y,
                        MetaTensor* dx,
                        MetaTensor* dy);

Contributor Author:
> 2. PixelUnshuffleGradInferMeta

Got it, added.

    param: [x, alpha]
  kernel :
    func : prelu_grad
Ainavo marked this conversation as resolved.
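For context: with the review suggestion above applied, the backward.yaml entry would presumably end up roughly as follows (a sketch reconstructed from the comment thread, not the exact merged diff):

- backward_op : prelu_grad
  forward : prelu(Tensor x, Tensor alpha, str data_format="NCHW", str mode="all") -> Tensor(out)
  args : (Tensor x, Tensor alpha, Tensor out_grad, str data_format, str mode)
  output : Tensor(x_grad), Tensor(alpha_grad)
  infer_meta :
    func : PreluGradInferMeta   # replaces GeneralBinaryGradInferMeta per the review
    param : [x, alpha]
  kernel :
    func : prelu_grad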

- backward_op : put_along_axis_grad
  forward : put_along_axis (Tensor arr, Tensor indices, Tensor value, int axis, str reduce = "assign") -> Tensor(out)
  args : (Tensor arr, Tensor indices, Tensor out_grad, int axis, str reduce)
10 changes: 0 additions & 10 deletions paddle/phi/api/yaml/legacy_backward.yaml
@@ -950,16 +950,6 @@
    func : pool3d_grad
    param : [x, out, out_grad, kernel_size, strides, paddings, ceil_mode, exclusive, data_format, pooling_type, global_pooling, adaptive, padding_algorithm]

- backward_op : prelu_grad
  forward : prelu(Tensor x, Tensor alpha, str data_format, str mode) -> Tensor(out)
  args : (Tensor x, Tensor alpha, Tensor out_grad, str data_format, str mode)
  output : Tensor(x_grad), Tensor(alpha_grad)
  infer_meta :
    func : GeneralBinaryGradInferMeta
    param: [x, alpha]
  kernel :
    func : prelu_grad

- backward_op : prod_grad
  forward : prod (Tensor x, IntArray dims, bool keep_dim, bool reduce_all) -> Tensor(out)
  args : (Tensor x, Tensor out, Tensor out_grad, IntArray dims, bool keep_dim, bool reduce_all)
9 changes: 0 additions & 9 deletions paddle/phi/api/yaml/legacy_ops.yaml
@@ -1292,15 +1292,6 @@
    param : [x, kernel_size, strides, paddings, ceil_mode, exclusive, data_format, pooling_type, global_pooling, adaptive, padding_algorithm]
  backward : pool3d_grad

- op : prelu
  args : (Tensor x, Tensor alpha, str data_format, str mode)
  output : Tensor(out)
  infer_meta :
    func : PReluInferMeta
  kernel :
    func : prelu
  backward : prelu_grad

- op : prior_box
  args : (Tensor input, Tensor image, float[] min_sizes, float[] aspect_ratios, float[] variances, float[] max_sizes = {}, bool flip=true, bool clip=true, float step_w=0.0, float step_h=0.0, float offset=0.5, bool min_max_aspect_ratios_order=false)
  output : Tensor(out), Tensor(var)
6 changes: 6 additions & 0 deletions paddle/phi/api/yaml/op_compat.yaml
@@ -1338,6 +1338,12 @@

- op : prelu
  backward : prelu_grad
  inputs :
    { x : X, alpha : Alpha}
  outputs :
    out : Out
  attrs :
    { data_format : data_format, mode : mode}
Ainavo marked this conversation as resolved.
  extra :
    attrs : [bool use_mkldnn = false, str mkldnn_data_type = "float32", bool is_test = false]

9 changes: 9 additions & 0 deletions paddle/phi/api/yaml/ops.yaml
@@ -1089,6 +1089,15 @@
    data_type : x
  backward : pow_grad

- op : prelu
Contributor:
Looking at ci-coverage, this failure may have been introduced by test_dropout_nd_op.py. That unit test has been disabled for now, so you can rerun the related CI.
Contributor Author (@Ainavo, Mar 28, 2023):
Hi, about the earlier ci-coverage failure: re-running it directly reproduces the same ci-coverage error. I also pulled the latest code on my local develop branch with git pull and ran git merge develop on this branch, which triggered CI automatically, and the same error appeared.
Contributor:
I will debug this locally afterwards to help resolve the bug; you can go ahead and claim other tasks.

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

OK.

  args : (Tensor x, Tensor alpha, str data_format="NCHW", str mode="all")
  output : Tensor(out)
  infer_meta :
    func : PReluInferMeta
  kernel :
    func : prelu
Ainavo marked this conversation as resolved.
  backward : prelu_grad
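The infer_meta entry above points at the existing PReluInferMeta shape function in phi. Its declaration in paddle/phi/infermeta/binary.h is assumed to look roughly like the following, with the parameter order mirroring the YAML args; the exact signature should be checked against the header:

// Assumed declaration of the shape/dtype inference function referenced by
// `infer_meta : func : PReluInferMeta`; parameter order mirrors the YAML args.
void PReluInferMeta(const MetaTensor& x,
                    const MetaTensor& alpha,
                    const std::string& data_format,
                    const std::string& mode,
                    MetaTensor* out,
                    MetaConfig config = MetaConfig());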

- op : put_along_axis
  args : (Tensor arr, Tensor indices, Tensor values, int axis, str reduce = "assign")
  output : Tensor(out)
34 changes: 0 additions & 34 deletions paddle/phi/ops/compat/prelu_sig.cc

This file was deleted.
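The deleted compat file held the hand-written argument mapping from the legacy X/Alpha/Out names to the phi prelu kernel, which the inputs/outputs/attrs section added to op_compat.yaml now expresses declaratively. As a rough illustration only (names and attribute order are assumptions, not the verbatim deleted source), such a mapping file looks like this:

// Illustrative sketch of a phi compat argument-mapping file; this is not the
// verbatim content of the deleted prelu_sig.cc.
#include "paddle/phi/core/compat/op_utils.h"

namespace phi {

KernelSignature PReluOpArgumentMapping(const ArgumentMappingContext& ctx) {
  // Map legacy fluid names (X, Alpha, Out) onto the phi kernel signature.
  return KernelSignature("prelu", {"X", "Alpha"}, {"data_format", "mode"}, {"Out"});
}

KernelSignature PReluGradOpArgumentMapping(const ArgumentMappingContext& ctx) {
  return KernelSignature("prelu_grad",
                         {"X", "Alpha", "Out@GRAD"},
                         {"data_format", "mode"},
                         {"X@GRAD", "Alpha@GRAD"});
}

}  // namespace phi

PD_REGISTER_ARG_MAPPING_FN(prelu, phi::PReluOpArgumentMapping);
PD_REGISTER_ARG_MAPPING_FN(prelu_grad, phi::PReluGradOpArgumentMapping);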