
[phi] move shape op #40248

Merged
merged 5 commits into PaddlePaddle:develop
Mar 10, 2022

Conversation

Liu-xiandong
Member

PR types

New features

PR changes

OPs

Describe

[phi] move shape op
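For context on the op being moved: the shape op returns, for any input tensor, a 1-D tensor holding the input's dimensions. A minimal sketch of that contract, using a plain std::vector stand-in rather than the real phi::DenseTensor type (the `ShapeOf` helper name and the int32 output dtype are assumptions here, not the PR's actual code):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for the dims of a phi::DenseTensor; the real type also carries
// allocation, dtype, and layout info that is irrelevant to this sketch.
using Dims = std::vector<int64_t>;

// Sketch of what a shape kernel computes: a 1-D result whose elements
// are the input's dimensions (int32 output assumed for illustration).
std::vector<int32_t> ShapeOf(const Dims& in_dims) {
  std::vector<int32_t> out;
  out.reserve(in_dims.size());
  for (int64_t d : in_dims) {
    out.push_back(static_cast<int32_t>(d));
  }
  return out;
}
```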

@paddle-bot-old

paddle-bot-old bot commented Mar 8, 2022

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

Review threads (outdated, resolved):
- paddle/phi/kernels/gpu/shape_kernel.cu
- paddle/phi/kernels/selected_rows/shape_kernel.cc
- paddle/phi/kernels/selected_rows/shape_kernel.h
Contributor

@chenwhql chenwhql left a comment


Suggest following up later to refine this further.

void ShapeKernel(const Context& ctx,
const SelectedRows& input,
DenseTensor* out) {
auto in_var = input;
Contributor


The suggestion from 日升 is reasonable. If this code is changed to ShapeKernel<T, Context>(dev_ctx, input.value(), out);, isn't the logic the same?
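The delegation the reviewer proposes can be illustrated with simplified mock types (the DenseTensor, SelectedRows, and Context structs below are stand-ins, not the real phi classes): the SelectedRows overload simply forwards to the dense-tensor kernel on input.value() instead of copying the variable first.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal mocks standing in for phi::DenseTensor / phi::SelectedRows,
// just to show the forwarding; the real classes live in paddle/phi/core.
struct DenseTensor {
  std::vector<int64_t> dims;
  std::vector<int32_t> data;  // holds the shape result in this sketch
};

struct SelectedRows {
  DenseTensor value_;
  const DenseTensor& value() const { return value_; }
};

struct Context {};  // stand-in for the device context

// Dense-tensor shape kernel: writes input's dims into out.
template <typename T, typename Ctx>
void ShapeKernel(const Ctx& /*ctx*/, const DenseTensor& input,
                 DenseTensor* out) {
  out->dims = {static_cast<int64_t>(input.dims.size())};
  out->data.clear();
  for (int64_t d : input.dims) {
    out->data.push_back(static_cast<int32_t>(d));
  }
}

// SelectedRows overload, written as the reviewer suggests: delegate to the
// dense kernel on input.value() rather than copying the input variable.
template <typename T, typename Ctx>
void ShapeKernel(const Ctx& ctx, const SelectedRows& input, DenseTensor* out) {
  ShapeKernel<T, Ctx>(ctx, input.value(), out);
}
```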


namespace phi {

template <typename T, typename Context>
Contributor


Could this be placed directly under the kernels directory, like reshape? The shape implementation is the same on every device.
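The point about a device-agnostic implementation can be sketched as follows: shape only reads the dims metadata, never the tensor's device memory, so a single templated definition can serve every backend. CPUContext and GPUContext below are empty mocks, not phi's real context classes.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Mock contexts standing in for phi::CPUContext / phi::GPUContext.
// Shape never touches device memory, so the kernel body is identical
// for every context type and can live once under the kernels directory.
struct CPUContext {};
struct GPUContext {};

template <typename Context>
std::vector<int32_t> ShapeKernel(const Context& /*ctx*/,
                                 const std::vector<int64_t>& dims) {
  // The "computation" is just copying metadata into the output.
  std::vector<int32_t> out(dims.begin(), dims.end());
  return out;
}
```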

@xingfeng01 xingfeng01 merged commit 575dea8 into PaddlePaddle:develop Mar 10, 2022
5 participants