Move compare OPs to phi #39970
Conversation
Thanks for your contribution!

ctx.template Alloc<bool>(out);
std::vector<const DenseTensor*> ins{&x, &y};
std::vector<DenseTensor*> outs{out};
paddle::operators::LaunchElementwiseCudaKernel<ElementwiseType::kBinary,
The code has been migrated to phi; use BroadcastKernel under funcs instead.
std::vector<const DenseTensor*> ins{&x, &y};
std::vector<DenseTensor*> outs{&tmp};
paddle::operators::LaunchSameDimsElementwiseCudaKernel<bool>(
The code has been migrated to phi; use ElementwiseKernel instead.
#include "paddle/fluid/operators/elementwise/elementwise_op_broadcast.cu.h"
#include "paddle/fluid/operators/elementwise/elementwise_op_impl.cu.h"
These can be replaced with the phi header files.
done
#include "paddle/phi/kernels/compare_kernel.h"

#include "paddle/phi/core/dense_tensor.h"
dense_tensor.h should no longer need to be included.
KernelSignature LessThanArgumentMapping(const ArgumentMappingContext& ctx) {
  return KernelSignature("less_than", {"X", "Y"}, {"axis"}, {"Out"});
}

KernelSignature LessEqualArgumentMapping(const ArgumentMappingContext& ctx) {
  return KernelSignature("less_equal", {"X", "Y"}, {"axis"}, {"Out"});
}

KernelSignature GreaterThanArgumentMapping(const ArgumentMappingContext& ctx) {
  return KernelSignature("greater_than", {"X", "Y"}, {"axis"}, {"Out"});
}

KernelSignature GreaterEqualArgumentMapping(const ArgumentMappingContext& ctx) {
  return KernelSignature("greater_equal", {"X", "Y"}, {"axis"}, {"Out"});
}

KernelSignature EqualArgumentMapping(const ArgumentMappingContext& ctx) {
  return KernelSignature("equal", {"X", "Y"}, {"axis"}, {"Out"});
}

KernelSignature NotEqualArgumentMapping(const ArgumentMappingContext& ctx) {
  return KernelSignature("not_equal", {"X", "Y"}, {"axis"}, {"Out"});
}

KernelSignature EqualAllArgumentMapping(const ArgumentMappingContext& ctx) {
  return KernelSignature("equal_all", {"X", "Y"}, {}, {"Out"});
}
Can these argument mapping functions be omitted and the default mapping used directly?
… move-compare-op-to-phi
LGTM
PR types
Function optimization
PR changes
OPs
Describe
Move a series of compare OPs to phi, including less_than, less_equal, greater_than, greater_equal, equal, not_equal, and equal_all.