Fix instance_norm, conv2d_xpu, and inplace optimizer bugs. #52627
Conversation
Your PR was submitted successfully. Thank you for your contribution to the open-source project!
@@ -186,6 +186,5 @@ PD_REGISTER_KERNEL(assign_value,
int,
float,
double,
int64_t,
phi::dtype::float16) {}
Why doesn't the assign kernel support fp16?
The assign_value kernel was never designed for fp16. Each kernel's type is bound to a matching attribute: the fp32 kernel needs an fp32_values attribute, and the int64 kernel needs an int64_values attribute, but there is no fp16_values attribute for the fp16 type, so adding an fp16 kernel registration is not appropriate.
saved_mean->data<float>(),
saved_var->data<float>(),
true);
DenseTensor scale_data;
float* scale_data_ptr = nullptr;
DenseTensor one_scale; // or dummy_scale
if (!scale_ptr) {
one_scale.Resize({c});
// xdnn interface expects float data
dev_ctx.template Alloc<float>(&one_scale);
phi::funcs::set_constant(dev_ctx, &one_scale, static_cast<float>(1));
scale_data_ptr = one_scale.data<float>();
} else {
scale_data_ptr = scale_ptr->data<float>();
}
Done.
h,
w,
epsilon,
scale_ptr == nullptr ? scale_data.data<float>()
scale_data_ptr
epsilon,
scale_ptr == nullptr ? scale_data.data<float>()
: scale_ptr->data<float>(),
bias_ptr == nullptr ? bias_data.data<float>() : bias_ptr->data<float>(),
bias_data_ptr
Force-pushed from 3823f84 to d496241.
const float* bias_data_fp32 = nullptr;
const auto* bias_ptr = bias.get_ptr();
if (bias_ptr == nullptr) {
DenseTensor bias_data;
Can it be defined inside the {} block? Wouldn't bias_data_fp32 become a dangling pointer?
scale_data_fp32 = scale_data.data<float>();
} else if (scale_ptr->dtype() ==
phi::CppTypeToDataType<phi::dtype::float16>::Type()) {
float* scale_data_temp =
When is the memory behind scale_data_temp released? When RAII_GUARD is destructed?
dev_ctx.template Alloc<float>(&scale_data);
phi::funcs::set_constant(dev_ctx, &scale_data, static_cast<float>(1));
scale_data_fp32 = scale_data.data<float>();
} else if (scale_ptr->dtype() ==
Shouldn't the condition be:
scale_ptr->dtype() != phi::CppTypeToDataType<float>::Type()
LGTM
Force-pushed from d496241 to e3962e2.
LGTM
PR types
Bug fixes
PR changes
OPs
Describe
Fix a segmentation fault in the instance_norm operator when scale == null or bias == null.
Fix the issue where, when the operator preceding an inplace-optimized operator is a feed operator, the model input variable's shape is modified, causing the second inference run to fail.
Fix the incorrect has_bias flag in the conv2d_xpu operator.
Remove the fp16 registration of the assign_value operator on the XPU backend.