fix test_retinanet_detection_output #59813
Conversation
Your PR has been submitted successfully. Thank you for your contribution to the open-source project!
op_item->num_results() == output_defs.size(), but received op_item->num_results():1 != output_defs.size():0.
This kind of bug usually has one of two causes. Here the cause is that the operator's kernel is registered into the phi operator system as a STRUCT kernel, which breaks the phi kernel resolution. The fix is to add the newly defined operator name to LegacyOpList in paddle/fluid/pir/dialect/operator/utils/utils.cc; see the sketch below.
If the same error still appears after applying that fix, the other cause is that the wrong kernel key was selected; in that case, refer to the interfaces configuration mentioned in the comments below and try that instead.
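A minimal sketch of that registration, assuming LegacyOpList in utils.cc is a set of op names and that the generated op class exposes a name() accessor (verify both against the current file):

```cpp
// paddle/fluid/pir/dialect/operator/utils/utils.cc (sketch, entry form assumed)
const std::unordered_set<std::string> LegacyOpList = {
    // ... existing legacy (STRUCT-kernel) ops ...
    // Registering the new STRUCT-kernel op here lets the PIR lowering resolve
    // it as a legacy (fluid) kernel instead of going through the phi kernel
    // signature, which is what produced the empty output_defs above.
    paddle::dialect::RetinanetDetectionOutputOp::name(),
};
```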
@@ -109,6 +109,7 @@ test_elementwise_mul_op
test_elementwise_pow_op
test_erf_op
test_erfinv_op
test_etinanet_detection_output
Suggested change:
- test_etinanet_detection_output
+ test_retinanet_detection_output
- op: retinanet_detection_output
  args: (Tensor[] bboxes, Tensor[] scores, Tensor[] anchors, Tensor iminfo, float score_threshold, int64_t nms_top_k, float nms_threshold, float nms_eta, int64_t keep_top_k)
  output: Tensor(out)
  infer_meta:
    func: RetinanetDetectionOutputInferMeta
    param: [bboxes, scores, anchors, iminfo]
  kernel:
    func: retinanet_detection_output
  optional: bboxes, scores, anchors
Suggested change (replace the block above with):
- op: retinanet_detection_output
  args: (Tensor[] bboxes, Tensor[] scores, Tensor[] anchors, Tensor iminfo, float score_threshold, int64_t nms_top_k, float nms_threshold, float nms_eta, int64_t keep_top_k)
  output: Tensor(out)
  infer_meta:
    func: RetinanetDetectionOutputInferMeta
    param: [bboxes, scores, anchors, iminfo]
  kernel:
    func: retinanet_detection_output
    data_type : scores
  interfaces: RetinanetDetectionOutputOpParseKernelKey
Two questions to confirm with you:
- Why add the optional field? retinanet_detection_output_op.cc declares these inputs AsDuplicable, but not AsDispensable.
- The interfaces field is added to migrate RetinanetDetectionOutputOp::GetExpectedKernelType from the old operator system. Besides the yaml field, you also need to define a RetinanetDetectionOutputOpParseKernelKey function in paddle/fluid/pir/dialect/operator/interface/interface.cc; you can follow UniqueOpParseKernelKey defined in [PIR] Add parse kernel key interface #59124 (see the sketch after this list).
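A rough sketch of the shape such a function could take, modeled on the description above. The return type KernelKeyTuple, the pir::VectorType / DenseTensorType casts, the TransToPhiDataType helper, and the operand index for scores are all assumptions to verify against UniqueOpParseKernelKey in #59124, not verified code:

```cpp
// paddle/fluid/pir/dialect/operator/interface/interface.cc (sketch only)
KernelKeyTuple RetinanetDetectionOutputOpParseKernelKey(pir::Operation* op) {
  // `scores` is a Tensor[] input, so its operand type is assumed to be a
  // vector type whose elements are dense tensors; take the first element's
  // dtype as the kernel dtype, mirroring the old
  // RetinanetDetectionOutputOp::GetExpectedKernelType.
  auto scores_vec = op->operand_source(1).type().dyn_cast<pir::VectorType>();
  auto scores_type = scores_vec[0].dyn_cast<paddle::dialect::DenseTensorType>();
  phi::DataType data_type = TransToPhiDataType(scores_type.dtype());
  // Leave the backend undefined so the default kernel placement applies.
  return {data_type, phi::Backend::UNDEFINED};
}
```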
const MetaTensor& iminfo,
MetaTensor* out,
MetaConfig config) {
  std::vector<decltype(bboxes[0]->dims())> bboxes_dims;
You need a PADDLE_ENFORCE(bboxes.size() > 0) check first. The PADDLE_ENFORCE checks in the original infershape also need to be migrated, except for those that only verify whether inputs/outputs exist. A sketch is below.
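A minimal sketch of that leading check inside RetinanetDetectionOutputInferMeta, assuming the usual phi error builders (PADDLE_ENFORCE_GT with phi::errors::InvalidArgument); the message text is illustrative:

```cpp
// Guard against an empty bboxes list before dereferencing bboxes[0] below.
PADDLE_ENFORCE_GT(
    bboxes.size(),
    0UL,
    phi::errors::InvalidArgument(
        "The Input(BBoxes) of retinanet_detection_output must not be empty."));
```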
Sorry to inform you that 33fb40a's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
PR types
Others
PR changes
Others
Description
PIR op unit test fix.
Fix the unit test test_retinanet_detection_output (No).
This is my first time doing this and I'm not very familiar with it yet, so please take a look~
@xingmingyyj @kangguangli
Does the unit test pass with FLAGS_enable_pir_in_executor enabled after the fix: not tested.
(The unrelated changes are from switching to the wrong branch; they will be reverted at the end.)