[Bug] Check 'invalid_axis == axes.cend()' failed at openvino-dev\src\core\src\validation_util.cpp:885 #17958

Closed
xuewenyuan opened this issue Jun 9, 2023 · 3 comments
Labels
category: PDPD FE OpenVINO PaddlePaddle FrontEnd support_request

Comments

@xuewenyuan

System information (version)
  • OpenVINO Source => GitHub (enable deformable_detr for PDFE #17525)
  • OpenVINO Version => 2022.3
  • Operating System / Platform => Windows 64 Bit
  • Compiler => Visual Studio 2019 / CMake
  • Problem classification => Model Conversion
  • Device use => CPU
  • Framework => PaddlePaddle
  • Model name => Deformable DETR
Detailed description

An error occurs when I compile a customized PDPD model (Deformable DETR). How could I solve it?

Traceback (most recent call last):
  File "test_chartdetr.py", line 227, in <module>
    ov_model = core.compile_model(ov_model, device_name="CPU")
  File "path_to_openvino\python\python3.8\openvino\runtime\ie_api.py", line 398, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Check 'false' failed at path_to_openvino\src\inference\src\core.cpp:117:
Check 'invalid_axis == axes.cend()' failed at path_to_openvino\src\core\src\validation_util.cpp:885:
While validating node 'opset1::Squeeze Squeeze_33153 (TopK_33152[1]:i32[?,?,1], Constant_33151[0]:i32[]) -> (i64[?,?])' with friendly_name 'Squeeze_33153':
 Parameter axis 2147483647 out of the tensor rank range [-3, 2].
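
For context, the reported axis 2147483647 is exactly INT32_MAX. That sentinel value typically shows up when an axis attribute was never set, or when a negative axis (e.g. axis=-1) could not be normalized against a dynamically ranked tensor. A minimal sanity check of the value (an illustration, not part of the original report):

import numpy as np

# 2147483647 in the error message is the int32 sentinel, not a usable axis.
assert np.iinfo(np.int32).max == 2147483647
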
Steps to reproduce

The customized model has the same framework as DETR, except for detr_head and post_process.

# Assumed PaddleDetection-style imports for the snippet below; the MLP,
# linear_init_, constant_ and inverse_sigmoid helpers come from ppdet's
# modeling utilities (exact import paths omitted here).
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from ppdet.core.workspace import register


@register
class CustomizedDETRHead(nn.Layer):
    __shared__ = ['num_classes', 'hidden_dim', 'points_per_group']
    __inject__ = ['loss']

    def __init__(self,
                 num_classes=80,
                 hidden_dim=512,
                 nhead=8,
                 num_mlp_layers=3,
                 loss='ChartDETRLoss',
                 points_per_group=14):
        super(CustomizedDETRHead, self).__init__()
        self.num_classes = num_classes
        self.hidden_dim = hidden_dim
        self.nhead = nhead
        self.loss = loss
        self.points_per_group = points_per_group

        self.score_head = nn.Linear(hidden_dim, self.num_classes)
        self.coord_head = MLP(hidden_dim,
                              hidden_dim,
                              output_dim=2,
                              num_layers=num_mlp_layers)

        self._reset_parameters()

    def _reset_parameters(self):
        linear_init_(self.score_head)
        constant_(self.score_head.bias, -4.595)
        constant_(self.coord_head.layers[-1].weight)

        with paddle.no_grad():
            bias = paddle.zeros_like(self.coord_head.layers[-1].bias)
            bias[2:] = -2.0
            self.coord_head.layers[-1].bias.set_value(bias)

    @classmethod
    def from_config(cls, cfg, hidden_dim, nhead, input_shape):
        return {'hidden_dim': hidden_dim, 'nhead': nhead}

    def forward(self, out_transformer, body_feats, inputs=None):
        r"""
        Args:
            out_transformer (Tuple): (feats: [num_levels, batch_size,
                                                num_queries, hidden_dim],
                            memory: [batch_size,
                                \sum_{l=0}^{L-1} H_l \cdot W_l, hidden_dim],
                            reference_points: [batch_size, num_queries, 2])
            body_feats (List(Tensor)): list[[B, C, H, W]]
            inputs (dict): dict(inputs)
        """
        feats, memory, reference_points = out_transformer
        reference_points = inverse_sigmoid(reference_points.unsqueeze(0))
        outputs_coord = self.coord_head(feats)

        outputs_coord = outputs_coord[:, :, :, :2] + reference_points

        outputs_coord = F.sigmoid(outputs_coord)

        lvls, b, q, c = feats.shape
        feats_reshape = feats.reshape((lvls, b, -1, self.points_per_group, c))
        outputs_logit = self.score_head(feats_reshape[:, :, :, 0, :])
        outputs_coord = outputs_coord.reshape((lvls, b, -1, self.points_per_group, 2)).flatten(3)

        if self.training:
            assert inputs is not None
            assert 'gt_shapes' in inputs and 'gt_class' in inputs

            out = {
                "pred_logits": outputs_logit[-1],
                "pred_coords": outputs_coord[-1],
            }
            out["aux_outputs"] = self._set_aux_loss(
                outputs_logit, outputs_coord
            )

            return self.loss(out, inputs)
        else:
            return (outputs_logit[-1], outputs_coord[-1])

    def _set_aux_loss(self, outputs_logits, outputs_coord):
        return [
            {"pred_logits": a, "pred_coords": b}
            for a, b in zip(outputs_logits[:-1], outputs_coord[:-1])
        ]
@register
class CustomizedPostProcess(object):
    __shared__ = ['num_classes', 'use_focal_loss', 'points_per_group']
    __inject__ = []

    def __init__(self,
                 num_classes=80,
                 num_top_queries=100,
                 points_per_group=14,
                 use_focal_loss=False):
        super(CustomizedPostProcess, self).__init__()
        self.num_classes = num_classes
        self.num_top_queries = num_top_queries
        self.use_focal_loss = use_focal_loss
        self.points_per_group = points_per_group

    def __call__(self, head_out, im_shape, scale_factor):
        """
        Decode the bbox.

        Args:
            head_out (tuple): bbox_pred, cls_logit and masks of bbox_head output.
            im_shape (Tensor): The shape of the input image.
            scale_factor (Tensor): The scale factor of the input image.
        Returns:
            bbox_pred (Tensor): The output prediction with shape [N, 6], including
                labels, scores and bboxes. The size of bboxes are corresponding
                to the input image, the bboxes may be used in other branch.
            bbox_num (Tensor): The number of prediction boxes of each batch with
                shape [bs], and is N.
        """
        out_logits, out_coords = head_out

        origin_shape = paddle.floor(im_shape / scale_factor + 0.5)
        img_h, img_w = paddle.split(origin_shape, 2, axis=-1)
        origin_shape = paddle.concat(
            [img_w, img_h]*self.points_per_group, axis=-1).reshape([-1, 1, self.points_per_group*2])
        pred_coords = out_coords * origin_shape

        scores = F.sigmoid(out_logits) if self.use_focal_loss else F.softmax(out_logits)[:, :, :-1]

        if not self.use_focal_loss:
            scores, labels = scores.max(-1), scores.argmax(-1)
            if scores.shape[1] > self.num_top_queries:
                scores, index = paddle.topk(
                    scores, self.num_top_queries, axis=-1)
                batch_ind = paddle.arange(
                    end=scores.shape[0]).unsqueeze(-1).tile(
                        [1, self.num_top_queries])
                index = paddle.stack([batch_ind, index], axis=-1)
                labels = paddle.gather_nd(labels, index)
                pred_coords = paddle.gather_nd(pred_coords, index)
        else:
            scores, index = paddle.topk(
                scores.flatten(1), self.num_top_queries, axis=-1)
            labels = index % self.num_classes
            index = index // self.num_classes
            batch_ind = paddle.arange(end=scores.shape[0]).unsqueeze(-1).tile(
                [1, self.num_top_queries])
            index = paddle.stack([batch_ind, index], axis=-1)
            pred_coords = paddle.gather_nd(pred_coords, index)
        
        pred_coords = paddle.concat(
            [
                labels.unsqueeze(-1).astype('float32'), scores.unsqueeze(-1),
                pred_coords
            ],
            axis=-1)
        shape_num = paddle.to_tensor(
            pred_coords.shape[1], dtype='int32').tile([pred_coords.shape[0]])
        pred_coords = pred_coords.reshape([-1, 2+self.points_per_group*2])
        return pred_coords, shape_num
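
In the converted graph the failing Squeeze consumes a TopK output, which points at the top-k / argmax indexing path in CustomizedPostProcess.__call__ above (the author later confirms the root cause was a tensor-index error in their code; the exact fix is not shared in this thread). As a hedged sketch only, not the author's actual fix, this class of export problem is often avoided by using explicit non-negative axes on tensors whose rank is known, so the frontend never has to resolve a negative axis against a dynamic rank:

# Hypothetical sketch, not the author's actual fix: spell out non-negative
# axes where the rank is known so the exporter does not have to normalize -1.
import paddle
import paddle.nn.functional as F

logits = paddle.rand([2, 300, 80])             # [batch, queries, classes]
scores = F.softmax(logits, axis=2)             # axis=2 instead of axis=-1
per_query = scores.max(axis=2)                 # [batch, queries]
top_scores, top_index = paddle.topk(per_query, k=100, axis=1)
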
Issue submission checklist
  • I report the issue, it's not a question
  • I checked the problem with documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution
  • There is reproducer code and related data files: images, videos, models, etc.
@xuewenyuan xuewenyuan added bug Something isn't working support_request labels Jun 9, 2023
@ilya-lavrenov ilya-lavrenov added the category: PDPD FE OpenVINO PaddlePaddle FrontEnd label Jun 9, 2023
@avitial avitial removed the bug Something isn't working label Jun 9, 2023
@Iffa-Intel

@xuewenyuan Could you share:

  1. The steps and commands you ran up to the point of the error, for clarification
  2. The relevant model files

@xuewenyuan
Author

@Iffa-Meah
Thank you! I found an error with tensor indexes in my code, which caused the above problem. After I fixed it, OpenVINO can successfully read and compile the PDPD model.
Maybe this issue can be closed.

@Iffa-Intel

@xuewenyuan thanks for informing us, and glad that your issue is resolved.
Closing this issue; feel free to re-open or start a new issue if additional assistance is needed.
