
infer multiple images at once #2133

Closed
3 tasks done
oym050922021 opened this issue May 30, 2023 · 7 comments

Comments

@oym050922021

oym050922021 commented May 30, 2023

Checklist

  • I have searched related issues but cannot get the expected help.
  • I have read the FAQ documentation but cannot get the expected help.
  • The bug has not been fixed in the latest version.

Describe the bug

How can I infer multiple images at once with the API named mmdeploy.apis.inference_model?

Reproduction

from glob import glob

from mmdeploy.apis import inference_model

def get_batch_image(images_list, batchsize):
    """Split a list of image paths into batches of at most `batchsize`."""
    images_cnt = len(images_list)
    batch_cnt = images_cnt // batchsize
    residue = images_cnt - batch_cnt * batchsize
    batch_images = []
    for i in range(batch_cnt):
        batch_images.append(images_list[i * batchsize:(i + 1) * batchsize])
    if residue:
        batch_images.append(images_list[-residue:])
    return batch_images

model_cfg = '/root/Ouyangmei/openmmlab/mmdetection/configs/yolox/yolox_s_8xb8-300e_coco.py'
deploy_cfg = 'configs/mmdet/detection/detection_onnxruntime_dynamic.py'
backend_files = ['mmdeploy_models/mmdeploy/yolox/end2end.onnx']
device = 'cpu'  # not defined in the original snippet; assumed here

images_folder = '/root/Ouyangmei/openmmlab/mmdetection/datasets/teeth/images/test'
images_list = glob(images_folder + '/*.png')
batch_images = get_batch_image(images_list, batchsize=4)
for images in batch_images:
    results = inference_model(model_cfg, deploy_cfg, backend_files, images, device)
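As an aside, the batching helper in the reproduction can be written more compactly with list slicing; the sketch below is a minimal, hypothetical alternative (the name `chunk` is made up here, not part of mmdeploy):

```python
def chunk(items, size):
    """Split a list into consecutive batches of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```

For example, `chunk(list(range(10)), 4)` yields three batches: two of four elements and one with the two leftovers.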

Environment

05/30 17:45:40 - mmengine - INFO - TorchVision: 0.14.1+cu117
05/30 17:45:40 - mmengine - INFO - OpenCV: 4.7.0
05/30 17:45:40 - mmengine - INFO - MMEngine: 0.7.2
05/30 17:45:40 - mmengine - INFO - MMCV: 2.0.0rc4
05/30 17:45:40 - mmengine - INFO - MMCV Compiler: GCC 9.3
05/30 17:45:40 - mmengine - INFO - MMCV CUDA Compiler: 11.7
05/30 17:45:40 - mmengine - INFO - MMDeploy: 1.1.0+faf05fe
05/30 17:45:40 - mmengine - INFO - 

05/30 17:45:40 - mmengine - INFO - **********Backend information**********
05/30 17:45:40 - mmengine - INFO - tensorrt:    None
05/30 17:45:40 - mmengine - INFO - ONNXRuntime: 1.14.1
05/30 17:45:40 - mmengine - INFO - ONNXRuntime-gpu:     None
05/30 17:45:40 - mmengine - INFO - ONNXRuntime custom ops:      Available
05/30 17:45:40 - mmengine - INFO - pplnn:       None
05/30 17:45:40 - mmengine - INFO - ncnn:        None
05/30 17:45:40 - mmengine - INFO - snpe:        None
05/30 17:45:40 - mmengine - INFO - openvino:    None
05/30 17:45:40 - mmengine - INFO - torchscript: 1.13.1
05/30 17:45:40 - mmengine - INFO - torchscript custom ops:      NotAvailable
05/30 17:45:40 - mmengine - INFO - rknn-toolkit:        None
05/30 17:45:40 - mmengine - INFO - rknn-toolkit2:       None
05/30 17:45:40 - mmengine - INFO - ascend:      None
05/30 17:45:40 - mmengine - INFO - coreml:      None
05/30 17:45:40 - mmengine - INFO - tvm: None
05/30 17:45:40 - mmengine - INFO - vacc:        None
05/30 17:45:40 - mmengine - INFO - 

05/30 17:45:40 - mmengine - INFO - **********Codebase information**********
05/30 17:45:40 - mmengine - INFO - mmdet:       3.0.0
05/30 17:45:40 - mmengine - INFO - mmseg:       None
05/30 17:45:40 - mmengine - INFO - mmpretrain:  None
05/30 17:45:40 - mmengine - INFO - mmocr:       None
05/30 17:45:40 - mmengine - INFO - mmagic:      None
05/30 17:45:40 - mmengine - INFO - mmdet3d:     None
05/30 17:45:40 - mmengine - INFO - mmpose:      None
05/30 17:45:40 - mmengine - INFO - mmrotate:    None
05/30 17:45:40 - mmengine - INFO - mmaction:    None
05/30 17:45:40 - mmengine - INFO - mmrazor:     None

Error traceback

(oym) root@h2-02:~/Ouyangmei/openmmlab/mmdeploy#  cd /root/Ouyangmei/openmmlab/mmdeploy ; /usr/bin/env /root/anaconda3/envs/oym/bin/python /root/.vscode-server/extensions/ms-python.python-2023.8.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 59543 -- /root/Ouyangmei/openmmlab/mmdeploy/test_onnx-test.py 
05/30 17:46:30 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
05/30 17:46:30 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "mmdet_tasks" registry tree. As a workaround, the current "mmdet_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
05/30 17:46:30 - mmengine - WARNING - Failed to search registry with scope "mmdet" in the "backend_detectors" registry tree. As a workaround, the current "backend_detectors" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet" is a correct scope, or whether the registry is initialized.
05/30 17:46:30 - mmengine - INFO - Successfully loaded onnxruntime custom ops from /root/Ouyangmei/openmmlab/mmdeploy/mmdeploy/lib/libmmdeploy_onnxruntime_ops.so
2023-05-30 17:46:31.694697408 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running Where node. Name:'/Where' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:540 void onnxruntime::BroadcastIterator::Init(ptrdiff_t, ptrdiff_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 4 by 8400

Traceback (most recent call last):
  File "/root/Ouyangmei/openmmlab/mmdeploy/test_onnx-test.py", line 81, in <module>
    results = inference_model(model_cfg, deploy_cfg, backend_files, images, device)
  File "/root/Ouyangmei/openmmlab/mmdeploy/mmdeploy/apis/inference.py", line 52, in inference_model
    result = model.test_step(model_inputs)
  File "/root/anaconda3/envs/oym/lib/python3.9/site-packages/mmengine/model/base_model/base_model.py", line 145, in test_step
    return self._run_forward(data, mode='predict')  # type: ignore
  File "/root/anaconda3/envs/oym/lib/python3.9/site-packages/mmengine/model/base_model/base_model.py", line 326, in _run_forward
    results = self(**data, mode=mode)
  File "/root/anaconda3/envs/oym/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/Ouyangmei/openmmlab/mmdeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 192, in forward
    outputs = self.predict(inputs)
  File "/root/Ouyangmei/openmmlab/mmdeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 292, in predict
    outputs = self.wrapper({self.input_name: imgs})
  File "/root/anaconda3/envs/oym/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/Ouyangmei/openmmlab/mmdeploy/mmdeploy/backend/onnxruntime/wrapper.py", line 97, in forward
    self.__ort_execute(self.io_binding)
  File "/root/Ouyangmei/openmmlab/mmdeploy/mmdeploy/utils/timer.py", line 67, in fun
    result = func(*args, **kwargs)
  File "/root/Ouyangmei/openmmlab/mmdeploy/mmdeploy/backend/onnxruntime/wrapper.py", line 113, in __ort_execute
    self.sess.run_with_iobinding(io_binding)
  File "/root/anaconda3/envs/oym/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 287, in run_with_iobinding
    self._sess.run_with_iobinding(iobinding._iobinding, run_options)
RuntimeError: Error in execution: Non-zero status code returned while running Where node. Name:'/Where' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:540 void onnxruntime::BroadcastIterator::Init(ptrdiff_t, ptrdiff_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 4 by 8400
@RunningLeon
Collaborator

@oym050922021 hi, have you changed anything in yolox model config?

@oym050922021
Author

oym050922021 commented May 31, 2023

@oym050922021 hi, have you changed anything in yolox model config?

Hi, I changed the type of the loss function in the yolox model config. I also changed line 99 in mmdeploy/mmdeploy/codebase/mmdet/models/dense_heads/yolox_head.py and it worked.

[Screenshot: Snipaste_2023-05-31_11-22-44]

@RunningLeon
Collaborator

@oym050922021 hi, glad to know it worked. Actually, we could remove lines 97-99, since the score filtering is already done in NMS. Could you kindly give us a PR to fix it?
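To illustrate why a separate score filter before NMS can be redundant: a typical NMS routine already drops detections below a score threshold before suppression. The toy sketch below is not mmdeploy's actual implementation; all names are made up for illustration:

```python
import numpy as np

def nms_with_threshold(boxes, scores, score_thr, iou_thr):
    """Greedy NMS that applies the score threshold itself, so no
    separate pre-filtering of low-score boxes is needed."""
    keep = []
    order = np.argsort(scores)[::-1]          # highest score first
    order = order[scores[order] >= score_thr]  # score filtering inside NMS
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the kept box against the remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thr]      # suppress overlapping boxes
    return keep
```

With two heavily overlapping high-score boxes and one low-score box, only the top-scoring box survives: the overlap is suppressed by the IoU test and the low-score box by the threshold, without any filtering outside the function.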

@oym050922021
Author

@RunningLeon hi, thank you very much for your reply. What does "PR" stand for, please?

@RunningLeon
Collaborator

Pull Request. You could fork our repo and contribute your code to us.

@github-actions

github-actions bot commented Jun 8, 2023

This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.

@github-actions github-actions bot added the Stale label Jun 8, 2023
@github-actions

This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jun 14, 2023
2 participants