[Bug] KeyError: 'mmpretrain is not in the Codebases registry. #2077

Closed
3 tasks done
ATang0729 opened this issue May 14, 2023 · 8 comments

ATang0729 commented May 14, 2023

Checklist

  • I have searched related issues but cannot get the expected help.
  • I have read the FAQ documentation but cannot get the expected help.
  • The bug has not been fixed in the latest version.

Describe the bug

  • I want to convert my .pth file built by mmpretrain to an onnx model
  • I followed the installation instructions for Linux-x86_64 with CPU only.
  • However, when I tried to convert a ResNeXt101 model to onnx, the following error happened:
    (error screenshot; see the error traceback below)
  • I have installed the latest version of mmpretrain and mmdeploy.
  • When I ran the example code, the same error still occurred.

Reproduction

To make it easier to reproduce, you can just run the example code shown below.

cd mmdeploy

# download resnet18 model from mmpretrain model zoo
mim download mmpretrain --config resnet18_8xb32_in1k --dest .

# convert mmpretrain model to onnxruntime model with dynamic shape
python tools/deploy.py \
    configs/mmpretrain/classification_onnxruntime_dynamic.py \
    resnet18_8xb32_in1k.py \
    resnet18_8xb32_in1k_20210831-fbbb1da6.pth \
    tests/data/tiger.jpeg \
    --work-dir mmdeploy_models/mmpretrain/ort \
    --device cpu \
    --show \
    --dump-info

I am sure that I didn't make any modifications to the code or config.

Environment

(base) [root@VM-4-17-centos mmdeploy]# python tools/check_env.py
05/15 01:10:49 - mmengine - INFO - 

05/15 01:10:49 - mmengine - INFO - **********Environmental information**********
05/15 01:10:51 - mmengine - INFO - sys.platform: linux
05/15 01:10:51 - mmengine - INFO - Python: 3.8.3 (default, May 19 2020, 18:47:26) [GCC 7.3.0]
05/15 01:10:51 - mmengine - INFO - CUDA available: False
05/15 01:10:51 - mmengine - INFO - numpy_random_seed: 2147483648
05/15 01:10:51 - mmengine - INFO - GCC: gcc (GCC) 11.2.0
05/15 01:10:51 - mmengine - INFO - PyTorch: 2.0.0
05/15 01:10:51 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=0, USE_CUDNN=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

05/15 01:10:51 - mmengine - INFO - TorchVision: 0.15.0
05/15 01:10:51 - mmengine - INFO - OpenCV: 4.7.0
05/15 01:10:51 - mmengine - INFO - MMEngine: 0.7.3
05/15 01:10:51 - mmengine - INFO - MMCV: 2.0.0
05/15 01:10:51 - mmengine - INFO - MMCV Compiler: GCC 7.3
05/15 01:10:51 - mmengine - INFO - MMCV CUDA Compiler: not available
05/15 01:10:51 - mmengine - INFO - MMDeploy: 1.0.0+26b66ef
05/15 01:10:51 - mmengine - INFO - 

05/15 01:10:51 - mmengine - INFO - **********Backend information**********
05/15 01:10:51 - mmengine - INFO - tensorrt:    None
05/15 01:10:51 - mmengine - INFO - ONNXRuntime: None
05/15 01:10:51 - mmengine - INFO - pplnn:       None
05/15 01:10:51 - mmengine - INFO - ncnn:        None
05/15 01:10:51 - mmengine - INFO - snpe:        None
05/15 01:10:51 - mmengine - INFO - openvino:    None
05/15 01:10:51 - mmengine - INFO - torchscript: 2.0.0
05/15 01:10:51 - mmengine - INFO - torchscript custom ops:      NotAvailable
05/15 01:10:51 - mmengine - INFO - rknn-toolkit:        None
05/15 01:10:51 - mmengine - INFO - rknn-toolkit2:       None
05/15 01:10:51 - mmengine - INFO - ascend:      None
05/15 01:10:51 - mmengine - INFO - coreml:      None
05/15 01:10:51 - mmengine - INFO - tvm: None
05/15 01:10:51 - mmengine - INFO - vacc:        None
05/15 01:10:51 - mmengine - INFO - 

05/15 01:10:51 - mmengine - INFO - **********Codebase information**********
05/15 01:10:51 - mmengine - INFO - mmdet:       3.0.0
05/15 01:10:51 - mmengine - INFO - mmseg:       None
05/15 01:10:51 - mmengine - INFO - mmcls:       None
05/15 01:10:51 - mmengine - INFO - mmocr:       None
05/15 01:10:51 - mmengine - INFO - mmedit:      None
05/15 01:10:51 - mmengine - INFO - mmdet3d:     None
05/15 01:10:51 - mmengine - INFO - mmpose:      1.0.0
05/15 01:10:51 - mmengine - INFO - mmrotate:    None
05/15 01:10:51 - mmengine - INFO - mmaction:    None
05/15 01:10:51 - mmengine - INFO - mmrazor:     None

Error traceback

05/15 01:07:26 - mmengine - WARNING - Failed to get codebase, got: 'Cannot get key by value "mmpretrain" of <enum \'Codebase\'>'. Then export a new codebase in Codebase MMPRETRAIN: mmpretrain
05/15 01:07:26 - mmengine - WARNING - Import mmdeploy.codebase.mmpretrain.deploy failedPlease check whether the module is the custom module.No module named 'mmdeploy.codebase.mmpretrain'
Traceback (most recent call last):
  File "tools/deploy.py", line 335, in <module>
    main()
  File "tools/deploy.py", line 129, in main
    export2SDK(
  File "/root/miniconda3/lib/python3.8/site-packages/mmdeploy/backend/sdk/export_info.py", line 347, in export2SDK
    deploy_info = get_deploy(deploy_cfg, model_cfg, work_dir, device)
  File "/root/miniconda3/lib/python3.8/site-packages/mmdeploy/backend/sdk/export_info.py", line 262, in get_deploy
    _, customs = get_model_name_customs(
  File "/root/miniconda3/lib/python3.8/site-packages/mmdeploy/backend/sdk/export_info.py", line 61, in get_model_name_customs
    task_processor = build_task_processor(
  File "/root/miniconda3/lib/python3.8/site-packages/mmdeploy/apis/utils/utils.py", line 46, in build_task_processor
    import_codebase(codebase_type, custom_module_list)
  File "/root/miniconda3/lib/python3.8/site-packages/mmdeploy/codebase/__init__.py", line 35, in import_codebase
    codebase = get_codebase_class(codebase_type)
  File "/root/miniconda3/lib/python3.8/site-packages/mmdeploy/codebase/base/mmcodebase.py", line 86, in get_codebase_class
    return CODEBASE.build({'type': codebase.value})
  File "/root/miniconda3/lib/python3.8/site-packages/mmengine/registry/registry.py", line 548, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/root/miniconda3/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 100, in build_from_cfg
    raise KeyError(
KeyError: 'mmpretrain is not in the Codebases registry. Please check whether the value of `mmpretrain` is correct or it was registered as expected. More details can be found at https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#import-the-custom-module'
@ATang0729 (Author) commented

Well, I noticed ONNXRuntime: None in the environment section. But I have installed ONNX Runtime using the following commands:

wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
export ONNXRUNTIME_DIR=$(pwd)/onnxruntime-linux-x64-1.8.1
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH

And I can successfully convert a .pth model built by mmpose to ONNX,
so I guess that ONNXRuntime: None isn't the problem.
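
For reference, a quick sanity check (my own sketch, not something from the thread): the tarball above only ships the prebuilt ONNX Runtime C library, while the ONNXRuntime line in check_env.py appears to track the separate Python onnxruntime package, so the two can be checked independently.

# Python package that check_env.py looks for
python -c "import onnxruntime; print(onnxruntime.__version__)"
# shared library extracted from the tarball
ls ${ONNXRUNTIME_DIR}/lib/libonnxruntime.so*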

@RunningLeon (Collaborator) commented

@ATang0729 Hi, mmpretrain is supported after this PR #2003. You have to install mmdeploy from source.
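
For reference, a minimal sketch of a from-source install (assuming the usual clone-and-editable-install workflow; --recursive is only needed if you later build the custom ops):

# clone mmdeploy with its third-party submodules and install it in editable mode
git clone --recursive https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
pip install -e .
# sanity check that the source install is the one being imported
python -c "import mmdeploy; print(mmdeploy.__version__)"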

RunningLeon self-assigned this May 15, 2023
@ATang0729 (Author) commented

Thanks. I solved this problem. I had originally installed mmdeploy both as a Python package and from source, but placing mmdeploy's source code and the folder containing my .pth file under two different roots led to the problem.

One more question: why don't we just install onnxruntime with pip install onnxruntime instead of:

wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
export ONNXRUNTIME_DIR=$(pwd)/onnxruntime-linux-x64-1.8.1
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH

(source: https://github.com/open-mmlab/mmdeploy/blob/main/docs/en/get_started.md?plain=1#L145)
I can't figure out what these commands are used for, and I'm wondering whether pip install onnxruntime is missing from the document.

@RunningLeon (Collaborator) commented

@ATang0729 These commands are for building mmdeploy's custom ops for ONNX Runtime. If your model does not use custom ops, this part can be skipped.
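
For context, a rough sketch of the build step those environment variables feed into (based on the Linux-x86_64 build guide; exact CMake options may vary between versions):

# build mmdeploy's ONNX Runtime custom ops against the extracted library
cd mmdeploy
mkdir -p build && cd build
cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc)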


6sz commented May 17, 2023

I have also encountered the same problem. How can I solve it?

Do I need to install MMDeploy from source?

https://github.com/open-mmlab/mmdeploy/blob/main/docs/en/01-how-to-build/linux-x86_64.md

Or can I still follow get_started.md, which takes the latest precompiled package as the example, and install that way?

Should MMDeploy and MMPretrain be placed in the same directory or in a subdirectory?

@RunningLeon (Collaborator) commented

@6sz Hi, yes. You need to install mmdeploy from source. You can install mmpretrain with mim install mmpretrain. If you clone mmpretrain instead, you can put it anywhere.
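
A minimal sketch of that setup (the version print is just an illustrative sanity check, not a step from the thread):

# install mmpretrain via mim and confirm it is importable in the same environment
mim install mmpretrain
python -c "import mmpretrain; print(mmpretrain.__version__)"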


6sz commented May 17, 2023

Thank you very much for your reply. At present I have installed MMDetection and MMPretrain, and they are working properly. I have followed the process to create a conda virtual environment and installed mmcv. My next step should be to install according to "Build MMDeploy from source" instead of according to "Linux-x86_64". Is this right?

@ATang0729 (Author) commented

Yes. But I guess that after "Build MMDeploy from source" you still need to run pip install mmdeploy-runtime==1.0.0 or pip install mmdeploy-runtime-gpu==1.0.0.

⚠️ And don't forget to run pip install onnxruntime if you are not using a GPU, as it isn't mentioned in the document.
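
A minimal sketch for the CPU-only case (assuming, as above, that check_env.py reports on the Python onnxruntime package):

pip install onnxruntime
# the ONNXRuntime line should now show a version instead of None
python tools/check_env.py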
