[Bug] RuntimeError: CumSum layer with name 'p2o.CumSum.0' #16153
Do you have any solution if I insist on keeping the model unchanged?
Will think about how to fix it.
@meiyang-intel Thanks! Hope OpenVINO can solve this problem quickly so that PaddleSpeech models can be used with OpenVINO on Intel CPUs. Keep in touch!
In order to support the PaddleSpeech model in OpenVINO, there are two ways. One direct way is Paddle-formatted model input (*.pdmodel); the other is Paddle -> ONNX, with the ONNX format as the OV input. Currently, 3 ops (sine, cosine, set_value) need to be enabled in the Paddle Frontend to directly support Paddle-format input.
@JiehangXie Checked with the PaddlePaddle team: the Paddle operator set_value (with step != 1) cannot be converted to an ONNX op correctly with the current paddle2onnx tool. It's a blocking issue for inferring the model in other frameworks/backends. Could you please also raise a ticket in the PaddlePaddle community?
@yuxu42 We have already converted the .pdmodel and .pdiparams to an ONNX model, and most TTS models in PaddleSpeech can be converted successfully with Paddle2ONNX. You can use this model for testing: https://drive.google.com/file/d/1ci6wMuPb6IWLhkBK7CpiNH8KSUAb-35Y/view?usp=sharing
I met the same problem when compiling a DETR model. Have you solved it?
The ONNX file was converted by paddle2onnx, and it runs successfully under ONNX Runtime.
That will help us address the issue quickly.
1. Packages
2. Install PaddleDetection following the README
3. Export the Paddle model: download the official pretrained Deformable DETR weights (deformable_detr_r50_1x_coco.pdparams)
4. Convert to ONNX
5. Convert ONNX to OpenVINO IR
6. Run the ONNX model in ONNX Runtime (no problem)
7. Run the ONNX model in OpenVINO runtime (RuntimeError)
@xuewenyuan Could you compile OpenVINO with the above PR and then try to load the model? How to compile: https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/build_linux.md
@xczhai |
@xuewenyuan |
@xczhai
@xczhai
However, an error occurred when I tested an image.
ONNX Runtime has no problem with these inputs.
@xuewenyuan
Ref. 111483
@xczhai |
@xuewenyuan
If you hit any problem, please reach out to us.
@xczhai
@xuewenyuan |
Closing this, as the last PR addressing the issue has been merged to the master branch.
System information (version)
FastSpeech2 model trained with PaddlePaddle, converted to OpenVINO.
I have tried both the Paddle-format and ONNX-format models.
With the Paddle-format model, it fails with:
vector too long.
With the ONNX-format model, it fails with:
"CumSum layer with name 'p2o.CumSum.0' doesn't support 'axis' input tensor with non scalar rank"
All models and inference code have been uploaded to Baidu Netdisk and Google Drive:
https://pan.baidu.com/s/1u_aNmiWz8UflW3l2m5cSCg?pwd=quck
https://drive.google.com/file/d/1ci6wMuPb6IWLhkBK7CpiNH8KSUAb-35Y/view?usp=sharing
Thank you all for taking a look!
2023.3.8: tried #14961 and #14993; neither seems to work.
RuntimeError: CPU plug-in doesn't support Squeeze node with inputs num equal 1