Problems when transforming dynamic input models and quantizing static models #729
This is not a solution; I am only recording what I have tried because I don't have time to work on it. This model will probably terminate abnormally for all inference except at input resolutions that are a multiple of 64.
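In practice, this constraint means inputs have to be padded up to the next multiple of 64 before inference. A minimal sketch of such padding (the function name and frame size are illustrative, not from this thread):

```python
import numpy as np

def pad_to_multiple(img: np.ndarray, mult: int = 64) -> np.ndarray:
    """Zero-pad an HWC image so height and width become multiples of `mult`."""
    h, w = img.shape[:2]
    ph = (mult - h % mult) % mult  # rows to add at the bottom
    pw = (mult - w % mult) % mult  # columns to add at the right
    return np.pad(img, ((0, ph), (0, pw), (0, 0)))

frame = np.zeros((270, 480, 3), dtype=np.float32)  # e.g. a 480x270 video frame
padded = pad_to_multiple(frame)                     # -> shape (320, 512, 3)
```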
@PINTO0309 Thank you for your quick reply! Were you trying to show that although the ONNX model supports integer multiples of 64/128 (i.e., dynamic input), it must be fixed to static values when converting to TFLite? As you tried, I can successfully get a float32 TFLite model if I fix the model to a static input; the static model I uploaded to Google Drive is 256x512. Also, to use the quantization options, I fixed one input directly to 0.5 inside the model and changed the remaining inputs to two images.
No. To be precise, since a model with a dynamic input size cannot determine the correct dimension positions during the conversion process, a tentative dummy-sized inference tensor is generated at the beginning of the onnx2tf process and used for shape estimation. There is no guarantee that the input tensor is an image, and all tensor types other than 4D, such as audio data and sensor data, must be assumed, so if there are undefined dimensions in the shape of the ONNX input tensor, a fixed size of 1 is set and dummy inference is performed (see onnx2tf/onnx2tf/utils/common_functions.py, lines 3764 to 3786 at commit ff346ed).
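A minimal sketch of the substitution described above, assuming float32 inputs (this mirrors the idea in common_functions.py, not its exact code): every undefined dimension of an ONNX input is replaced with 1 so a dummy tensor can be built for shape estimation.

```python
import numpy as np
import onnx

model = onnx.load("dynamics_rife.onnx")
dummy_inputs = {}
for inp in model.graph.input:
    dims = []
    for d in inp.type.tensor_type.shape.dim:
        # dim_value is 0 when the dimension is symbolic/undefined
        dims.append(d.dim_value if d.dim_value > 0 else 1)
    # float32 is assumed here; the real code derives the dtype from the model
    dummy_inputs[inp.name] = np.zeros(dims, dtype=np.float32)
```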
Therefore, in this case, the dummy inference with those tentative dimensions fails for this model, and this cannot be fixed immediately. A function that allows users to specify hints about the tensor shape needs to be added, and that will take a long time to implement. Note that TFLite (LiteRT) itself supports dynamic tensor inference.
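For reference, dynamic-shape inference in TFLite (LiteRT) works by resizing the input tensor at runtime and reallocating. A sketch, assuming a float32 model converted with dynamic shapes preserved (the file name is illustrative):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="rife_float32.tflite")
inp = interpreter.get_input_details()[0]
# Resize the first input to the desired resolution, then reallocate
interpreter.resize_tensor_input(inp["index"], [1, 256, 512, 3])
interpreter.allocate_tensors()
interpreter.set_tensor(inp["index"], np.zeros((1, 256, 512, 3), dtype=np.float32))
interpreter.invoke()
```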
As you say, it seems that an error occurs even when the model is fixed to a static shape. It is probably a bug in the dimension judgment processing.
My original model comes from RIFE; I modified the IFNet file in the v4.25 lite version because some of its operations cannot be directly converted to ONNX. The static model I uploaded, together with replace.json, can successfully be converted to float32 without much loss of accuracy.
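For context, a dynamic-input ONNX model like this is typically produced by exporting with symbolic height/width axes. A generic sketch (the module below is a stand-in, not the actual modified IFNet):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Stand-in for the modified IFNet; the real module lives in the RIFE repo."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

dummy = torch.randn(1, 3, 256, 512)
torch.onnx.export(
    TinyNet().eval(), dummy, "dynamic_example.onnx",
    input_names=["input"], output_names=["output"],
    # leaving H and W symbolic is what makes the exported model dynamic
    dynamic_axes={"input": {2: "height", 3: "width"},
                  "output": {2: "height", 3: "width"}},
    opset_version=17,
)
```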
Fixed for static-shape models only:
https://github.com/PINTO0309/onnx2tf/releases/tag/1.26.4
Thanks for your help!
Issue Type
Others
OS
Linux
onnx2tf version number
1.26.3
onnx version number
1.16.1
onnxruntime version number
1.18.1
onnxsim (onnx_simplifier) version number
0.4.33
tensorflow version number
2.18.0
Download URL for ONNX
https://drive.google.com/drive/folders/1BWeNDI2PMmORZqT-ZPkkrmozNMgGxWX5?usp=drive_link
Parameter Replacement JSON
Description
Hi @PINTO0309 , thanks for all of your great work.
I am trying to convert an ONNX model with dynamic inputs to TFLite.
I have 3 problems.
1. The problem with the Concat operator is difficult to solve.
I use the command:
onnx2tf -i dynamics_rife.onnx
and get an error at the Concat operator.
Do you have a solution to this problem?
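A workaround often used for this class of failure is to pin the dynamic dimensions to a static shape at conversion time. A sketch using onnx2tf's Python API; the input name and shape below are assumptions and must be matched to the actual model signature:

```python
import onnx2tf

onnx2tf.convert(
    input_onnx_file_path="dynamics_rife.onnx",
    output_folder_path="saved_model",
    # "input" is an assumed tensor name; check the model's real inputs
    overwrite_input_shape=["input:1,3,256,512"],
)
```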
2. When I use the static ONNX model, the Concat works, but I get another error.
I referred to this issue to write a JSON file to make the changes: https://github.com/PINTO0309/onnx2tf/issues/103
I had a strange problem: I couldn't convert x (onnx::Cast_415) to (1, 256, 512, 2), so I chose to convert y (onnx____Add_431) and output (onnx____Transpose_432) instead to get the correct result.
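The actual replacement entries are not reproduced here; purely for illustration, an entry in the replace.json format documented in issue #103 looks roughly like this (the op name is taken from this thread, and the values are placeholders):

```python
import json

replace = {
    "format_version": 1,
    "operations": [
        {
            "op_name": "onnx____Transpose_432",  # placeholder from the thread
            "param_target": "attributes",
            "param_name": "perm",
            "values": [0, 2, 3, 1],  # assumed NCHW -> NHWC permutation fix
        },
    ],
}
with open("replace.json", "w") as f:
    json.dump(replace, f, indent=2)
```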
After modifying all of the Add operators I was able to get TFLite output, but it was much larger than the original ONNX model. This seems to be a GridSample problem.
onnx2tf -i static_rife_sim.onnx -prf replace.json -oiqt
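A quick way to sanity-check a quantized model produced with -oiqt (the output file name below is illustrative; onnx2tf writes several quantized variants into the output folder):

```python
import numpy as np
import tensorflow as tf

interp = tf.lite.Interpreter(
    model_path="saved_model/static_rife_sim_integer_quant.tflite")
interp.allocate_tensors()
# Feed random data with the declared shapes/dtypes and check the output
for d in interp.get_input_details():
    data = np.random.rand(*d["shape"]).astype(d["dtype"])
    interp.set_tensor(d["index"], data)
interp.invoke()
out = interp.get_output_details()[0]
print(interp.get_tensor(out["index"]).shape)
```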