Support for onnx:CumSum to improve Onnx results #66
Comments
Oh, I didn't notice that the error was about the type of the argument. I'll try this later once I get access to my laptop.
@SirMomster Have you tested this fix yet? I'm still experiencing the inconsistent results at the end of tokens, like what was seen in #12.
@jturner116 I will be trying it out ASAP and will post an update here.
@baudm Was the Torch Hub version automatically updated, and does it include your change, or do I need to build from source?
Torch Hub caches the repo upon first use. Use `parseq = torch.hub.load('baudm/parseq', 'parseq', pretrained=True, force_reload=True).eval()` to force a fresh download.
I am still getting the following warnings on the conversion with the fresh torch.hub download. I'm using torch==1.13.1+cu117. Curious to see if this resembles your experience @SirMomster :)
`if testing and (tgt_in == self.eos_id).any(dim=-1).all():` If I use a sufficiently long word as the dummy input (I actually used Verbandsteffe), the AssertionErrors go away for all of the images in DemoImages. Maybe if you use an image with a word at the maximum character count, the outputs will be stable? EDIT: In my additional testing, inference on words shorter than the dummy input does seem to be stable. See the export sketch below for where the dummy input enters the picture.
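For context, a minimal export sketch, assuming a standard `torch.onnx.export` tracing path; the input size, output file name, and opset version are assumptions, not taken from this thread:

```python
import torch

# Load the pretrained model from Torch Hub (force_reload avoids a stale cache).
parseq = torch.hub.load('baudm/parseq', 'parseq',
                        pretrained=True, force_reload=True).eval()

# Dummy input for tracing; 32x128 is PARSeq's default image size (an assumption here).
# Per the comment above, an image of a maximum-length word may yield a more stable export.
dummy_image = torch.rand(1, 3, 32, 128)

torch.onnx.export(
    parseq, dummy_image, 'parseq.onnx',
    opset_version=14,  # the CumSum operator requires opset >= 11
    input_names=['image'], output_names=['logits'],
)
```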
This is a follow-up to #12.
We noticed a big difference in results between ONNX and Torch model inference, and we believe this might have to do with the fact that we set refine_iters=0, thus skipping the refinement iterations.
According to the following remark by @baudm in #12 (comment), the issue is with the CumSum operator in ONNX, which, per the error:
InferenceError: [ShapeInferenceError] (op_type:CumSum, node name: CumSum_2527): x typestr: T, has unsupported type: tensor(bool)
receives the unsupported type tensor(bool). This matches the ONNX operator spec, which lists no bool support for CumSum: https://github.com/onnx/onnx/blob/main/docs/Operators.md#CumSum
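To illustrate (a small check of my own, not from the thread): eager PyTorch silently promotes a bool tensor to int64 when you call `cumsum`, which is why the line works in Torch, while the traced ONNX graph keeps the bool input on the CumSum node and fails shape inference:

```python
import torch

mask = torch.tensor([False, True, False, True])

# Eager PyTorch promotes bool to int64 before accumulating.
out = mask.cumsum(-1)
print(out, out.dtype)  # tensor([0, 1, 1, 2]) torch.int64

# The exported ONNX graph, however, feeds the bool tensor straight into
# CumSum, which the ONNX spec rejects (hence the ShapeInferenceError above).
```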
Would it be possible to change the type, and thus alter
tgt_padding_mask = ((tgt_in == self.eos_id).cumsum(-1) > 0)
to return a supported type? To my limited understanding, this would improve the accuracy of the ONNX-exported model, since aside from this part both models do more or less the same thing.
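A minimal sketch of one possible change, assuming an explicit cast to int before `cumsum` is acceptable; this follows the type constraint in the ONNX spec and is not confirmed here as the author's actual fix:

```python
# Before: cumsum is traced with a bool input, producing an invalid ONNX CumSum node.
tgt_padding_mask = ((tgt_in == self.eos_id).cumsum(-1) > 0)

# After: cast to int first so the exported CumSum node sees a supported type;
# the trailing `> 0` restores the boolean mask, so the Torch behavior is unchanged.
tgt_padding_mask = ((tgt_in == self.eos_id).int().cumsum(-1) > 0)
```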