
[tests/onnx] Add onnx and model out check #1569

Merged: 4 commits merged into mindee:main from the onnx-close-tests branch on Apr 30, 2024

Conversation

@felixdittrich92 (Contributor) commented Apr 29, 2024

This PR:

  • extends the tests with the missing allclose check for the ONNX export (non-critical: it only warns; see the sketch below)
  • enables pytest's -rs flag so the run also reports why tests were skipped (only as information for us)
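A minimal sketch of what the warn-only check could look like, assuming the test compares the native PyTorch output against the ONNX Runtime output with numpy; the helper name, signature, and tolerance are illustrative assumptions, not the merged code:

```python
import numpy as np
import pytest

# Hypothetical helper: the real checks live in tests/pytorch/test_models_*.py.
def _check_onnx_export(pt_out: np.ndarray, ort_out: np.ndarray, model_name: str) -> None:
    """Non-critical check: skip with a reason instead of failing on small backend drift."""
    if not np.allclose(pt_out, ort_out, atol=1e-4):
        diff = np.mean(np.abs(pt_out - ort_out))
        pytest.skip(f"Output of {model_name}:\nDifference is {diff}")
```

Running the suite with pytest -rs then surfaces these skip reasons in the short test summary, as in the CI log quoted below.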

@felixdittrich92 added the labels ext: tests (Related to tests folder), framework: pytorch (Related to PyTorch backend), framework: tensorflow (Related to TensorFlow backend), topic: onnx (ONNX-related), and type: misc (Miscellaneous) on Apr 29, 2024
@felixdittrich92 felixdittrich92 added this to the 0.9.0 milestone Apr 29, 2024
@felixdittrich92 felixdittrich92 self-assigned this Apr 29, 2024
@felixdittrich92 (Contributor Author) commented:

CC @llFireHawkll


codecov bot commented Apr 29, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.96%. Comparing base (5568612) to head (c7bb257).
Report is 2 commits behind head on main.

❗ Current head c7bb257 differs from pull request most recent head d7d33c8. Consider uploading reports for the commit d7d33c8 to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1569      +/-   ##
==========================================
+ Coverage   95.86%   95.96%   +0.09%     
==========================================
  Files         163      163              
  Lines        7649     7655       +6     
==========================================
+ Hits         7333     7346      +13     
+ Misses        316      309       -7     
Flag Coverage Δ
unittests 95.96% <ø> (+0.09%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@odulcy-mindee (Collaborator) left a comment:

I find the output misleading: https://github.com/mindee/doctr/actions/runs/8875628926/job/24365456870?pr=1569

=========================== short test summary info ============================
SKIPPED [1] tests/pytorch/test_models_detection_pt.py:183: Output of db_mobilenet_v3_large:
Difference is 9.999999747378752e-06
SKIPPED [2] tests/pytorch/test_models_recognition_pt.py:116: too less memory
=========== 235 passed, 3 skipped, 30 warnings in 302.91s (0:05:02) ============

The value seems really tiny, but it's the mean, not the maximum difference observed, as computed before.
Maybe you can say in the message that the outputs are not element-wise equal within a tolerance and provide this mean difference, or return the max diff you observe between the two arrays.

BTW, why mark the results of this test as skipped?

@felixdittrich92 (Contributor Author) commented Apr 30, 2024

> I find the output misleading: https://github.com/mindee/doctr/actions/runs/8875628926/job/24365456870?pr=1569
>
> =========================== short test summary info ============================
> SKIPPED [1] tests/pytorch/test_models_detection_pt.py:183: Output of db_mobilenet_v3_large:
> Difference is 9.999999747378752e-06
> SKIPPED [2] tests/pytorch/test_models_recognition_pt.py:116: too less memory
> =========== 235 passed, 3 skipped, 30 warnings in 302.91s (0:05:02) ============
>
> The value seems really tiny, but it's the mean, not the maximum difference observed, as computed before. Maybe you can say in the message that the outputs are not element-wise equal within a tolerance and provide this mean difference, or return the max diff you observe between the two arrays.
>
> BTW, why mark the results of this test as skipped?

Yeah, we can update the message :)
Skipped is used only in this special case (because we don't want it to fail here); otherwise the test still fails or passes as before :)

Max diff is a good idea 👍
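Following up on that suggestion, a minimal sketch of how the clearer skip message could look, reporting the maximum observed difference; again the helper name and tolerance are illustrative assumptions:

```python
import numpy as np
import pytest

# Hypothetical revision of the check above, using the max instead of the mean diff.
def _check_onnx_export(pt_out: np.ndarray, ort_out: np.ndarray, model_name: str) -> None:
    if not np.allclose(pt_out, ort_out, atol=1e-4):
        max_diff = np.max(np.abs(pt_out - ort_out))
        pytest.skip(
            f"Output of {model_name}: outputs are not element-wise equal within "
            f"the tolerance (max difference: {max_diff})"
        )
```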

@odulcy-mindee previously approved these changes Apr 30, 2024
@felixdittrich92 (Contributor Author) commented Apr 30, 2024

@odulcy-mindee I missed adding 1 line xD Now you can approve and merge ^^

@odulcy-mindee odulcy-mindee merged commit 46d5974 into mindee:main Apr 30, 2024
72 of 79 checks passed
@llFireHawkll commented:
Great, thanks @felixdittrich92 for this update. Really appreciate your help.

@felixdittrich92 felixdittrich92 deleted the onnx-close-tests branch April 30, 2024 17:32
Labels
ext: tests (Related to tests folder) · framework: pytorch (Related to PyTorch backend) · framework: tensorflow (Related to TensorFlow backend) · topic: onnx (ONNX-related) · type: misc (Miscellaneous)
3 participants