
Interpretation script for Yolo's region based output #683

Closed
alontrais opened this issue Sep 1, 2019 · 7 comments
Labels
documentation (Documentation should be updated), Easy to fix (The issue is easy to fix and will probably be released in a next minor release), enhancement (New feature or request)

Comments

@alontrais

Hello, I am new to CVAT. I use OpenVINO to run auto annotation, and I want to use YOLOv3 for this task in CVAT. I converted the YOLO model to OpenVINO format and created the .xml and .bin files.
Now I need to write an interpretation Python script for YOLO's region-based output. How can I do that?
Is there an interpretation file for going from TensorFlow models to OpenVINO?

@nmanovic nmanovic added the documentation (Documentation should be updated), enhancement (New feature or request), Easy to fix (The issue is easy to fix and will probably be released in a next minor release), and good first issue labels Sep 1, 2019
@nmanovic nmanovic added this to the Backlog milestone Sep 1, 2019
@gwestner94

I have the same issue. Are there any existing approaches/scripts for using YOLOv3 models?

@azhavoro
Contributor

azhavoro commented Sep 3, 2019

@gwestner94 Hi, you can find an example of how to use the OpenVINO output from YOLOv3 here: https://github.com/opencv/open_model_zoo/blob/master/demos/python_demos/object_detection_demo_yolov3_async/object_detection_demo_yolov3_async.py
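The core of that demo is decoding each YOLO region layer's output grid into boxes. A minimal sketch of that decoding step, assuming a blob layout of (num_anchors * (5 + num_classes), grid_h, grid_w); the function name, anchor values, and dict keys are illustrative assumptions, not CVAT's actual interpretation-script API:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parse_yolo_region(blob, net_size, img_size, anchors, num_classes, threshold=0.5):
    """Decode one YOLO region output blob of shape
    (num_anchors * (5 + num_classes), grid_h, grid_w) into detections
    with coordinates in original-image pixels. Illustrative sketch only."""
    num_anchors = len(anchors) // 2
    _, grid_h, grid_w = blob.shape
    blob = blob.reshape(num_anchors, 5 + num_classes, grid_h, grid_w)
    detections = []
    for a in range(num_anchors):
        for row in range(grid_h):
            for col in range(grid_w):
                tx, ty, tw, th, obj = blob[a, :5, row, col]
                confidence = sigmoid(obj)
                if confidence < threshold:
                    continue
                # Box center, relative to the whole image (0..1)
                x = (col + sigmoid(tx)) / grid_w
                y = (row + sigmoid(ty)) / grid_h
                # Width/height from anchor priors, relative to network input
                w = np.exp(tw) * anchors[2 * a] / net_size[0]
                h = np.exp(th) * anchors[2 * a + 1] / net_size[1]
                class_id = int(np.argmax(blob[a, 5:, row, col]))
                detections.append({
                    "xmin": (x - w / 2) * img_size[0],
                    "ymin": (y - h / 2) * img_size[1],
                    "xmax": (x + w / 2) * img_size[0],
                    "ymax": (y + h / 2) * img_size[1],
                    "class_id": class_id,
                    "confidence": float(confidence),
                })
    return detections
```

The linked demo additionally maps each output layer's name to its own anchors and applies non-maximum suppression afterwards; this sketch only shows the per-layer grid decoding.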

@jimwormold

> @gwestner94 Hi, you can find an example of how to use the OpenVINO output from YOLOv3 here: https://github.com/opencv/open_model_zoo/blob/master/demos/python_demos/object_detection_demo_yolov3_async/object_detection_demo_yolov3_async.py

Thanks for this response, but it is not quite as simple as that: as I understand it, no imports are available within CVAT. If I compare the example interpreter for SSD (in https://github.com/opencv/cvat/blob/develop/cvat/apps/auto_annotation/README.md) with the equivalent ssd_async demo (https://github.com/opencv/open_model_zoo/blob/master/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py), even that is not entirely straightforward, and YOLO looks like considerably more effort.

I appreciate that I may be able to use the debugger in https://github.com/opencv/cvat/tree/develop/utils/auto_annotation, but I am currently running in a Docker container, and how to achieve that there is again not entirely obvious.

If you could post an interp.py for yolov3 that would be very helpful!

@benhoff
Contributor

benhoff commented Oct 23, 2019

See the interpretation script here: #794

@benhoff
Contributor

benhoff commented Oct 23, 2019

> Thanks for this response, but it is not quite as simple as that: as I understand it, no imports are available within CVAT. If I compare the example interpreter for SSD (in https://github.com/opencv/cvat/blob/develop/cvat/apps/auto_annotation/README.md) with the equivalent ssd_async demo (https://github.com/opencv/open_model_zoo/blob/master/demos/python_demos/object_detection_demo_ssd_async/object_detection_demo_ssd_async.py), even that is not entirely straightforward, and YOLO looks like considerably more effort.

It was considerably more effort :)
Let me know if you run into any issues.

@benhoff
Contributor

benhoff commented Oct 31, 2019

I'd recommend closing this now.
