
Validation for converted models for accuracy #5

Closed
varunarora opened this issue Apr 4, 2018 · 3 comments
varunarora commented Apr 4, 2018

The goal would be to get inference results identical to those of the Paddle models and Paddle inference. @kuke has more thoughts. Here is a notebook on the topic: https://github.com/onnx/tutorials/blob/master/tutorials/CorrectnessVerificationAndPerformanceComparison.ipynb.

Q: Which runtime would we use for ONNX? TensorRT?


kuke commented Apr 5, 2018

A: I think mainly on the server side; TensorRT is only one possible runtime environment.
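The accuracy validation discussed above, as in the linked notebook, boils down to comparing per-output arrays from two inference backends within a numerical tolerance. A minimal sketch in pure NumPy (the helper name `outputs_match` and the tolerance values are illustrative assumptions; loading and running the actual Fluid and ONNX models is omitted):

```python
import numpy as np

def outputs_match(ref_outs, cand_outs, rtol=1e-4, atol=1e-5):
    """Compare lists of output arrays from two inference backends.

    Returns True only if both backends produced the same number of
    outputs and every pair agrees element-wise within tolerance.
    """
    if len(ref_outs) != len(cand_outs):
        return False
    return all(
        a.shape == b.shape and np.allclose(a, b, rtol=rtol, atol=atol)
        for a, b in zip(ref_outs, cand_outs)
    )

# Stand-in outputs; in practice these would come from Fluid inference
# and from an ONNX backend (e.g. Caffe2 or TensorRT) on the same input.
ref = [np.array([0.1, 0.9]), np.array([[1.0, 2.0]])]
cand = [ref[0] + 1e-6, ref[1].copy()]  # tiny cross-backend drift
print(outputs_match(ref, cand))  # prints True: drift is within tolerance
```

A relative-plus-absolute tolerance is the usual choice here because converted models rarely reproduce floating-point results bit-for-bit across backends.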

@abhinavarora abhinavarora self-assigned this Apr 5, 2018
@kuke kuke self-assigned this Apr 9, 2018
varunarora commented

@kuke The server? Do you mean a Fluid ONNX backend? We won't have one for a while, which I guess is why you used Caffe2. Is there a reason you chose Caffe2?


kuke commented Apr 12, 2018

Yes, recently many users have shown interest in converting models from Fluid to ONNX and then running them with other backends on PCs or mobile devices.

I used Caffe2 because the built-in backend in ONNX doesn't work; here is the issue.

Zeref996 pushed a commit to Zeref996/Paddle2ONNX that referenced this issue Aug 24, 2021