
Add script for accuracy validation #11

Merged: 3 commits, Apr 12, 2018

Conversation

@kuke (Author) commented Apr 9, 2018

Resolves #5

To run this validation script, you need to install Caffe2 and use its ONNX backend, because the backend in the onnx/onnx repo has a bug when executing run_model(). At the same time, you need to install the latest ONNX from source; otherwise some other problems will occur.
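
For reference, a minimal sketch of what running the ONNX side through the Caffe2 backend can look like (the imports match the ones reviewed below; the helper name run_onnx and the example input are only illustrative, not the exact validate.py code):

import numpy as np
from onnx import load
from caffe2.python.onnx.backend import Caffe2Backend

def run_onnx(onnx_path, inputs):
    # Load an ONNX model and execute it with the Caffe2 backend.
    model = load(onnx_path)
    rep = Caffe2Backend.prepare(model)
    # Returns a namedtuple of outputs, e.g. Outputs(_0=array(...)).
    return rep.run(inputs)

# Illustrative call with one float32 input batch of shape (10, 13):
# outputs = run_onnx("./fit_a_line.onnx",
#                    [np.random.rand(10, 13).astype(np.float32)])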

@kuke (Author) commented Apr 9, 2018

For example, to verify the fit_a_line model conversion, run

python validate.py --fluid_model ./extras/fit_a_line.inference.model/ --onnx_model ./fit_a_line.onnx

and get the output

-----------  Configuration Arguments -----------
a: 0.0
b: 1.0
batch_size: 10
expected_decimal: 6
fluid_model: ./extras/fit_a_line.inference.model/
onnx_model: ./fit_a_line.onnx
------------------------------------------------
Inference results for fluid model:
[array([[11.572977],
       [15.133337],
       [13.47306 ],
       [13.345606],
       [15.656847],
       [13.662115],
       [16.10838 ],
       [12.218645],
       [14.332058],
       [11.419915]], dtype=float32)]


Inference results for ONNX model:
Outputs(_0=array([[11.572977],
       [15.133337],
       [13.47306 ],
       [13.345606],
       [15.656848],
       [13.662115],
       [16.10838 ],
       [12.218645],
       [14.332058],
       [11.419915]], dtype=float32))


The exported model matches the original Fluid model to 5 decimal places.
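
For reference, a minimal sketch of how this kind of per-element precision check can be expressed with NumPy (expected_decimal mirrors the script argument of the same name; the function name compare_outputs is illustrative, not necessarily the one used in validate.py):

import numpy as np

def compare_outputs(fluid_results, onnx_results, expected_decimal=6):
    # Assert that each Fluid output matches its ONNX counterpart
    # to `expected_decimal` decimal places.
    for fluid_out, onnx_out in zip(fluid_results, onnx_results):
        np.testing.assert_almost_equal(
            np.asarray(fluid_out), np.asarray(onnx_out),
            decimal=expected_decimal)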

@varunarora (Collaborator) left a comment

I think this is really well done.

"--fluid_model",
required=True,
help="The path to PaddlePaddle Fluid model.")
parser.add_argument(
Collaborator: I know one is input and the other is output, but in #9 this arg is named --onnx_model_dir. Let's just make it consistent.

kuke (Author): OK. I will modify the argument in #9.
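
For context, here is a rough sketch of the argument parser this thread refers to, reconstructed from the Configuration Arguments printout above; only the argument names come from that printout, while the defaults, types, and help strings are assumptions:

import argparse

parser = argparse.ArgumentParser(description="Validate a Fluid-to-ONNX conversion.")
parser.add_argument(
    "--fluid_model", required=True,
    help="The path to the PaddlePaddle Fluid model.")
parser.add_argument(
    "--onnx_model", required=True,
    help="The path to the exported ONNX model.")
parser.add_argument(
    "--batch_size", type=int, default=10,
    help="Batch size of the generated test inputs (assumed default).")
parser.add_argument(
    "--expected_decimal", type=int, default=6,
    help="Decimal places the two outputs must agree to (assumed default).")
parser.add_argument(
    "--a", type=float, default=0.0,
    help="Lower bound of the random input range (assumed meaning).")
parser.add_argument(
    "--b", type=float, default=1.0,
    help="Upper bound of the random input range (assumed meaning).")
args = parser.parse_args()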


import paddle.fluid as fluid
from onnx import helper, checker, load
from caffe2.python.onnx.backend import Caffe2Backend
Collaborator: Should we add the requirement for this in our requirements.txt?

kuke (Author): Caffe2 cannot be installed via pip, and its installation is not that straightforward outside an Anaconda environment.

Collaborator: Got it, just add some notes to the README then.

kuke (Author): OK.

    for var_name in feed_target_names
]
input_shapes = [
    shape if shape[0] > 0 else (args.batch_size, ) + shape[1:]
Collaborator: Does this use paddle_onnx_shape, or is this completely different?

@kuke (Author), Apr 10, 2018: paddle_onnx_shape only changes the unknown dim from -1 to 0 (as required by ONNX). This line is used to set the batch size.
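
As a rough illustration of the two steps described here (a sketch under the assumption that paddle_onnx_shape does nothing more than map the unknown dim -1 to 0, as stated above; not the exact validate.py code):

def paddle_onnx_shape(shape):
    # ONNX represents an unknown dimension as 0 rather than Fluid's -1.
    return tuple(0 if dim == -1 else dim for dim in shape)

def resolve_input_shapes(shapes, batch_size):
    # Replace a non-positive leading (batch) dimension with a concrete
    # batch size so that random test inputs can be generated.
    return [
        shape if shape[0] > 0 else (batch_size,) + tuple(shape[1:])
        for shape in shapes
    ]

# Example: a fit_a_line feed var of shape (-1, 13) becomes (10, 13) for batch_size=10.
print(resolve_input_shapes([paddle_onnx_shape((-1, 13))], batch_size=10))  # [(10, 13)]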

@pkuyym (Collaborator) commented Apr 10, 2018

Excellent job. Some questions:

  1. FC --> matmul? Why not FC --> FC?
  2. The current weights FC.W and FC.b use a 'Constant' type. I think we should consider this field
     https://github.com/onnx/onnx/blob/master/onnx/onnx-ml.proto3#L242, which should be a more general solution.

@kuke (Author) commented Apr 10, 2018

@pkuyym

  1. There is no FC op in Fluid.
  2. I cannot find a proper way to initialize the tensor, and using the Constant op to store parameters also makes sense.
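
To illustrate the point being discussed, here is a hedged sketch, using the already-imported onnx.helper, of how an FC computation can be emitted as MatMul + Add with the weights carried by Constant nodes; the node and tensor names (x, y, fc.W, fc.b) and the 13 -> 1 shapes are made up for the example. The alternative @pkuyym points to would store W and b as graph initializers instead of Constant nodes.

import numpy as np
from onnx import helper, numpy_helper, TensorProto

# Hypothetical weight and bias for a 13 -> 1 fully connected layer.
W = np.random.rand(13, 1).astype(np.float32)
b = np.zeros((1,), dtype=np.float32)

w_const = helper.make_node(
    "Constant", inputs=[], outputs=["fc.W"],
    value=numpy_helper.from_array(W, name="fc.W"))
b_const = helper.make_node(
    "Constant", inputs=[], outputs=["fc.b"],
    value=numpy_helper.from_array(b, name="fc.b"))
matmul = helper.make_node("MatMul", inputs=["x", "fc.W"], outputs=["fc.xW"])
add = helper.make_node("Add", inputs=["fc.xW", "fc.b"], outputs=["y"])

graph = helper.make_graph(
    [w_const, b_const, matmul, add], "fc_as_matmul",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [10, 13])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [10, 1])])
model = helper.make_model(graph)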

@varunarora (Collaborator) commented Apr 11, 2018

Hey @kuke, isn't there an FC in Fluid? http://paddlepaddle.org/docs/develop/api/fluid/en/layers.html#permalink-36-fc. Is this actually not an op?

@kuke (Author) commented Apr 12, 2018

@pkuyym (Collaborator) left a comment

LGTM

@kuke merged commit b137393 into PaddlePaddle:master on Apr 12, 2018