Adding inference to Linear Regression model #6672
Conversation
print("Now performing inference...") | ||
fluid.io.load_persistables(exe, "./fit_a_line.model/") | ||
for data in test_reader(): | ||
out, y_pred, y_label = exe.run(fluid.default_main_program(), |
You must clone the main_program first.
Please refer to https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/fluid/tests/book/test_recognize_digits_mlp.py#L35.
We need to create an inference program first.
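For concreteness, here is a minimal sketch (not the actual change) of how the quoted loop might look once a separate inference program is used, roughly following the pattern in the linked test. It assumes `inference_program` is a clone of the forward-only part of the main program, that `exe` and `place` are the executor and place already created for training, and that `x`, `y`, `y_predict`, and `avg_cost` are the variables built for the fit_a_line network; the exact feeding code may differ between Fluid versions.

```python
# Sketch: run the cloned, forward-only inference_program instead of
# fluid.default_main_program() when doing inference.
test_reader = paddle.batch(paddle.dataset.uci_housing.test(), batch_size=20)
feeder = fluid.DataFeeder(place=place, feed_list=[x, y])

print("Now performing inference...")
fluid.io.load_persistables(exe, "./fit_a_line.model/")
for data in test_reader():
    # Fetch the loss and the predictions from the forward-only program.
    avg_loss, y_pred = exe.run(inference_program,
                               feed=feeder.feed(data),
                               fetch_list=[avg_cost, y_predict])
```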
After discussing with @reyoung and @QiJune, we believe we should provide a general train/test interface, which means that for most users the framework will do the clone internally.
Thanks for pointing that out, @dzhwinter and @QiJune.
Adding inference to the fluid `fit_a_line` model. This will help us when we rewrite the models for the book. It will also help @kexinzhao and me understand how inference works before we start working on the inference wrapper for Mobile.
We have a few questions about `exe.run` during inference: should we clone the `fluid.default_main_program()` before we start training, or is the method I used here correct?
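Regarding whether to clone before training, below is a minimal sketch (not tested against this exact Fluid version) of the clone-before-training approach the reviewers describe: the main program is cloned right after the forward network is built, so the clone contains only the inference ops, and the optimizer is appended afterwards to the default main program used for training.

```python
import paddle.v2 as paddle
import paddle.v2.fluid as fluid

# Forward part of fit_a_line (argument names may differ slightly
# between Fluid versions, e.g. dtype vs. data_type).
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1, act=None)
cost = fluid.layers.square_error_cost(input=y_predict, label=y)
avg_cost = fluid.layers.mean(x=cost)

# Clone BEFORE the optimizer is applied, so the clone holds only the
# forward (inference) ops and none of the backward/update ops.
inference_program = fluid.default_main_program().clone()

sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
sgd_optimizer.minimize(avg_cost)

# Training then runs fluid.default_main_program() as before, while the
# inference loop runs inference_program with fetch_list=[avg_cost, y_predict].
```

The placement matters because `clone()` copies whatever ops are already in the program at the time it is called, so cloning after `minimize()` would drag the backward and update ops into the inference program.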