Inference results visual comparison #316
Conversation
Results are written to a file after the benchmark completes

Signed-off-by: Igor Davidyuk <igor.davidyuk@intel.com>
Nice work! I left a few minor comments
geti_sdk/benchmarking/benchmarker.py
Outdated
writer.writerow(result_row)

# Write results to file
with open(results_file, "w", newline="") as csvfile:
The results are written to file after each benchmarking run, to avoid data loss in case of a crash. I'd prefer to keep it like that unless there's a very good reason to change it; the overhead is not that big, so in my opinion it's worth it to prevent data loss.
All parts of the code where we can get a crash are already inside the try context, so we would just be keeping the file open for an extended period. Anyway, I can revert this one if you think that's right.
Ah yeah, I see what you mean. Maybe just re-open the file after each run then, to avoid keeping it open the whole time.
done
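
For reference, a minimal sketch of the pattern agreed on here: re-open the results file in append mode after each benchmarking run, so the file is never held open for long but every completed run is already persisted if a later run crashes. The names run_benchmark, benchmark_settings, and results_file are illustrative only, not the actual Benchmarker internals.

import csv

results_file = "benchmark_results.csv"

def run_benchmark(settings):
    # Stand-in for a single benchmarking run.
    return [settings["name"], 42.0]

benchmark_settings = [{"name": "CPU"}, {"name": "GPU"}]

# Write the header once.
with open(results_file, "w", newline="") as csvfile:
    csv.writer(csvfile).writerow(["device", "fps"])

for settings in benchmark_settings:
    result_row = run_benchmark(settings)
    # Re-open per run: completed results survive a crash in any later run.
    with open(results_file, "a", newline="") as csvfile:
        csv.writer(csvfile).writerow(result_row)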
geti_sdk/benchmarking/benchmarker.py
Outdated
    workspace_id=self.geti.workspace_id,
    project=self.project,
)
sc_image = image_client.upload_image(numpy_image)
The method PredictionClient.predict_image will do image upload and prediction in one call, which could save a couple of lines of code here.
done!
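
For context, the suggested simplification looks roughly like this; the method names come from the diff and the review comment above, while the exact signatures and the objects in scope (image_client, prediction_client, numpy_image) are assumptions based on the surrounding benchmarker.py code.

# Before (as in the original diff): upload the image, then request a
# prediction for it in a separate step.
sc_image = image_client.upload_image(numpy_image)
# ... separate prediction request for sc_image here ...

# After: PredictionClient.predict_image uploads and predicts in one call.
prediction = prediction_client.predict_image(numpy_image)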
Awesome, looks good to me! Feel free to merge when you're ready.
This PR introduces the compare_predictions method to the Benchmarker class, to visually compare deployments' predictions on a grid.

TODO:
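
A hypothetical usage sketch of the new method follows; the Geti constructor arguments are standard geti-sdk usage, but the Benchmarker and compare_predictions parameter names shown here are illustrative assumptions, not the exact signatures.

from geti_sdk import Geti
from geti_sdk.benchmarking import Benchmarker

# Connect to the Geti server (host and token are placeholders).
geti = Geti(host="https://your-geti-server", token="<api-token>")

# Constructor arguments besides geti are assumed for illustration.
benchmarker = Benchmarker(geti=geti, project="my-project")

# Renders the predictions from each configured deployment side by side
# on a single image grid for visual comparison.
benchmarker.compare_predictions(image="path/to/image.jpg")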