Add performance metrics in report.py #515
Conversation
Codecov Report

Coverage Diff:
             master     #515      +/-
- Coverage   84.92%   84.91%   -0.02%
  Files         200      200
  Lines       26383    26401      +18
+ Hits        22407    22419      +12
- Misses       3976     3982       +6
Commit message: …nytimepddl engines and used this time in the report to print the overhead %
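For context, a minimal sketch of how the overhead percentage mentioned in that commit could be computed from the total and internal times; the function name and the exact formula are assumptions, as the conversation does not show the actual implementation:

```python
def overhead_percent(total_time: float, internal_time: float) -> float:
    """Share of the wall-clock time spent outside the engine itself.

    One plausible definition; the exact formula used in report.py is not
    shown in this conversation.
    """
    return (total_time - internal_time) / total_time * 100.0


print(f"{overhead_percent(1.25, 1.00):.1f}%")  # -> "20.0%"
```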
LGTM! I just left a very minor question.
The code looks great and I just pinpointed two minor things.
However, I am wondering whether the output is appropriate as it is.
Keeping in mind that the report is primarily intended for engine developers to check their integration, the new Ok(<1s)
field in the output is neither very helpful for them nor very informative in general (it checks some fairly ad hoc rules).
By default, I would suggest adding only the internal engine time (if available), which can be really helpful in identifying performance problems on the UP or engine side. Something like this, maybe:
runtime_report = "{:.3f}s ({:.3f}s)".format(total_time, internal_time).ljust(30)
If the additional field is needed for the evaluation, its output can be activated by a command line option or environment variable.
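For illustration, a minimal sketch of the suggested behaviour, handling the case where an engine does not report an internal time; `total_time` and `internal_time` are the names from the snippet above, while the helper name and column width are assumptions:

```python
from typing import Optional


def format_runtime(total_time: float, internal_time: Optional[float]) -> str:
    """Wall-clock time, plus the engine-internal time in parentheses when
    the engine reports one; padded so the report columns stay aligned."""
    if internal_time is None:
        return "{:.3f}s".format(total_time).ljust(30)
    return "{:.3f}s ({:.3f}s)".format(total_time, internal_time).ljust(30)


print(format_runtime(1.234, 1.100))  # "1.234s (1.100s)" padded to 30 chars
print(format_runtime(0.050, None))   # "0.050s" padded to 30 chars
```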
Thanks @arbimo for the feedback! I agree with your comments, and I added a command line option to print the info for the evaluation report, which is needed in the deliverable!
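A possible shape for such a switch, sketched with argparse; the option name `--deliverable-metrics` and the surrounding code are assumptions rather than the actual report.py interface:

```python
import argparse

parser = argparse.ArgumentParser(description="Engine integration report")
parser.add_argument(
    "--deliverable-metrics",
    action="store_true",
    help="Also print the ad hoc evaluation fields (e.g. the Ok(<1s) check).",
)
args = parser.parse_args()

# The extra field is only emitted when explicitly requested.
if args.deliverable_metrics:
    print("Ok(<1s)")
```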