Fixes multiple inclusions of the current frame in aggregate commands … #294
Conversation
Thank you @jmkerr! Can you provide tests for this fix, please?
Thank you for Watson, I'm happy to help! I'm not exactly sure where the deduplication is best handled. For now, this is a simple and fast fix.
Maybe instead we should remove the "current" frame at the end of the report function? It looks like a side effect.
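To make that concrete, here is a minimal, generic sketch of the side-effect-free shape being suggested (hypothetical code, not Watson's actual report() implementation): the current frame goes into a local list used only for aggregation, so the shared frames container is never modified and there is nothing to clean up afterwards.

```python
# Hypothetical sketch of the pattern under discussion -- not Watson's code.
def build_report(frames, current_frame=None, include_current=False):
    # Work on a local copy so the caller's frame list is never mutated and
    # repeated calls cannot accumulate duplicates of the current frame.
    selected = list(frames)
    if include_current and current_frame is not None:
        selected.append(current_frame)
    total = sum((f['stop'] - f['start']).total_seconds() for f in selected)
    return {'time': total, 'frames': selected}
```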
It makes sense indeed.
You're right. I have changed it to always initialize …
tests/test_watson.py (outdated)

    _ = watson.report(arrow.now(), arrow.now(), current=True)
    watson.stop()

    assert watson.frames['id'].count('current') == 0
It looks like you're testing the implementation here (i.e. the current frame has been deleted after report() finishes), but maybe it would be even more useful to test the actual report data returned?
The question I'd like to get answered by the test is this: are we sure that the current frame is not being deleted too soon in the report() method?
The watson.py implementation looks fine to me 👍
I'll try to explain my point further: I think you should rely solely on Watson class methods to test your assumptions about the behavior of the Watson class. As I see it, by accessing the Frames object and asserting on its properties, you're effectively testing an implementation detail of the Watson class and adding complexity to the test itself.
What I would have done here is call watson.report() two or more times and verify that you always get the expected report results. What do you think?
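A rough sketch of what such a test could look like, relying only on the returned report data. It assumes a hypothetical fixture, watson_with_hour_old_frame, providing a Watson object whose only activity is a current frame on project 'foo' started about an hour ago; the name and setup are illustrative, not part of the existing test suite.

```python
import arrow

def test_report_does_not_accumulate_current_frame(watson_with_hour_old_frame):
    watson = watson_with_hour_old_frame
    span = (arrow.now().shift(days=-1), arrow.now())
    first = watson.report(*span, current=True)
    second = watson.report(*span, current=True)

    # Only the returned report data is inspected -- no Frames internals.
    assert [p['name'] for p in second['projects']] == ['foo']
    # If report() leaked the current frame into the frames container, the
    # second report would gain roughly an extra hour for 'foo'.
    assert second['projects'][0]['time'] - first['projects'][0]['time'] < 60
```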
Thank you @davidag. Since this commit changes the implementation, it should include the tests you mentioned. The tests now assert whether the current frame is included in the report.
tests/test_watson.py (outdated)

    )
    assert len(report['projects']) == 0

    watson.stop()
Finally, something I missed in my initial review: this seems like a great opportunity to split test_report() into multiple tests with more meaningful names. For example, your new tests could go in a method named test_report_current().
Thanks again @davidag. You're right about not explicitly testing for side effects, but only the method's return value. The only way to detect the error in a report is by verifying the reported time. I've updated the test (in a new function) to:
    )
    assert len(report['projects']) == 1
    assert report['projects'][0]['name'] == 'foo'
    assert report['projects'][0]['time'] == pytest.approx(3600, rel=1e-2)
I don't think you need to use pytest.approx(), as there are no floating-point numbers involved. In fact, asserting that time equals 3600 is equivalent to what you did, since we are working with a granularity of seconds (ints). This would lead to a flaky test: if a second ticked over between building the Watson object and calling the second watson.report(), the assertion would fail.
A possible solution could be to assert that time is less than 3600 * 2, which is enough to test that the time is not duplicated. Any other ideas are more than welcome!
Finally, I would use one minute (60) instead of one hour (3600) to reduce the complexity of the test (unless there is a reason to use hours that I didn't notice).
It seems we're close to finishing this PR 🎉
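Side by side, the assertion from the diff above and the suggested tolerance (with the proposed switch from one hour to one minute, 3600 would become 60):

```python
# As written in the diff above: a relative tolerance around the expected hour.
assert report['projects'][0]['time'] == pytest.approx(3600, rel=1e-2)

# Suggested alternative: only rule out double-counting of the current frame,
# which is exactly what the original bug would produce.
assert report['projects'][0]['time'] < 3600 * 2
```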
report['projects'][0]['time'] is (like timedelta.total_seconds()) floating-point-valued with microsecond precision, and it usually takes a few milliseconds until the second report is generated.
The flakiness should be just as negligible at the current 36-second tolerance as it would be at a 3600-second tolerance in an automated test, but it could be reduced by mocking arrow.now and arrow.utcnow.
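A sketch of that mocking idea, assuming the watson fixture provides a Watson object with no frames and no running project, that the Watson.start() method behaves as in the rest of the test suite, and that watson.py obtains timestamps through the module-level arrow.now()/arrow.utcnow() calls (if it doesn't, the patch targets would need adjusting):

```python
from unittest import mock

import arrow

def test_report_is_deterministic_with_frozen_time(watson):
    watson.start('foo')
    frozen = arrow.utcnow()
    with mock.patch('arrow.now', return_value=frozen), \
         mock.patch('arrow.utcnow', return_value=frozen):
        span = (frozen.shift(days=-1), frozen)
        first = watson.report(*span, current=True)
        second = watson.report(*span, current=True)
        # With arrow frozen, any difference between the two reports could
        # only come from the current frame leaking into the frames container.
        assert first == second
```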
> report['projects'][0]['time'] is (like timedelta.total_seconds()) floating-point-valued with microsecond precision, and it usually takes a few milliseconds until the second report is generated.
I didn't realize that, sorry!
I supposed they were seconds, because that's what is printed in the terminal. Nevertheless, it's the utils.format_timedelta() function that removes the millisecond information: seconds = int(delta.total_seconds()). 👍
> The flakiness should be just as negligible at the current 36 seconds as it is at a 3600-second tolerance in an automated test
You're completely right, there's no flakiness.
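A quick illustration of that truncation, with the key expression taken from the quoted line (the rest is just a standalone demo):

```python
from datetime import timedelta

delta = timedelta(seconds=3599, microseconds=750000)
print(delta.total_seconds())       # 3599.75 -- float with microsecond precision
print(int(delta.total_seconds()))  # 3599 -- the truncated value format_timedelta() works with
```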
One more thing: could you add a new entry to CHANGELOG.md?
…293)
- Inclusion of the current frame in report() does not persistently add it to the Frames container.
- Added test
I've rebased and added a line to CHANGELOG.md.
This looks fine to me. What do you think, @jmaupetit?
Yup. Will merge this, thank you all! 🙏
Possible solution for #293.