
refactor mark evaluators #2773

Merged 7 commits into pytest-dev:features on Oct 9, 2017

Conversation

RonnyPfannschmidt
Member

No description provided.

@The-Compiler
Member

Wrong branch?

@RonnyPfannschmidt RonnyPfannschmidt changed the base branch from master to features September 13, 2017 10:39
@RonnyPfannschmidt
Member Author

yup, thanks for the note

@RonnyPfannschmidt
Member Author

This PR will break some of the insane mark behaviour; some unclear behaviour will still remain afterward.

The mark evaluator operating on merged marks is a totally and utterly incorrect mess; dissolving it breaks stuff
and leaves questions open.

This one is entirely @hpk42's fault.

@nicoddemus
Member

The mark evaluator operating on merged marks is a totally and utterly incorrect mess; dissolving it breaks stuff
and leaves questions open.

You mean that we have the "merged marks" where keywords contain all marks, but put together in a dict?

@@ -0,0 +1,2 @@
* remove the internal optional multi-typed attribute _evalskip
Member

typo: replace

Also please use correct grammar and formatting so we show users a well formatted changelog. 😉

Also no need to add a `*`; towncrier adds it for us automatically.

Remove the internal multi-typed attribute ``Node._evalskip`` and replace it with the boolean ``Node._skipped_by_mark``.

Member

@RonnyPfannschmidt I was thinking this whole mark refactoring deserves a blog post on pytest's blog, to explain the pain points of the current design, our attempts to fix it and how it might affect users. People will be much more understanding if they know the reasons of why their test suite broke.

Member

The more I understand the underlying issues the more I think a blog post or article in the docs is needed. 😬

Member

The changelog is still the same, what do you think about my suggestion above @RonnyPfannschmidt?

@@ -155,17 +155,17 @@ def getexplanation(self):
@hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
# Check if skip or skipif are specified as pytest marks

item._skipped_by_mark = False
Member

Should we make this part of the public API? Often users want to know if a test has been skipped in some hook, and the only way to do it currently is using the awkward:

was_skipped = hasattr(item, "_evalskip") and item._evalskip.istrue()

We might as well make it item.skipped_by_mark.

Member

While on that, it might also be useful to have the reason attached to it. If so, perhaps we should use a frozen attrs instance which also contains the reason? That makes it possible to add more information to it in the future:

item.skip_outcome.skipped == True
item.skip_outcome.reason == "skipped for some reason"
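The suggested frozen outcome object could be sketched with stdlib dataclasses standing in for attrs; the name `SkipOutcome` and its fields are illustrative assumptions here, not pytest API:

```python
from dataclasses import dataclass


# Hypothetical sketch of the suggested frozen skip-outcome object.
# Name and fields are assumptions for illustration, not pytest API.
@dataclass(frozen=True)
class SkipOutcome:
    skipped: bool
    reason: str = ""


outcome = SkipOutcome(skipped=True, reason="skipped for some reason")
print(outcome.skipped, outcome.reason)
```

Being frozen, the instance rejects mutation after creation, which keeps the outcome record trustworthy across hooks while still allowing new fields to be added later.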

Member Author

I'm not going to introduce a new API in a refactoring/fixup attempt;
that's for a follow-up. Please open an issue about better communicating outcomes.

Member

I understand the sentiment, but by renaming it to _skipped_by_mark you will break code which currently looks at this attribute to see if a test was skipped (for example IDEs, see #1613). So IDE and plugin authors which use this to check if a test was skipped will now have to test for _evalskip and _skipped_by_mark to comply. And then when a new version with a public API comes around, we will have another way to check for that.

Just pointing out we will bring a lot of pain to those people. Perhaps we can brainstorm a little on how the "skip" plugin decorates items with skip information?

Member Author

It's a private attribute and stays a private attribute,
so I'm not going to protect them too much from that use; that's what they should have CI for.

Member

OK then, I respect your decision

@RonnyPfannschmidt
Member Author

The MarkInfo objects that merge the args and kwargs of marks generate a lot of bugs, but also create an implied behaviour (it's why marks from different levels partially override each other).

return self.item.keywords.get(self.name)
self._marks = None
self._mark = None
self._repr_name = name
Member

Perhaps this should be self._mark_name? Seems clearer to me. "repr" is too close to Python's own __repr__ which might be confusing.


if self._marks:
self.result = False
# "holder" might be a MarkInfo or a MarkDecorator; only
Member

This comment is no longer relevant it seems.

# MarkInfo keeps track of all parameters it received in an
# _arglist attribute
for mark in self._marks:
self._mark = mark
Member

self._mark will point to the last mark in self._marks, seems a little arbitrary to set this here. Is this by design? Otherwise it should be set in self._get_marks() instead to make it explicit that this is intended and doesn't depend on _istrue being called.

Member Author

It points to the mark that had the first true expression. This mess comes from merging marks and remembering expressions, then using attributes from the merged marks.

So if you had 2 marks with different expressions, and one had run=False, all would have run=False implied.
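The leak described above can be simulated with plain dicts standing in for the merged kwargs (the mark contents are made up for illustration):

```python
# Two skipif-style marks from different levels, each with its own kwargs.
mark_module = {"condition": "sys.platform == 'win32'", "run": False}
mark_function = {"condition": "slow_enabled"}  # never specified run=

# Old-style merging folds all kwargs into a single dict, so run=False
# from the module-level mark is now implied for the function-level one
# too, and the module-level condition is silently overwritten.
merged = {}
merged.update(mark_module)
merged.update(mark_function)

print(merged)  # {'condition': 'slow_enabled', 'run': False}
```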

Member

Ouch


def get(self, attr, default=None):
return self.holder.kwargs.get(attr, default)
if self._mark is None:
Member

Shouldn't this loop over all marks trying to get the attr (which is really a kwarg actually), returning default if no marks have it?

Can we rename it to find_kwarg instead for clarity btw? To actually understand what this does I had to add an assert 0 and run the tests to figure out where it is called (because you can't actually do a search for "get").

Member

To exemplify:

    def find_kwarg(self, kwarg, default=None):
        for mark in self._get_marks():
            if kwarg in mark.kwargs:
                return mark.kwargs[kwarg]
        return default

With this we can even get rid of self._mark.

Member Author

If one mark has run=True and another one doesn't, we get wrong results. Welcome to why marks are so horrific.

Member

I see, thanks for the explanation.

In that case, I suggest we change this to return all the kwarg values, and then the caller is responsible for deciding what to do with them.

    def get_kwarg_values(self, kwarg):
        return [mark.kwargs[kwarg] for mark in self._get_marks() if kwarg in mark.kwargs]

I don't like how implicit this is. If you don't like this suggestion, at least I would rename self._mark to something clear, albeit ugly, like self._first_true_mark or something.
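A minimal runnable sketch of the two helpers discussed in this thread; `Mark` and `Evaluator` here are stand-ins for pytest's internals, not the real classes:

```python
class Mark:
    """Stand-in for pytest's internal mark objects."""

    def __init__(self, **kwargs):
        self.kwargs = kwargs


class Evaluator:
    def __init__(self, marks):
        self._marks = marks

    def _get_marks(self):
        return self._marks

    def find_kwarg(self, kwarg, default=None):
        # First mark wins: returns the value from the first mark that has it.
        for mark in self._get_marks():
            if kwarg in mark.kwargs:
                return mark.kwargs[kwarg]
        return default

    def get_kwarg_values(self, kwarg):
        # Return every value so the caller decides how to combine them.
        return [mark.kwargs[kwarg]
                for mark in self._get_marks()
                if kwarg in mark.kwargs]


ev = Evaluator([Mark(run=True), Mark(reason="slow"), Mark(run=False)])
print(ev.find_kwarg("run"))        # True
print(ev.get_kwarg_values("run"))  # [True, False]
```

The contrast makes the conflict explicit: `find_kwarg` silently picks a winner, while `get_kwarg_values` exposes both run=True and run=False so the caller must decide.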

Member Author

Well, it started out as something much worse; I'll try to whip it into shape some more.

Member

And about this suggestion @RonnyPfannschmidt?

Member Author

@nicoddemus that breaks current usage inside pytest, at least.

So in order to go there I'd prefer to keep it as-is until we iterate again.



@RonnyPfannschmidt
Member Author

Note that when this one is finished it will break some test suites, but it will also fix a few bugs.

@RonnyPfannschmidt RonnyPfannschmidt changed the title [wip] refactor mark evaluators refactor mark evaluators Sep 15, 2017
@nicoddemus
Member

@RonnyPfannschmidt argh, sorry. I told you I had made a few comments, but those were "Pending" here on GH and I only realized my mistake just now. I've just published them, please take a look.

@coveralls

Coverage Status

Coverage decreased (-0.001%) to 92.501% when pulling 459cc40 on RonnyPfannschmidt:fix-markeval-2767 into e7a4d3d on pytest-dev:features.

@coveralls

Coverage Status

Coverage decreased (-0.001%) to 92.501% when pulling 8a6bdb2 on RonnyPfannschmidt:fix-markeval-2767 into e7a4d3d on pytest-dev:features.

@nicoddemus nicoddemus merged commit 059455b into pytest-dev:features Oct 9, 2017
@RonnyPfannschmidt RonnyPfannschmidt deleted the fix-markeval-2767 branch November 13, 2017 10:06