pytest plugin #27
Another nice thing would be if this plugin could somehow teach pytest about … I'm not sure if this is possible with pytest as it currently stands, though. Maybe something involving a custom collector with a custom …
(Once this is done then our tests could use some cleanup to take advantage... they're a bit gross right now. There's some duplication between our different ….)
Another idea: async fixtures, for when you need to … this mighhhht be really hard to implement, not sure
Possibly useful link: https://github.com/pytest-dev/cookiecutter-pytest-plugin
pytest-asyncio has async fixture support: https://github.com/pytest-dev/pytest-asyncio/blob/master/pytest_asyncio/plugin.py

I guess the general trick would be to stash these fixtures on the side somewhere, and then unpack them when entering the test itself. Maybe convert an async fixture into a sync fixture that yields an object of a special type that the trio runner recognizes and can call some methods on. The problem would be nested fixtures. I guess this might be solved by having the fixture objects memoize themselves internally and use the same argument pre-processing code as the tests themselves?
Here's the pytest docs with all the wacky things fixtures can do: https://docs.pytest.org/en/latest/fixture.html

Module- and session-scoped async fixtures don't make any sense, since we tear down the run loop on every test. Maybe the simplest thing is to skip the sneaky hook methods and define our own … Not sure what to do about … Everything else looks like it should work ok with the approach above. Probably this whole discussion of fixtures should be its own issue, because this certainly isn't a prerequisite to having a useful pytest-trio. But pytest-trio doesn't have an issue tracker yet :-)
Not sure what exactly you have in mind for this, but I could just port pytest-asyncio to trio for you. That would probably be easy, and it'd get you something, at least.
@Tinche: Oh, hello! Thanks for the offer! If you want to work on this then it'd be much appreciated for sure. Not having a good pytest plugin is pretty embarrassing for a project that prides itself on testing :-). I'm not sure how much of the details of pytest-asyncio will carry over, because trio doesn't have the concept of an explicit loop object. OTOH trio does consider testing to be a first-class consideration, so it provides a …

I was imagining that a v0.0.1 version of pytest-trio would basically just be this code pulled out into a standalone plugin, plus some configuration options added. (Probably: require that either a pytest.ini config variable be set to enable it globally, or else a per-test marker be applied.) That would already be enough to let trio start dogfooding the package ourselves, and also be super helpful to others... Then fancier things would be fixture support -- this is a bit tricky without explicit loop support, because you have to delay the actual fixture initialization until you start executing the test, but I think it's doable -- timeouts, helpers to detect things like ResourceWarning and "coroutine was never awaited", and so forth...... but we can figure that out later :-). What do you think?
Hm, I figure the absence of an event loop would just simplify things. When we want to run a collected test coroutine (i.e. async def, starts with "test_", either marked with pytest.mark.trio or with the global setting to opt out of the marker), just trio.run() it. Same with fixtures, although they too will require markers (and in pytest-asyncio too, I think, to avoid conflicts). Unsure how to handle the mock clocks; will brood on it. Maybe digging in the pytest request context and looking for a subclass of Clock is the easiest way, like in trio_test.
@Tinche: the problem with fixtures is that in general you need to run them under the same call to trio.run() as the test itself.
My preference would be to share code between …
Hm, why would we need to run the fixtures and the test under the same call? Honestly asking. Now that I think about it, my fixtures generally mock stuff or change the state of a database mock. I could run my fixtures under separate event loops, really. Maybe a few sample use cases would be good for further discussion.
In general, I'm reluctant to make any guarantees about objects created under one call to trio.run() … More concrete examples: …
So yeah, there are definitely cases where running fixtures under their own call to trio.run() …
Just FYI, I think running a background test server is a good example of an async fixture, and let's focus on making that happen. This of course means we can only support test-scoped async fixtures. I'm leaving on a trip soon, so I can't do any work on this until at least the last week of August.
Agreed.
Yeah, this is kind of a fundamental limitation of how trio's and pytest's architectures interact. Hopefully this is mostly OK. I think it should be? Test-scoped fixtures already cover a lot of the cases you'd want to cover (I don't think I've ever used anything else, though I understand there are use cases where the loss of test isolation is worth it), and you can still use non-async fixtures (including fixtures that use a call to trio.run() …).
Cool, have a great trip! Either something will happen on this before you get back or it won't, and either way I'm sure there will be plenty more to think about :-)
I'm playing with an "async fixture" idea for my trio port of aioamqp. The connection is an async context that's registered as a standard pytest fixture. Thus I have … in my regular code.

The fixture function creates the connection object and returns a coroutine for connecting: …

The result is used quite naturally in tests, like …

The magic which connects all of this happens in conftest.py: …

Of course in a real implementation you'd check the arguments with iscoroutine() instead of by name, check whether the result of the … This idea doesn't support async fixtures as arguments to other fixtures – that would require some major pytest surgery, I'm afraid.
Hi, I've been working on a pytest-trio plugin: https://gist.github.com/touilleMan/9402d2b7f93506ff370675265f6af442

A few points about it: …
I'm thinking a solution to points 4 and 6 could be to oblige the user to decorate a trio fixture:

```python
@triofixture
@pytest.fixture
async def my_trio_fixture():
    return await whatever()
```

This way the … We could also patch the …

@njsmith I think you should register on PyPI a `pytest-trio` project …
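One hypothetical shape for that decorator: tag the fixture function with an attribute so the plugin can tell "async fixture to resolve under trio" apart from an ordinary fixture. The attribute name is invented, not the gist's actual implementation:

```python
# Hypothetical sketch of @triofixture: mark the function so plugin-side
# code can detect trio fixtures.  The _trio_fixture attribute name is
# an invented convention, not the real plugin's API.
def triofixture(fn):
    fn._trio_fixture = True  # plugin-side code would check this flag
    return fn


def is_trio_fixture(fn):
    return getattr(fn, "_trio_fixture", False)


@triofixture
async def my_trio_fixture():
    return 42


async def ordinary_fixture():
    return 0


print(is_trio_fixture(my_trio_fixture), is_trio_fixture(ordinary_fixture))
# → True False
```

An explicit marker like this avoids guessing from the function's name or return type, which is exactly the fragility the iscoroutine()-based approach above runs into.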
@touilleMan Sorry for the slow follow-up here; I've been swamped at work and then was trying to focus on getting the long-delayed 0.2.0 release out. This is super cool. I have some specific comments, but probably they're best handled in follow-ups... my suggestion for right now is we should take what you have and stick it in a repo under the python-trio organization.
@touilleMan I see that a repo has appeared! In case it's useful, I just created cookiecutter-trio, which is designed to help do initial bootstrapping of new projects like this. |
@njsmith Thanks a lot for adding me to the organisation! I've created the pytest-trio repo, set up the continuous integration (Travis, AppVeyor and Coveralls) and created a project on PyPI. BTW I've configured Travis to automatically do the release to PyPI when doing CI on a tag; I think it's a really convenient feature given it allows anyone with tag rights to release a new version of the project. Maybe you should consider this for the main trio project?
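The deploy-on-tag setup described is roughly this kind of `.travis.yml` fragment. The account name is illustrative and the encrypted value is a placeholder; the real config lives in the repo:

```yaml
# Sketch of a Travis "release to PyPI on tag" deploy section.
# User name is hypothetical; the secure value comes from `travis encrypt`.
deploy:
  provider: pypi
  user: pytest-trio-release        # illustrative PyPI account name
  password:
    secure: "<output of travis encrypt>"
  distributions: sdist bdist_wheel
  on:
    tags: true                     # only deploy when CI runs on a tag
```

With `on: tags: true`, pushing an annotated tag is all it takes to cut a release, which is what makes it usable by anyone with tag rights.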
@njsmith indeed I used your cookiecutter template for the project ^^ It seems there is a problem with intersphinx though: https://trio.readthedocs.io/objects.inv doesn't point to a valid resource
I guess we can close this now, and move discussions over to ✨the new repo✨ |
Extract what we've got into a proper standalone plugin. (Or alternatively make the regular `trio` package include an entry point? But probably it is better to decouple them a bit, in case we ever have to make an emergency release due to pytest breaking something -> don't want to force people to upgrade trio itself for that.)