
pytest plugin #27

Closed
njsmith opened this issue Jan 22, 2017 · 21 comments

@njsmith
Member

njsmith commented Jan 22, 2017

Extract what we've got into a proper standalone plugin

(Or alternatively make the regular trio package include an entry point? But probably it is better to decouple them a bit, in case we ever have to make an emergency release due to pytest breaking something -> don't want to force people to upgrade trio itself for that.)

@njsmith
Member Author

njsmith commented Feb 20, 2017

Another nice thing would be if this plugin could somehow teach pytest about MultiErrors. (It's not unusable right now, because pytest AssertionErrors do contain the crucial information in their repr, and MultiError shows the repr of all the embedded exceptions. But it would be better if the traceback pytest machinery could play along properly.)

I'm not sure if this is possible with pytest as it currently stands, though. Maybe something involving a custom collector with a custom repr_failure method?
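To make the "custom repr_failure" idea concrete, here is a rough sketch of what the reporting side could look like. It uses a stand-in `FakeMultiError` class rather than trio's real MultiError, and `repr_failure` here is just a free function modeled on the method a custom collector would override, so everything below is illustrative:

```python
# Sketch: flatten a MultiError-style tree of exceptions so each leaf
# can be reported on its own line. FakeMultiError stands in for
# trio.MultiError to keep the sketch self-contained.

class FakeMultiError(Exception):
    def __init__(self, exceptions):
        self.exceptions = exceptions

def flatten_exceptions(exc):
    """Yield every leaf exception inside a (possibly nested) MultiError."""
    if isinstance(exc, FakeMultiError):
        for sub in exc.exceptions:
            yield from flatten_exceptions(sub)
    else:
        yield exc

def repr_failure(exc):
    """Build the kind of multi-line report a custom collector's
    repr_failure method might hand back to pytest."""
    leaves = list(flatten_exceptions(exc))
    return "\n".join(f"- {type(e).__name__}: {e}" for e in leaves)

err = FakeMultiError([ValueError("bad"),
                      FakeMultiError([KeyError("missing"), TypeError("nope")])])
print(repr_failure(err))
```

The real work would be wiring this into pytest's collection machinery so that each leaf gets a proper traceback rather than a one-line repr.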

@njsmith
Member Author

njsmith commented Mar 10, 2017

(Once this is done, our tests could use some cleanup to take advantage... they're a bit gross right now. There's some duplication between our different conftest.py's, and they're packaged with the source but can't actually be run by pytest --pyargs trio, ...)

@njsmith
Member Author

njsmith commented Apr 21, 2017

Another idea: async fixtures, for when you need to await or otherwise access the trio context during fixture setup/teardown

this might be really hard to implement, not sure

@njsmith
Member Author

njsmith commented Jul 4, 2017

Possibly useful link: https://github.com/pytest-dev/cookiecutter-pytest-plugin

@njsmith
Member Author

njsmith commented Jul 4, 2017

pytest-asyncio has async fixture support: https://github.com/pytest-dev/pytest-asyncio/blob/master/pytest_asyncio/plugin.py

I guess the general trick would be to stash these fixtures on the side somewhere, and then unpack them when entering the test itself.

Maybe convert an async fixture into a sync fixture that yields an object of a special type that the trio runner recognizes and can call some methods on.

The problem would be nested fixtures. I guess this might be solved by having the fixture objects memoize themselves internally and use the same argument pre-processing code as the tests themselves?
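The stash-and-unpack trick described above could look roughly like this. All names are invented for illustration, and the tiny `run` trampoline merely drives a coroutine that never actually blocks, standing in for trio's runner:

```python
# Sketch: an async fixture becomes a marker object that the trio runner
# recognizes and resolves inside the test's own run. The marker memoizes
# its value so nested fixtures that share it get the same instance.

class AsyncFixtureMarker:
    def __init__(self, async_fn):
        self.async_fn = async_fn
        self._resolved = False
        self._value = None

    async def resolve(self):
        if not self._resolved:          # memoize: nested fixtures share one value
            self._value = await self.async_fn()
            self._resolved = True
        return self._value

def run(coro):
    # Stand-in for trio.run, good enough for coroutines that never block.
    try:
        coro.send(None)
    except StopIteration as exc:
        return exc.value

async def make_connection():
    return {"connected": True}

marker = AsyncFixtureMarker(make_connection)

async def test_body():
    conn = await marker.resolve()       # unpacked inside the same "run"
    conn2 = await marker.resolve()      # memoized: same object both times
    assert conn is conn2
    return conn

result = run(test_body())
```

The memoization is what lets a chain of fixtures all resolve the same dependency to one shared object, mirroring pytest's own caching of sync fixture values.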

@njsmith
Member Author

njsmith commented Jul 4, 2017

Here's the pytest docs with all the wacky things fixtures can do: https://docs.pytest.org/en/latest/fixture.html

Module and session scoped async fixtures don't make any sense, since we tear down the run loop on every test. Maybe the simplest thing is to skip the sneaky hook methods and define our own pytest_trio.async_fixture decorator.

Not sure what to do about addfinalizer, maybe it just doesn't support async finalizers and that's ok.

Everything else looks like it should work ok with the approach above.

Probably this whole discussion of fixtures should be its own issue, because this certainly isn't a prerequisite to having a useful pytest-trio. But pytest-trio doesn't have an issue tracker yet :-)
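A `pytest_trio.async_fixture` decorator of the kind floated above might enforce the function-scope restriction at definition time. This is an invented API, not real pytest-trio code:

```python
# Hypothetical shape of an async_fixture decorator: accept only function
# scope (since the run loop is torn down per test) and reject anything
# else up front. Invented names throughout.

import inspect

def async_fixture(scope="function"):
    if scope != "function":
        raise ValueError(
            "async fixtures must be function-scoped: the trio run loop "
            "is torn down after every test"
        )
    def decorator(fn):
        if not inspect.iscoroutinefunction(fn):
            raise TypeError("async_fixture requires an async def function")
        fn._trio_fixture = True   # tag for the runner to pick up later
        return fn
    return decorator

@async_fixture()
async def my_fixture():
    return 42
```

Failing fast here would turn the "module-scoped async fixture silently yields a coroutine" footgun into an explicit error message.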

@Tinche

Tinche commented Jul 26, 2017

Not sure what exactly you have in mind for this, but I could just port pytest-asyncio to trio for you. That would probably be easy and it'd get you something, at least.

@njsmith
Member Author

njsmith commented Jul 26, 2017

@Tinche: Oh, hello! Thanks for the offer! If you want to work on this then it'd be much appreciated for sure. Not having a good pytest plugin is pretty embarrassing for a project that prides itself on testing :-).

I'm not sure how much of the details of pytest-asyncio will carry over, because trio doesn't have the concept of an explicit loop object. OTOH trio does consider testing to be a first-class consideration, so it provides a @trio.testing.trio_test decorator out-of-the-box that's suitable to use with any testing framework (currently a bit rudimentary but it'd be nice to improve it further).

I was imagining that a v0.0.1 version of pytest-trio would basically just be this code pulled out into a standalone plugin, plus some configuration options added. (Probably: require that either a pytest.ini config variable be set to enable it globally, or else a pytest.mark.trio be applied, the latter intended for projects that might have some tests that use trio, some that use asyncio, etc.)

That would already be enough to let trio start dogfooding the package ourselves, and also be super helpful to others...

Then fancier things would be fixture support -- this is a bit tricky without explicit loop support, because you have to delay the actual fixture initialization until you start executing the test, but I think it's doable -- timeouts, helpers to detect things like ResourceWarning and "coroutine was never awaited", and so forth...... but we can figure that out later :-). What do you think?

@Tinche

Tinche commented Jul 27, 2017

Hm, I figure the absence of an event loop would just simplify things. When we want to run a collected test coroutine (i.e. an async def that starts with "test_", either marked with pytest.mark.trio or covered by the global setting that opts out of the marker), just trio.run() it. Same with fixtures, although they too will require markers (as in pytest-asyncio too, I think, to avoid conflicts).

Unsure how to handle the mock clocks. Will brood on it. Maybe digging into the pytest request context and looking for a subclass of Clock is the easiest way, like in trio_test.

@njsmith
Member Author

njsmith commented Jul 27, 2017

@Tinche: the problem with fixtures is that in general you need to run them under the same call to run as the main test -- trio still has per-run state, it's just not materialized as an explicit loop object. Think of it like, each time you call run you get a new loop, and you want to use the same loop for fixtures and the test. Mostly not having loop objects makes things simpler, but this is the rare case where they actually would be kinda handy...

Unsure how to handle the mock clocks. Will brood on it. Maybe digging into the pytest request context and looking for a subclass of Clock is the easiest way, like in trio_test.

My preference would be to share code between trio_test and pytest-trio as much as possible? This isn't asyncio -- we can change trio to make pytest-trio's job easier :-).
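The "each time you call run you get a new loop" point can be illustrated with a toy model. This is not trio's actual implementation, just the shape of the constraint:

```python
# Toy model of per-run state: objects remember which "run" created them
# and refuse to work under a different one. Not trio's real internals.

_current_run = None

class Run:
    def __enter__(self):
        global _current_run
        _current_run = self
        return self
    def __exit__(self, *exc):
        global _current_run
        _current_run = None

class Lock:
    def __init__(self):
        self._run = _current_run    # bound to the run that created it
    def acquire(self):
        if _current_run is not self._run:
            raise RuntimeError("Lock used under a different run")

with Run():
    lock = Lock()
    lock.acquire()                  # fine: same run

with Run():
    try:
        lock.acquire()              # different run: rejected
    except RuntimeError as exc:
        print(exc)
```

If fixtures ran under their own trio.run calls, everything they hand to the test would be in the second-run position here.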

@Tinche

Tinche commented Jul 27, 2017

Hm, why would we need to run the fixtures and the test under the same call? Honestly asking.

Now that I think about it, my fixtures generally mock stuff or change the state of a database mock. I could run my fixtures under separate event loops really.

Maybe a few sample use cases would be good for further discussion.

@njsmith
Member Author

njsmith commented Jul 27, 2017

In general, I'm reluctant to make any guarantees about objects created under one call to trio.run being usable under a different call. I think that right now it technically is possible to move a trio.socket.socket or trio.Lock from one "loop" to another, but it's not guaranteed and it would make me nervous if people started relying on that.

More concrete examples:

  • Suppose that a fixture sets up a background task running an HTTP server for the main test to issue requests against. Obviously this background task needs to run on the same loop as the main test.

  • I've just been working on the infrastructure to let you drop in a fake virtualized network for reliable testing of edge cases (Mock network for testing #170). This works by letting you replace trio's regular socket objects with a custom class within a particular run context. A natural way to use this would be as a fake_network fixture. But obviously this fixture then needs to be set up in the same run as the test and all its fixtures...

So yeah, there are definitely cases where running fixtures under their own trio.run calls is fine, but then there are these other cases where it's maybe fine, maybe not, and cases where it just doesn't work at all, and if possible I'd like to avoid forcing users to be aware of these details. Putting everything under the same call to run is definitely harder to implement, maybe impossible, but if we can do it then the UX is way better.

@Tinche

Tinche commented Aug 2, 2017

Just FYI I think running a background test server is a good example of an async fixture, and let's focus on making that happen. This of course means we can only support test-scoped async fixtures.

I'm leaving on a trip soon so I can't do any work on this until at least the last week of August.

@njsmith
Member Author

njsmith commented Aug 4, 2017

Just FYI I think running a background test server is a good example of an async fixture, and let's focus on making that happen.

Agreed.

This of course means we can only support test-scoped async fixtures.

Yeah, this is kind of a fundamental limitation of how trio and pytest's architectures interact. Hopefully this is mostly OK. I think it should be? Test-scoped fixtures already cover a lot of the cases you'd want to cover (I don't think I've ever used anything else, though I understand there are use cases where the loss of test isolation is worth it), and you can still use non-async fixtures (including fixtures that use a call to trio.run internally, it's just that you can't get the fixture code and the test code to use the same call to trio.run with multi-test fixtures).

I'm leaving on a trip soon so I can't do any work on this until at least the last week of August.

Cool, have a great trip! Either something will happen on this before you get back or it won't, and either way I'm sure there will be plenty more to think about :-)

@smurfix
Contributor

smurfix commented Oct 25, 2017

I'm playing with an "async fixture" idea for my trio port of aioamqp. The connection is an async context that's registered as a standard pytest fixture.

Thus I have

class AmqpConnection:
    async def setup_connection(self, …):
        …
        return self
    async def __aenter__(self):
        …
        return self
    async def do_something(self):
        …
    async def __aexit__(self, …):
        …

in my regular code.

The fixture function creates the connection object and returns a coroutine for connecting:

@pytest.fixture
def amqp():
    conn = AmqpConnection()
    return conn.setup_connection(…)

The result is used quite naturally in tests, like

class TestWhatever:
    async def test_foo(self, amqp):
        await amqp.do_something()

The magic which connects all of this happens in conftest.py:

import inspect

import pytest
import trio

class Runner:
    def __init__(self, proc):
        self.proc = proc

    def __call__(self, *a, **k):
        return trio.run(self.resolve, a, k)

    async def resolve(self, a, k):
        amqp = k.get('amqp', None)
        if amqp is not None:
            amqp = await amqp
            async with amqp as conn:
                k['amqp'] = conn
                return await self.proc(*a, **k)
        else:
            return await self.proc(*a, **k)

@pytest.hookimpl(tryfirst=True)
def pytest_pyfunc_call(pyfuncitem):
    if inspect.iscoroutinefunction(pyfuncitem.obj):
        pyfuncitem.obj = Runner(pyfuncitem.obj)

Of course in a real implementation you'd check the arguments with iscoroutine() instead of by name, check whether the result of the await is an async context manager instead of blindly assuming that, and stack any async context managers you thus find … but you get the idea.

This idea doesn't support async fixtures as arguments to other fixtures – that would require some major pytest surgery, I'm afraid.

@touilleMan
Member

touilleMan commented Nov 15, 2017

Hi,

I've been working on a pytest-trio plugin: https://gist.github.com/touilleMan/9402d2b7f93506ff370675265f6af442

Few points about it:

  1. It is inspired by the pytest-asyncio module and trio.testing.trio_test
  2. an async test function marked with pytest.mark.trio will be run in a trio.run context
  3. the async fixtures are also resolved within the trio.run context. This is done recursively, so you can have a fixture (async or sync) which depends on an async fixture \o/
  4. As discussed here, async fixtures only work function-scoped. If a module or class scoped fixture is declared async, a coroutine is provided as the fixture value :'(
  5. I stole the unused_tcp_port and unused_tcp_port_factory fixtures from pytest-asyncio given they are really useful. However this means those fixtures are defined twice if both pytest plugins are enabled at the same time.
  6. speaking of pytest-asyncio, it considers any async fixture to be run with asyncio. So if its pytest_fixture_setup hook runs before pytest-trio's (not sure how pytest decides which plugin executes first, but that's what happened during my tests), it causes a crash. So right now pytest-asyncio is not compatible with this pytest-trio
  7. I've added a basic exception handler to take care of trio.MultiError and print it properly. However this needs to be improved (the exception is returned as a string to pytest, which then cannot add color to it).

I'm thinking a solution to points 4 and 6 could be to oblige the user to decorate a trio fixture:

@triofixture
@pytest.fixture
async def my_trio_fixture():
    return await whatever()

This way the triofixture decorator could make sure the fixture is function-scoped (and raise an error with an explanation if not), and return a synchronous function (or a TrioFixture class instance) so pytest-asyncio wouldn't mistakenly use this fixture.

We could also patch the pytest.fixture function to handle those checks automatically, but I don't think pytest provides a hook for this, so we would have to resort to monkey patching, which is better to avoid...
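The "return a TrioFixture class instance" idea has a nice property worth spelling out: a wrapper instance is not a coroutine function, so a plugin that dispatches on inspect.iscoroutinefunction (as pytest-asyncio does) would leave it alone. A sketch with invented names:

```python
import inspect

class TrioFixture:
    """Wrap an async fixture function so other plugins don't claim it."""
    def __init__(self, async_fn):
        self.async_fn = async_fn

    def __call__(self, *args, **kwargs):
        # Returns the coroutine; the trio runner awaits it inside its run.
        return self.async_fn(*args, **kwargs)

def triofixture(fn):
    if not inspect.iscoroutinefunction(fn):
        raise TypeError("triofixture expects an async def function")
    return TrioFixture(fn)

@triofixture
async def my_trio_fixture():
    return "resource"

# The wrapper no longer looks like a coroutine function:
print(inspect.iscoroutinefunction(my_trio_fixture))   # False
```

So the two plugins could coexist without ever inspecting each other, purely by construction.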

@njsmith I think you should register a pytest-trio placeholder project on PyPI so we don't get double-crossed by a rogue dev ;-)

@njsmith
Member Author

njsmith commented Dec 8, 2017

@touilleMan Sorry for the slow follow-up here, I've been swamped at work and then was trying to focus on getting the long-delayed 0.2.0 release out.

This is super cool. I have some specific comments, but probably they're best handled in follow-ups... my suggestion for right now is we should take what you have and stick it in a python-trio/pytest-trio repo and start iterating on it :-). What do you think? I just sent you an invitation to join the python-trio organization, and you'll probably be interested in our new contributing doc, in case you haven't seen it yet.

@njsmith
Member Author

njsmith commented Dec 8, 2017

@touilleMan I see that a repo has appeared! In case it's useful, I just created cookiecutter-trio, which is designed to help do initial bootstrapping of new projects like this.

@touilleMan
Member

@njsmith Thanks a lot for adding me to the organisation!

I've created the pytest-trio repo, set up the continuous integration (Travis, AppVeyor and Coveralls) and registered the project on PyPI.

btw I've configured Travis to automatically do the release on PyPI when running CI on a tag; I think it's a really convenient feature given it allows anyone with tag rights to release a new version of the project. Maybe you should consider this for the main trio project?

@touilleMan
Member

@njsmith indeed I used your cookiecutter template for the project ^^

It seems there is a problem with intersphinx though: https://trio.readthedocs.io/objects.inv doesn't point to a valid resource

@njsmith
Member Author

njsmith commented Dec 9, 2017

I guess we can close this now, and move discussions over to ✨the new repo✨

@njsmith njsmith closed this as completed Dec 9, 2017