
Patch async enter all (addresses #242) #246

Merged: 13 commits into goodboy:trionics from overclockworked64:patch-async-enter-all on Oct 22, 2021

Conversation

overclockworked64 (Collaborator):

Ready for review.

There was an issue with getting the current actor's UID: at the time we try to get it, no actor has been spawned yet (at least that's what the error said), so I left a random UID as a temporary workaround - this should probably be addressed. Another thing: I thought it more appropriate for `async_enter_all` to take a sequence of managers. A couple of mypy errors were fixed as well. Finally, a test that demonstrates the new capabilities was added.

Minor nitpick: we should enforce some kind of consistent, neutral code formatting style across the entire codebase.

goodboy (Owner) commented Oct 17, 2021:

@overclockworked64 thanks for these fixes!

Glad to see you got something working for wrath 😎

> I left a random UID as a temporary workaround - this should probably be addressed.

Where does this issue stem from? Are we trying to do something before the runtime comes up, a la `open_root_actor()`?

> more appropriate for `async_enter_all` to take a sequence of managers

Yeah, fine by me. Sequence in, sequence out would be more ergonomic, I'd think.

> Finally, a test that demonstrates the new capabilities was added.

🏄🏼

> Minor nitpick: we should enforce some kind of consistent, neutral code formatting style across the entire codebase.

Yeah, I've changed my position on style a few times since this code base was started - which is probably what you're seeing 😂.

Thoughts from you and lurkerz are (mostly) welcome.

```diff
@@ -2,6 +2,8 @@
 Actor cluster helpers.
```

goodboy (Owner):

I've been wondering how to start exposing "high level" helpers like this. @guilledk and I had thought about the names:

  • `tractor.extras`
  • `tractor.builtin`

overclockworked64 (Collaborator, Author):

> > I left a random UID as a temporary workaround - this should probably be addressed.
>
> Where does this issue stem from? Are we trying to do something before the runtime comes up, a la `open_root_actor()`?

As is, this code gives a RuntimeError, presumably because we're trying to do something with the current actor before actually opening a nursery and spawning some actors.
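A minimal sketch of the failing call (assuming it runs before any nursery has been opened):

```python
import tractor

# outside the runtime no actor exists yet, so this raises a
# RuntimeError (something like "no actor is currently spawned")
uid = tractor.current_actor().uid
```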

```python
async_enter_all(
    mngrs=[p.open_context(worker) for p in portals.values()],
    teardown_trigger=teardown_trigger,
) as contexts,
```
goodboy (Owner):

hehe i love this, but we still have 3.8 support atm.

I wonder though, maybe we'll just drop it since 3.10 is out? I kinda like the idea of just rolling with the latest 2 majors.
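(For context: the snippet above uses the parenthesized multi-item `async with` form, which only parses on 3.9+ and is official as of 3.10 - presumably the 3.8 concern here. A minimal sketch of the 3.8-compatible spelling, using a hypothetical `res` manager as a stand-in:)

```python
import trio
from contextlib import asynccontextmanager

@asynccontextmanager
async def res(name: str):
    # stand-in for open_actor_cluster() / async_enter_all()
    print(f'entering {name}')
    yield name
    print(f'exiting {name}')

async def main() -> None:
    # 3.10-style parenthesized form (as in the diff above):
    #
    #   async with (
    #       res('a') as a,
    #       res('b') as b,
    #   ):
    #       ...
    #
    # 3.8-compatible spelling of the same thing:
    async with res('a') as a, res('b') as b:
        print(a, b)

trio.run(main)
```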

```python
with trio.move_on_after(1):
    for stream in itertools.cycle(streams):
        await stream.send(MESSAGE)
teardown_trigger.set()
```
goodboy (Owner):

@overclockworked64 so this `teardown_trigger` we need because of teardown errors, yah?

I wonder, do you see the same issues that originally justified this now that #245 has landed?

overclockworked64 (Collaborator, Author):

I've tested on my use case and it seems that #245 resolves the problem.

goodboy (Owner):

🏄🏼


```diff
 ) -> AsyncGenerator[
     list[str],
     dict[str, tractor.Portal]
 ]:

     portals: dict[str, tractor.Portal] = {}
-    uid = tractor.current_actor().uid
+    uid = str(__import__('random').randint(0, 2 ** 16))
+    # uid = tractor.current_actor().uid
```
goodboy (Owner):

Ok right, so you can put this back as it was, as long as you move this line under the `tractor.open_nursery(start_method=start_method) as an:` line.

The runtime has to be up (which is done implicitly inside the first opened nursery in the root actor) to avoid the error you were probably getting.

overclockworked64 (Collaborator, Author):

I had to refactor this a bit, let me know what you think.

```diff
@@ -33,7 +38,7 @@ async def open_actor_cluster(
         raise ValueError(
             'Number of names is {len(names)} but count it {count}')

-    async with tractor.open_nursery() as an:
+    async with tractor.open_nursery(start_method=start_method) as an:
         async with trio.open_nursery() as n:
```
goodboy (Owner):

So just stick the `uid = ...` line underneath here, but before you do the zip loop, and it should all work nicely.
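A minimal sketch of that placement (assuming the `open_actor_cluster()` body from the diff above; the spawn loop is elided):

```python
import trio
import tractor

async def open_cluster_sketch(start_method: str) -> None:
    async with tractor.open_nursery(start_method=start_method) as an:
        # the first nursery opened in the root actor implicitly boots
        # the runtime, so current_actor() is valid from here on - no
        # random-UID workaround needed
        uid = tractor.current_actor().uid
        async with trio.open_nursery() as n:
            ...  # spawn the cluster (the zip loop over `names`) here
```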

```diff
     mngr: AsyncContextManager[T],
-    to_yield: dict[int, T],
+    unwrapped: dict[int, T],
```
goodboy (Owner):

woa, someone has the functional parlance going on 😎

overclockworked64 (Collaborator, Author):

IMO that's what it does: it unwraps the thing so you can consume what's inside, so I thought the name was appropriate.

goodboy (Owner):

The name is great; trio uses the same one internally for wrapping coroutines.


```python
    mngrs: Sequence[AsyncContextManager[T]],
    teardown_trigger: trio.Event,
    # ...
    to_yield = {}.fromkeys(id(mngr) for mngr in mngrs)
```
goodboy (Owner):

Ahh interesting. I wonder if this answers my question from above?

If we don't use a trigger passed in by the user, then we rely on the user code eventually relaying a `trio.Cancelled` (or other error) out of the `with` block. So is there a difference here between putting a `teardown_trigger: trio.Event` in this scope (and hiding it from the user) vs. letting them pass it in? E.g. if we do a `try:`/`finally:` around the `yield` and always do the `teardown_trigger.set()` in the `finally`, do we need to let the user pass it in?
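A minimal sketch of what I mean, with both events made internal; the `_enter_and_wait` name follows the diff above but the body is illustrative, not this PR's actual code:

```python
from contextlib import asynccontextmanager
from typing import AsyncContextManager, Sequence, TypeVar

import trio

T = TypeVar('T')


async def _enter_and_wait(
    mngr: AsyncContextManager[T],
    unwrapped: dict[int, T],
    all_entered: trio.Event,
    teardown_trigger: trio.Event,
) -> None:
    # enter the manager, record its yielded value, then park
    # until teardown is signalled
    async with mngr as value:
        unwrapped[id(mngr)] = value
        if all(v is not None for v in unwrapped.values()):
            all_entered.set()
        await teardown_trigger.wait()


@asynccontextmanager
async def async_enter_all(
    mngrs: Sequence[AsyncContextManager[T]],
):
    unwrapped: dict[int, T] = {}.fromkeys(id(m) for m in mngrs)
    # both events are now implementation details - nothing is
    # passed in by the caller
    all_entered = trio.Event()
    teardown_trigger = trio.Event()
    async with trio.open_nursery() as n:
        for mngr in mngrs:
            n.start_soon(
                _enter_and_wait,
                mngr,
                unwrapped,
                all_entered,
                teardown_trigger,
            )
        await all_entered.wait()
        try:
            yield tuple(unwrapped.values())
        finally:
            # always fire teardown - even on error/cancellation -
            # so the parked tasks exit their managers
            teardown_trigger.set()
```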

overclockworked64 (Collaborator, Author):

It seems like #245 completely resolves the problem I encountered.

goodboy (Owner) left a review:

@overclockworked64 super great work on this!

I have a couple of questions, mostly to do with whether we can make the `teardown_trigger` an implementation detail of `async_enter_all()`. Pretty sure it can be wrapped in a `try:`/`finally:` block and we'll get the same result without the user needing to pass an event in.

The only other thing is that you can still use the `uid = tractor.current_actor().uid` stuff if you move the line below the nursery open.

Lmk what you think 🏄🏼

overclockworked64 (Collaborator, Author):

One thought I had is that maybe we should think of a better name for `async_enter_all`. FWIW, it could be shortened to `aenter_all`, but we could also think of something to replace `all` with.

goodboy (Owner) commented Oct 22, 2021:

> FWIW, it could be shortened to `aenter_all`, but we could also think of something to replace `all` with.

I'm totally down for a better name. Maybe one that emphasizes that the managers are being concurrently entered via tasks?

Lurkerz unite!

goodboy (Owner) left a review:

Yeah, this looks great to me.

@guilledk you mind giving this a peek?

I'll probably merge into the trionics dev branch before midday.

goodboy (Owner) commented Oct 22, 2021:

@overclockworked64 I think we can probably discuss re-naming in #241, since that branch is likely going to get a few more helpers stuck in it.

I think this is more than good enough to land on the dev branch.

@goodboy merged commit f4ba805 into goodboy:trionics on Oct 22, 2021
@overclockworked64 deleted the patch-async-enter-all branch on Oct 22, 2021 at 20:46
@goodboy mentioned this pull request on Oct 23, 2021