
Clean up synchronous managers #755

Open
blink1073 opened this issue Mar 14, 2022 · 7 comments

Comments

@blink1073
Contributor

Continued discussion from #751.

We also can't rely on asyncio_run in the future, since it relies on the deprecated global get_event_loop and set_event_loop.

Options for moving forward:

  • Use a single private ioloop instance per manager
    • The base classes will be asynchronous and call the private async versions of functions directly as needed. We will only use futures and tasks from within async methods.
    • The synchronous classes will work by wrapping the asynchronous public methods in a decorator that runs a new private asyncio loop, e.g. return asyncio.new_event_loop().run_until_complete(async_method()).
    • We would offer a function that wraps the async class to become synchronous, and use this ourselves
    • We would have a section like:
      # These methods are meant to be overridden by subclasses
      async def _start_kernel(...)
  • Run 'asyncio-for-sync' in a dedicated IO thread. call_soon_threadsafe and concurrent.futures.Future make this pretty doable, and in my experience simpler and more robust than using nest_asyncio. ipyparallel's sync client has worked that way for a long time. (per @minrk)
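The dedicated-IO-thread option can be sketched with stdlib asyncio alone. Everything below (IOLoopThread, _start_kernel, start_kernel) is a hypothetical illustration of the shape, not code from any of the projects discussed:

```python
import asyncio
import threading

class IOLoopThread:
    """One background thread running one event loop, shared by all callers."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._thread.start()

    def run(self, coro):
        # Submit the coroutine to the background loop and block on the
        # resulting concurrent.futures.Future until it finishes.
        return asyncio.run_coroutine_threadsafe(coro, self._loop).result()

_runner = IOLoopThread()

# Hypothetical private async implementation (stands in for a manager method).
async def _start_kernel():
    await asyncio.sleep(0.01)
    return "started"

def start_kernel():
    # Synchronous public method delegating to the private async implementation.
    return _runner.run(_start_kernel())

print(start_kernel())  # prints "started"
```

Because the loop lives on its own thread, the blocking `.result()` call never runs on a thread that owns a running event loop.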
@davidbrochart
Member

  • The synchronous classes will work by wrapping the asynchronous public methods in a decorator that runs a new private asyncio loop, e.g. return asyncio.new_event_loop().run_until_complete(async_method()).

We may already be running inside an event loop, so this might not be possible, e.g.:

import asyncio

loop2 = asyncio.new_event_loop()

async def main2():
    await asyncio.sleep(1)
    print("main2 done")

async def main1():
    loop2.run_until_complete(main2())
    print("main1 done")

asyncio.run(main1())
Traceback (most recent call last):
  File "/home/david/github/davidbrochart/jupyterlab/foo.py", line 13, in <module>
    asyncio.run(main1())
  File "/home/david/mambaforge/envs/jupyterlab-dev/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/david/mambaforge/envs/jupyterlab-dev/lib/python3.10/asyncio/base_events.py", line 641, in run_until_complete
    return future.result()
  File "/home/david/github/davidbrochart/jupyterlab/foo.py", line 10, in main1
    loop2.run_until_complete(main2())
  File "/home/david/mambaforge/envs/jupyterlab-dev/lib/python3.10/asyncio/base_events.py", line 617, in run_until_complete
    self._check_running()
  File "/home/david/mambaforge/envs/jupyterlab-dev/lib/python3.10/asyncio/base_events.py", line 579, in _check_running
    raise RuntimeError(
RuntimeError: Cannot run the event loop while another loop is running
sys:1: RuntimeWarning: coroutine 'main2' was never awaited
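For contrast, a minimal sketch (reusing the loop2/main1/main2 names from the example above) showing that the nested call does work once loop2 runs in a dedicated thread, which is essentially the dedicated-IO-thread option:

```python
import asyncio
import threading

# Run loop2 forever on its own daemon thread instead of the main thread.
loop2 = asyncio.new_event_loop()
threading.Thread(target=loop2.run_forever, daemon=True).start()

async def main2():
    await asyncio.sleep(0.1)
    return "main2 done"

async def main1():
    # Scheduling main2 on loop2 (running in another thread) is fine;
    # calling loop2.run_until_complete() from here would raise RuntimeError.
    future = asyncio.run_coroutine_threadsafe(main2(), loop2)
    # Await the concurrent.futures.Future without blocking this loop's thread.
    return await asyncio.wrap_future(future)

print(asyncio.run(main1()))  # prints "main2 done"
```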

@davidbrochart
Member

  • Run 'asyncio-for-sync' in a dedicated IO thread. call_soon_threadsafe and concurrent.futures.Future make this pretty doable, and in my experience simpler and more robust than using nest_asyncio. ipyparallel's sync client has worked that way for a long time. (per @minrk)

Since @minrk has experience using this approach and it has been working for a long time in ipyparallel, I think it is a better solution.

@blink1073
Contributor Author

I am actually working on something similar for work, which we should be able to use here. Here is a prototype that wraps an asynchronous class and uses a background thread to run it.

You'd still need to have the public methods call out to private async methods to make sure you don't end up crossing boundaries between sync and async.
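A hedged sketch of that shape (all names hypothetical, not the linked prototype): a singleton holding one thread and one loop, plus a decorator that builds public sync methods from private async ones, so the two code paths never cross on the same loop:

```python
import asyncio
import threading

class _LoopRunner:
    """Singleton: one background thread and one event loop for all wrappers."""

    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        with cls._lock:
            if cls._instance is None:
                inst = super().__new__(cls)
                inst._loop = asyncio.new_event_loop()
                threading.Thread(target=inst._loop.run_forever, daemon=True).start()
                cls._instance = inst
        return cls._instance

    def run(self, coro):
        # Block the calling (sync) thread until the coroutine finishes
        # on the shared background loop.
        return asyncio.run_coroutine_threadsafe(coro, self._loop).result()

def run_sync(async_method):
    # Turn a private async method into a blocking public method.
    def wrapped(self, *args, **kwargs):
        return _LoopRunner().run(async_method(self, *args, **kwargs))
    return wrapped

class AsyncKernelManager:
    # Hypothetical private async implementation.
    async def _start_kernel(self):
        await asyncio.sleep(0.01)
        return "kernel started"

class KernelManager(AsyncKernelManager):
    # Public sync method built from the private async one.
    start_kernel = run_sync(AsyncKernelManager._start_kernel)
```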

@davidbrochart
Member

Nice! It looks like it's creating a new thread for each coroutine, though. It would be nice to reuse threads from a pool, for performance reasons. (Maybe you had that in mind already.)

@blink1073
Contributor Author

There is only one thread and one IO loop created, since it uses a singleton class.

@davidbrochart
Member

Ah my bad 👍

@blink1073
Contributor Author

There is similar functionality in django/asgiref called AsyncToSync, pointed out on discuss.python.org.
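asgiref also exposes this as the async_to_sync helper. A rough stdlib approximation of the basic behavior, for illustration only (the real AsyncToSync also handles being called from threads that already have a running event loop; this naive version does not):

```python
import asyncio

def async_to_sync(coro_func):
    # Turn a coroutine function into a blocking callable. Unlike asgiref's
    # AsyncToSync, this simple version fails if an event loop is already
    # running in the calling thread.
    def wrapper(*args, **kwargs):
        return asyncio.run(coro_func(*args, **kwargs))
    return wrapper

async def get_value():
    await asyncio.sleep(0.01)
    return 42

sync_get_value = async_to_sync(get_value)
print(sync_get_value())  # prints 42
```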
