Execute file open/close in executors #96229
This is a bit surprising. It would be helpful if you could provide a script to reproduce the issue, as we need more data about this.
The evidence you give below doesn't really prove that.
This code is managing a subprocess. There's no evidence that closing the file descriptor takes any time (in the process doing the OS-level close) that would block an event loop (if it were an asyncio app, which it isn't). The sleep 0.1 here is to give the subprocess that's hanging off the file descriptor time to respond to the EOF it receives, etc. Of course, if you were to call this user-level close() from an asyncio app, that's a blocking call.
I'm not even sure what you're proposing here. Do you want the …? Unless you come up with an example where closing a pipe file descriptor actually blocks, I'm inclined to close this as "won't fix".
@elupus Can you provide a reproducer for this without third-party libraries? If not, then we'll have to close this, as there is no reproducer.
I will try to get something to reproduce. However, part of the problem is third-party libraries adhering to the API set here. The point is really that close() per se IS file IO, and as such could block the loop. Also, this occurs in destructor/cleanup code outside the control of the third-party library: it's the core that closes the file down. So if closing takes some time, it's hard to avoid blocking the loop while also making sure the close is actually finished. I can see solutions that schedule the close in an executor, but then you are not sure it's fully closed, and re-opening the same file could lead to the file being busy.
I will see if I can reproduce it with lingering sockets, since a socket with linger on should block on close according to: https://pubs.opengroup.org/onlinepubs/9699919799/functions/close.html
I was unable to reproduce it using lingering sockets. It seems the close completes directly. I find it a bit strange, but maybe the sockets are non-blocking and the close call internally fails with EWOULDBLOCK. Anyway, not a usable way to repro. The ptyprocess issue stems from its use in the async path of pexpect. So yes, pexpect should probably be fixed to avoid calling the ptyprocess close in the event loop, which would solve the blocking of the loop. However, if the closing is pushed to an executor and the event loop is not awaiting the result, any user code that does something that results in …
would fail, since something.close() has not actually completely closed the file. That is why I proposed converting _call_connection_lost to an async function and running the close() functions in an executor. That said, these things can be worked around in userspace, and executors add overhead. So we can close this issue if you guys don't think such a change makes sense.
Thanks for trying. For backwards compatibility we can't really change _call_connection_lost. So yeah, let's close this as "won't fix".
Feature or enhancement
Execute the opening/closing of unix file descriptors/protocols in executors.
Pitch
A unix file descriptor close() call can perform IO or flush data, and as such it may take a little bit of time. To add to that, some protocols explicitly need sleeps after closing to make sure it's done.
Here is a location where explicit sleeps are added and are hard to avoid:
https://github.com/pexpect/ptyprocess/blob/master/ptyprocess/ptyprocess.py#L403
A similar fix was done for pyserial-asyncio's protocols here:
pyserial/pyserial-asyncio#82
The culprit in this case is the pipe transport handler:
cpython/Lib/asyncio/unix_events.py, line 583 (commit 420f39f)
But they could all benefit from making the closing function execute in an executor.
Previous discussion