nbconvert handler is affected by slow filesystems and blocks event loop #490
Comments
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗
The responsible handler is ... With this change, JupyterLab loads in seconds:

```python
exporters = await anyio.run_sync_in_worker_thread(base.get_export_names)
...
exporter_class = await anyio.run_sync_in_worker_thread(base.get_exporter, exporter_name)
```
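The same offloading pattern can be shown with only the standard library. A minimal sketch, assuming a stand-in `get_export_names` that simulates nbconvert's blocking exporter discovery (the real function and handler are in jupyter_server/nbconvert, not reproduced here):

```python
import asyncio
import time

def get_export_names():
    # Stand-in for nbconvert's exporter discovery, which can touch many
    # files and is slow on network filesystems such as sshfs mounts.
    time.sleep(0.2)
    return ["html", "script", "pdf"]

async def handler():
    loop = asyncio.get_running_loop()
    # Offload the blocking call to the default thread pool; the event
    # loop stays free to serve other requests in the meantime.
    return await loop.run_in_executor(None, get_export_names)

exporters = asyncio.run(handler())
print(exporters)
```

Because the blocking work runs in a worker thread, other coroutines scheduled on the loop keep making progress while the handler waits.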
Hi @hMED22 - thank you for opening this issue. Your analysis looks sound. Regarding the use of ... Also, I'd recommend we switch the calls to ...
Thanks for the heads up. I'm running on ... Furthermore, I found out that file operations are made by exporters based on ... So would it be OK to use ...?
Good point about ...
I'll open a PR using ...
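The exact call used in the PR is elided above. As an illustration only (not the merged change), CPython 3.9+ ships a stdlib helper with the same shape as anyio's worker-thread call; `get_exporter` below is a stand-in for the `base.get_exporter` seen in the earlier snippet:

```python
import asyncio
import time

def get_exporter(name):
    # Stand-in for base.get_exporter: the real function resolves an
    # exporter class and may hit the filesystem while doing so.
    time.sleep(0.1)
    return f"<Exporter {name}>"

async def handler(name):
    # asyncio.to_thread (Python 3.9+) runs a sync function in a worker
    # thread, much like anyio.run_sync_in_worker_thread does.
    return await asyncio.to_thread(get_exporter, name)

result = asyncio.run(handler("html"))
print(result)
```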
Yes, I'll handle this; I didn't notice #492 was merged before.
Description
JupyterLab makes a request to /api/nbconvert while initializing, but the handler for that endpoint can be very slow depending on the filesystem, and it blocks the event loop, which blocks all other requests and delays JupyterLab initialization. To make things worse, JupyterLab makes that request twice.
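The description above can be demonstrated in isolation: a synchronous sleep stands in for the slow filesystem work, and a second coroutine stands in for another HTTP request that should be served immediately but is stalled instead (timings and names are illustrative assumptions):

```python
import asyncio
import time

async def blocking_handler():
    # BAD: the synchronous sleep runs on the event loop thread, so no
    # other coroutine can make progress for the full 0.3 s.
    time.sleep(0.3)

async def other_request():
    t0 = time.monotonic()
    await asyncio.sleep(0)  # would normally resume almost immediately
    return time.monotonic() - t0

async def main():
    # other_request yields first, then blocking_handler freezes the loop,
    # so other_request only resumes ~0.3 s later.
    delay, _ = await asyncio.gather(other_request(), blocking_handler())
    return delay

delay = asyncio.run(main())
print(f"other request stalled for {delay:.2f}s")
```

This is the same mechanism that delays every other request JupyterLab issues during startup while the nbconvert handler is busy.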
Reproduce
```shell
sshfs [user@]hostname:[directory] mountpoint
```
In the browser DevTools, notice that all subsequent requests are blocked until responses arrive for both requests to the nbconvert exporters endpoint (screenshot above).
Expected behavior
nbconvert requests should not block JupyterLab initialization.
Context