Tests which fail in multiprocessing contexts #1018

Merged Jul 8, 2019 · 25 commits (diff below shows changes from 2 commits)

Commits
84108d8
Parallel Demonstrator Tests
alexrudy May 8, 2019
be4ee65
Mark tests as XFail
alexrudy May 8, 2019
b13601f
Parallel Demonstrator Tests
alexrudy May 8, 2019
f4752c0
Mark tests as XFail
alexrudy May 8, 2019
a08d359
Applied PR feedback
MSeal Jun 12, 2019
6a0f8c8
Merge remote-tracking branch 'rudy/fix-zmq-context-global' into fix-zmq
MSeal Jun 12, 2019
55bb0db
Removed duplicate tests from merge
MSeal Jun 12, 2019
cc0ac44
Merge branch 'master' of github.com:jupyter/nbconvert into fix-zmq
MSeal Jun 12, 2019
5c0848f
Added additional timeout delay to test
MSeal Jun 12, 2019
b88f3e9
Set additional timeout on the correct test field
MSeal Jun 12, 2019
a7a92b1
Added ability to turn off slow tests
MSeal Jun 12, 2019
ae011e7
Attempt to get travis passing slow tests
MSeal Jun 12, 2019
a443d03
Fixing pytest conf issues with temp directories
MSeal Jun 12, 2019
d1a3a6d
Moved conftest into nbconvert path
MSeal Jun 12, 2019
4d3e992
Removed unecessary travis command
MSeal Jun 12, 2019
48beaa4
Another attempt to get Travis
MSeal Jun 12, 2019
baea777
Simplified the test execution
MSeal Jun 12, 2019
f81e1a2
Yet another travis fix attempt
MSeal Jun 12, 2019
1434a2a
Adding much higher timeouts to failing test
MSeal Jun 13, 2019
a818745
Attempt #billion to fix travis
MSeal Jun 13, 2019
bcd5157
Travis only-failure debug
MSeal Jun 13, 2019
f74f2e1
Added latest jupyter_client to travis for test run
MSeal Jun 13, 2019
c3ef4b7
Removed extra debug lines
MSeal Jun 13, 2019
9bbcecb
Resolve conflict with master
MSeal Jul 8, 2019
70d47b7
Removed jupyter_client master install from travis
MSeal Jul 8, 2019
50 changes: 50 additions & 0 deletions nbconvert/preprocessors/tests/files/Sleep One.ipynb
@@ -0,0 +1,50 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "time.sleep(0.01)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
68 changes: 68 additions & 0 deletions nbconvert/preprocessors/tests/test_execute.py
@@ -309,6 +309,74 @@ def test_parallel_notebooks(capfd, tmpdir):
    captured = capfd.readouterr()
    assert captured.err == ""


@pytest.mark.xfail
def test_many_parallel_notebooks(capfd):
    """Ensure that when many IPython kernels are run in parallel, nothing awful happens.

    Specifically, many IPython kernels, when run simultaneously, would encounter errors
    due to using the same SQLite history database.
    """

    # timeout=5 is a bit aggressive, but it works well enough once the ZMQ
    # context below is destroyed.
    opts = dict(kernel_name="python", timeout=5)
    input_name = "HelloWorld.ipynb"
    input_file = os.path.join(current_dir, "files", input_name)
    res = PreprocessorTestsBase().build_resources()
    res["metadata"]["path"] = os.path.join(current_dir, "files")

    # Run once to trigger creating the original context.
    run_notebook(input_file, opts, res)

    # Destroy the context; if you don't, it survives across the fork and the
    # forked kernels then fail to start properly.
    import zmq
    zmq.Context.instance().destroy()
Contributor comment: FYI I tested this with the new jupyter_client. Could we add a TODO: delete when jupyter_client>=5.2.4 releases?
    with mp.Pool(4) as pool:
        pool.starmap(run_notebook, [(input_file, opts, res) for _ in range(8)])

    captured = capfd.readouterr()
    assert captured.err == ""
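For reference, the TODO suggested in the comment above could be gated on the installed jupyter_client version, along these lines. This is only a sketch: it assumes jupyter_client exposes __version__ and uses pkg_resources for the comparison.

import jupyter_client
from pkg_resources import parse_version

# TODO: delete the manual zmq.Context cleanup once jupyter_client>=5.2.4,
# which fixes the context-across-fork issue, can be required.
NEEDS_ZMQ_CLEANUP = parse_version(jupyter_client.__version__) < parse_version("5.2.4")

if NEEDS_ZMQ_CLEANUP:
    import zmq
    zmq.Context.instance().destroy()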

@pytest.mark.xfail
def test_parallel_fork_notebooks(capfd):
Contributor comment: I can't get this test to pass on my Ubuntu 18.04 machine. It always hangs if the thread is still running when multiprocessing launches.

Contributor comment: I tried with and without the ZeroMQ cleanup, and with and without the pending jupyter_client release changes. I haven't dug deeper into why this still hangs.

"""Ensure that when many IPython kernels are run in parallel, nothing awful happens.

Specifically, many IPython kernels when run simultaneously would enocunter errors
due to using the same SQLite history database.
"""

opts = dict(kernel_name="python", timeout=5)
input_name = "Sleep One.ipynb"
input_file = os.path.join(current_dir, "files", input_name)

fast_name = "HelloWorld.ipynb"
fast_file = os.path.join(current_dir, "files", fast_name)


res = PreprocessorTestsBase().build_resources()
res["metadata"]["path"] = os.path.join(current_dir, "files")

# run once, to trigger creating the original context
thread = threading.Thread(target=run_notebook, args=(input_file, opts, res))
thread.start()

try:
# Destroy the context - if you don't do this, the context
# will survive across the fork, and then fail to start properly.
# but if you do do this, then the context will get destroyed
# while the kernel is running in the current thread.
import zmq
zmq.Context.instance().destroy()

with mp.Pool(4) as pool:
pool.starmap(run_notebook, [(fast_file, opts, res) for _ in range(8)])
finally:
thread.join(timeout=1)

captured = capfd.readouterr()
assert captured.err == ""
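One common mitigation for this kind of fork-after-thread hang (not applied in this PR, just a sketch reusing the names from the test above) is to launch the pool with the "spawn" start method, so each worker begins from a fresh interpreter instead of inheriting the forked copy of the thread's lock and socket state:

ctx = mp.get_context("spawn")

# Spawned workers do not inherit the ZMQ context or any locks held by
# the background notebook thread, so they cannot deadlock on them.
with ctx.Pool(4) as pool:
    pool.starmap(run_notebook, [(fast_file, opts, res) for _ in range(8)])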

class TestExecute(PreprocessorTestsBase):
"""Contains test functions for execute.py"""