Feature/remove multi processing #510

Merged 36 commits on Mar 29, 2017

Commits
- f3567ec remove metaclass (giulioungaretti, Mar 5, 2017)
- ff7b879 first step in removing the metaclass (giulioungaretti, Mar 5, 2017)
- 0676d7a fix: Remove background (giulioungaretti, Mar 6, 2017)
- 700e150 fix: remove alt-docs about monitor (giulioungaretti, Mar 6, 2017)
- 01e066e remove signal queue (giulioungaretti, Mar 6, 2017)
- 5bf22e3 fix: More backgroudn removal (giulioungaretti, Mar 6, 2017)
- f320b5d remove data_manager stuff (giulioungaretti, Mar 6, 2017)
- 496783d feature: Remove tests, yolo (giulioungaretti, Mar 6, 2017)
- 0bf8f21 feature: Rremove more mocks (giulioungaretti, Mar 6, 2017)
- c599b8a finally eradicate mp (giulioungaretti, Mar 6, 2017)
- 1ea05fc remove Widgets (giulioungaretti, Mar 7, 2017)
- 3aada66 chore: Remove benchmarking and reloading (giulioungaretti, Mar 7, 2017)
- ee5375f chore: Add todo (giulioungaretti, Mar 7, 2017)
- 44138c5 chore: Use py.test (giulioungaretti, Mar 7, 2017)
- 480933e fix: Remove nested attrs as unused (giulioungaretti, Mar 7, 2017)
- 783e0f9 Fix remove try/Interrupt, missing variable imax (giulioungaretti, Mar 8, 2017)
- addd056 fix: Remove trailing number from test name (giulioungaretti, Mar 10, 2017)
- c08ca65 fix: Use unit (giulioungaretti, Mar 10, 2017)
- c7b4ac7 chore: Update documentation (giulioungaretti, Mar 10, 2017)
- 5f8fb7e fix: Remove toymodel, and simlify examples (giulioungaretti, Mar 10, 2017)
- ef2fe07 docs: Remove more MP from docs (giulioungaretti, Mar 10, 2017)
- c07b613 chore: Remove untested example adaptive sweep (giulioungaretti, Mar 16, 2017)
- 49fb337 docs: Update docs (giulioungaretti, Mar 16, 2017)
- e4544af fix: remove mp from config (giulioungaretti, Mar 16, 2017)
- cced6c7 fix: Remove server references (giulioungaretti, Mar 16, 2017)
- 145db4d chore: Add todo (giulioungaretti, Mar 16, 2017)
- cdc657f fix: Remove server and metaclass (giulioungaretti, Mar 16, 2017)
- f01050e add stuff (giulioungaretti, Mar 16, 2017)
- 71295e8 fix: use right default for _instances (giulioungaretti, Mar 16, 2017)
- dc6274b bugs (giulioungaretti, Mar 16, 2017)
- 37c6fa1 feature: add more tests (giulioungaretti, Mar 16, 2017)
- 7bfb43b Remove find_component (giulioungaretti, Mar 16, 2017)
- a9b49a9 fix vim typo (giulioungaretti, Mar 16, 2017)
- b099a83 fix: Remove array getter (giulioungaretti, Mar 17, 2017)
- 3e3ae2a fix: remove timing (giulioungaretti, Mar 17, 2017)
- 59f33f7 feature: Allow for legacy code to run (giulioungaretti, Mar 21, 2017)
114 changes: 11 additions & 103 deletions CONTRIBUTING.rst
@@ -59,13 +59,18 @@ Setup
Running Tests
~~~~~~~~~~~~~

The core test runner is in ``qcodes/test.py``:
We don't want to reinvent the wheel, and thus use py.test.
It's easy to install:

::

python qcodes/test.py
# optional extra verbosity and fail fast
python qcodes/test.py -v -f
pip install coverage pytest-cov pytest

Then to test and view the coverage:

::
py.test --cov=qcodes --cov-report xml --cov-config=.coveragerc
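
If you would rather browse coverage interactively, pytest-cov can also
write an HTML report (optional; by default it is placed in an ``htmlcov``
directory whose ``index.html`` you can open in a browser):

::

    py.test --cov=qcodes --cov-report html --cov-config=.coveragerc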


You can also run single tests with:

@@ -78,92 +83,6 @@ You can also run single tests with:
# or
python -m unittest qcodes.tests.test_metadata.TestMetadatable.test_snapshot
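
py.test can select the same single test by node id or by keyword
(assuming the module above lives at ``qcodes/tests/test_metadata.py``):

::

    py.test qcodes/tests/test_metadata.py::TestMetadatable::test_snapshot
    # or, more loosely, by keyword match
    py.test -k test_snapshot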

If you run the core test runner, you should see output that looks
something like this:

::

.........***** found one MockMock, testing *****
............................................Timing resolution:
startup time: 0.000e+00
min/med/avg/max dev: 9.260e-07, 9.670e-07, 1.158e-06, 2.109e-03
async sleep delays:
startup time: 2.069e-04
min/med/avg/max dev: 3.372e-04, 6.376e-04, 6.337e-04, 1.007e-03
multiprocessing startup delay and regular sleep delays:
startup time: 1.636e-02
min/med/avg/max dev: 3.063e-05, 2.300e-04, 2.232e-04, 1.743e-03
should go to stdout;should go to stderr;.stdout stderr stdout stderr ..[10:44:09.063 A Queue] should get printed
...................................
----------------------------------------------------------------------
Ran 91 tests in 4.192s

OK
Name Stmts Miss Cover Missing
----------------------------------------------------------
data/data_array.py 104 0 100%
data/data_set.py 179 140 22% 38-55, 79-94, 99-104, 123-135, 186-212, 215-221, 224-244, 251-254, 257-264, 272, 280-285, 300-333, 347-353, 360-384, 395-399, 405-407, 414-420, 426-427, 430, 433-438
data/format.py 225 190 16% 44-55, 61-62, 70, 78-97, 100, 114-148, 157-188, 232, 238, 246, 258-349, 352, 355-358, 361-368, 375-424, 427-441, 444, 447-451
data/io.py 76 50 34% 71-84, 90-91, 94, 97, 103, 109-110, 119-148, 154-161, 166, 169, 172, 175-179, 182, 185-186
data/manager.py 124 89 28% 15-20, 31, 34, 48-62, 65-67, 70, 76-77, 80-84, 90-102, 108-110, 117-121, 142-151, 154-182, 185, 188, 207-208, 215-221, 227-229, 237, 243, 249
instrument/base.py 74 0 100%
instrument/function.py 45 1 98% 77
instrument/ip.py 20 12 40% 10-16, 19-20, 24-25, 29-38
instrument/mock.py 63 0 100%
instrument/parameter.py 200 2 99% 467, 470
instrument/sweep_values.py 107 33 69% 196-207, 220-227, 238-252, 255-277
instrument/visa.py 36 24 33% 10-25, 28-32, 35-36, 40-41, 47-48, 57-58, 62-64, 68
loops.py 285 239 16% 65-74, 81-91, 120-122, 133-141, 153-165, 172-173, 188-207, 216-240, 243-313, 316-321, 324-350, 354-362, 371-375, 378-381, 414-454, 457-474, 477-484, 487-491, 510-534, 537-543, 559-561, 564, 577, 580, 590-608, 611-618, 627-628, 631
station.py 35 24 31% 17-32, 35, 45-50, 60, 67-82, 88
utils/helpers.py 95 0 100%
utils/metadata.py 13 0 100%
utils/multiprocessing.py 95 2 98% 125, 134
utils/sync_async.py 114 8 93% 166, 171-173, 176, 180, 184, 189-191
utils/timing.py 72 0 100%
utils/validators.py 110 0 100%
----------------------------------------------------------
TOTAL 2072 814 61%

The key is ``OK`` in the middle (that means all the tests passed), and
the presence of the coverage report after it. If any tests fail, we do
not show a coverage report, and the end of the output will contain
tracebacks and messages about what failed, for example:

::

======================================================================
FAIL: test_sweep_steps_edge_case (tests.test_instrument.TestParameters)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/alex/qdev/Qcodes/qcodes/tests/test_instrument.py", line 360, in test_sweep_steps_edge_case
self.check_set_amplitude2('Off', log_count=1, history_count=2)
File "/Users/alex/qdev/Qcodes/qcodes/tests/test_instrument.py", line 345, in check_set_amplitude2
self.assertTrue(line.startswith('negative delay'), line)
AssertionError: False is not true : cannot sweep amplitude2 from 0.1 to Off - jumping.

----------------------------------------------------------------------
Ran 91 tests in 4.177s

FAILED (failures=1)

The coverage report is only useful if you have been adding new code, to
see whether your tests visit all of your code. Look at the file(s) you
have been working on, and ensure that the "missing" section does not
contain the line numbers of any of the blocks you have touched.
Currently the core still has a good deal of untested code - eventually
we will have all of this tested, but for now you can ignore all the rest
of the missing coverage.

You can also run these tests from inside python. The output is similar
except that a) you don't get coverage reporting, and b) one test has to
be skipped because it does not apply within a notebook, so the output
will end ``OK (skipped=1)``:

.. code:: python

import qcodes
qcodes.test_core() # optional verbosity = 1 (default) or 2

If the tests pass, you should be ready to start developing!

To test actual instruments, first instantiate them in an interactive
@@ -314,23 +233,13 @@ and then unit testing should be run on pull-request, using CI. Maybe
simplify to a one command that says: if there's enough cover, and all
good or fail and where it fails.

- The standard test commands are listed above under
:ref:`runnningtests`. More notes on different test runners can
be found in :ref:`testing`.

- Core tests live in
`qcodes/tests <https://github.com/qdev-dk/Qcodes/tree/master/qcodes/tests>`__
and instrument tests live in the same directories as the instrument
drivers.

- We should have a *few* high-level "integration" tests, but simple
  unit tests (that just depend on code in one module) are more valuable
  for several reasons:

  - If complex tests fail it's more difficult to tell why
  - When features change it is likely that more tests will need to change
  - Unit tests can cover many scenarios much faster than integration
    tests.

- If you're having difficulty making unit tests, first consider whether
your code could be restructured to make it less dependent on other
modules. Often, however, extra techniques are needed to break down a
@@ -339,9 +248,8 @@ good or fail and where it fails.
- Patching, one of the most useful parts of the
`unittest.mock <https://docs.python.org/3/library/unittest.mock.html>`__
library. This lets you specify exactly how other functions/objects
should behave when they're called by the code you are testing. For a
simple example, see
`test\_multiprocessing.py <https://github.com/qdev-dk/Qcodes/blob/58a8692bed55272f4c5865d6ec37f846154ead16/qcodes/tests/test_multiprocessing.py#L63-L65>`__
should behave when they're called by the code you are testing.
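
A self-contained sketch of the idea (``wait_and_ask`` is a made-up
function standing in for your real code under test, not part of qcodes)::

    import time
    from unittest import mock

    def wait_and_ask(instrument, delay):
        time.sleep(delay)              # slow against real hardware
        return instrument.ask('*IDN?')

    def test_wait_and_ask_skips_the_sleep():
        # no hardware needed: a Mock stands in for the instrument
        instrument = mock.Mock()
        instrument.ask.return_value = 'fake instrument'
        # patch time.sleep so the test runs instantly
        with mock.patch('time.sleep') as fake_sleep:
            assert wait_and_ask(instrument, 10) == 'fake instrument'
        fake_sleep.assert_called_once_with(10)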

- Supporting files / data: Let's say you have a test of data acquisition
and analysis. You can break that up into an acquisition test and an
analysis by saving the intermediate state, namely the data file, in
4 changes: 3 additions & 1 deletion README.rst
@@ -70,9 +70,11 @@ $QCODES_INSTALL_DIR is the folder where you want to have the source code.
cd $QCODES_INSTALL_DIR
pyenv install 3.5.2
pyenv virtualenv 3.5.2 qcodes-dev
pyenv activate qcodes-dev
pip install -r requirements.txt
pip install coverage pytest-cov pytest --upgrade
pip install -e .
python qcodes/test.py -f
py.test --cov=qcodes --cov-config=.coveragerc

If the tests pass you are ready to hack!
This is the reference setup one needs to have to contribute, otherwise
156 changes: 0 additions & 156 deletions benchmarking/mptest.py

This file was deleted.
