doctest for .rst files #1511

Merged: 58 commits, merged May 5, 2020

Commits (changes shown from 54 of 58 commits)
cff1a6c  add doctest to circleci (Apr 16, 2020)
92027f8  Revert "add doctest to circleci" (Apr 16, 2020)
f8a0f59  doctest docs rst files (Apr 16, 2020)
ab75ac7  doctest only rst (Apr 16, 2020)
99632c4  doctest debugging.rst (Apr 16, 2020)
4bb89d1  Revert "doctest docs rst files" (Apr 16, 2020)
15b7b34  Revert "Revert "add doctest to circleci"" (Apr 16, 2020)
9120b2e  doctest apex (Apr 16, 2020)
f5724ee  doctest callbacks (Apr 16, 2020)
83d54fa  doctest early stopping (Apr 16, 2020)
4bf859d  doctest for child modules (Apr 16, 2020)
7559e6e  doctest experiment reporting (Apr 16, 2020)
d28be62  indentation (Apr 16, 2020)
8fe18a5  doctest fast training (Apr 16, 2020)
23c6bde  doctest for hyperparams (Apr 16, 2020)
f71cc3a  doctests for lr_finder (Apr 16, 2020)
368b887  doctests multi-gpu (Apr 16, 2020)
c3c8d0e  more doctest (Apr 16, 2020)
cd14b57  make doctest drone (Apr 16, 2020)
5bf20c1  fix label build error (Apr 16, 2020)
92f2d75  update fast training (awaelchli, Apr 30, 2020)
1be620d  update invalid imports (awaelchli, Apr 30, 2020)
9a5b46e  fix problem with int device count (awaelchli, Apr 30, 2020)
76dbe3d  rebase stuff (awaelchli, Apr 30, 2020)
10373d1  wip (awaelchli, Apr 30, 2020)
52ba055  wip (awaelchli, Apr 30, 2020)
ef79b34  wip (awaelchli, Apr 30, 2020)
8d49855  intro guide (awaelchli, Apr 30, 2020)
d5d999d  add missing code block (awaelchli, Apr 30, 2020)
f7574f4  circleci (awaelchli, Apr 30, 2020)
ede2a50  logger import for doctest (awaelchli, Apr 30, 2020)
31ef748  test if doctest runs on drone (awaelchli, May 2, 2020)
9a4753b  fix mnist download (awaelchli, May 2, 2020)
419ddf6  also run install deps for building docs (awaelchli, May 2, 2020)
e4b6d17  install cmake (awaelchli, May 2, 2020)
61e47ab  try sudo (awaelchli, May 2, 2020)
bfbcd55  hide output (awaelchli, May 2, 2020)
28b599f  try pip stuff (awaelchli, May 2, 2020)
2967f36  try to mock horovod (awaelchli, May 2, 2020)
8dac8e3  Tranfer -> Transfer (awaelchli, May 2, 2020)
4771308  add torchvision to extras (awaelchli, May 2, 2020)
e09ef0d  revert pip stuff (awaelchli, May 2, 2020)
38505af  mlflow file location (awaelchli, May 2, 2020)
6c0ea71  do not mock torch (awaelchli, May 2, 2020)
8ccd690  torchvision (awaelchli, May 4, 2020)
067f42c  drone extra req. (awaelchli, May 4, 2020)
c50ae2f  try higher sphinx version (awaelchli, May 4, 2020)
9a349b8  Revert "try higher sphinx version" (awaelchli, May 4, 2020)
358539d  try coverage command (awaelchli, May 4, 2020)
613ad49  try coverage command (awaelchli, May 4, 2020)
8f2f060  try undoc flag (awaelchli, May 4, 2020)
2e08e45  newline (awaelchli, May 4, 2020)
adfbb8e  undo drone (awaelchli, May 4, 2020)
c934c0b  report coverage (awaelchli, May 4, 2020)
a127722  review (awaelchli, May 4, 2020)
91ffda6  remove torchvision from extras (awaelchli, May 4, 2020)
c1576a9  skip tests only if torchvision not available (awaelchli, May 4, 2020)
7849c26  fix testoutput torchvision (awaelchli, May 4, 2020)

5 changes: 4 additions & 1 deletion .circleci/config.yml
@@ -64,10 +64,13 @@ references:
name: Make Documentation
command: |
# sudo apt-get install pandoc
sudo apt-get update && sudo apt-get install -y cmake
pip install -r requirements.txt --user
sudo pip install -r docs/requirements.txt
pip install -r requirements-extra.txt --user # for doctesting loggers etc.
# sphinx-apidoc -o ./docs/source ./pytorch_lightning **/test_* --force --follow-links
cd docs; make clean ; make html --debug --jobs 2 SPHINXOPTS="-W"
cd docs; make clean; make html --debug --jobs 2 SPHINXOPTS="-W"
make doctest; make coverage

jobs:

2 changes: 2 additions & 0 deletions .drone.yml
@@ -35,9 +35,11 @@ steps:
- apt-get update && apt-get install -y cmake
- pip install -r requirements.txt --user -q
- pip install -r ./tests/requirements-devel.txt --user -q
- pip install -r ./docs/requirements.txt --user -q
- pip list
- python -c "import torch ; print(' & '.join([torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())]) if torch.cuda.is_available() else 'only CPU')"
- coverage run --source pytorch_lightning -m py.test pytorch_lightning tests benchmarks -v --doctest-modules # --flake8
#- cd docs; make doctest; make coverage
- coverage report
- codecov --token $CODECOV_TOKEN # --pr $DRONE_PULL_REQUEST --build $DRONE_BUILD_NUMBER --branch $DRONE_BRANCH --commit $DRONE_COMMIT --tag $DRONE_TAG
- python tests/collect_env_details.py
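The `--doctest-modules` flag in the coverage command above makes pytest also collect `>>>` examples embedded in the docstrings of the `pytorch_lightning` package, which is separate from the Sphinx `make doctest` run over the .rst files (kept commented out here and run on CircleCI instead). A minimal sketch of the kind of docstring pytest would pick up, using a hypothetical helper that is not part of this diff:

def add_one(x: int) -> int:
    """Return ``x + 1``.

    Example:
        >>> add_one(2)
        3
    """
    return x + 1
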
9 changes: 7 additions & 2 deletions docs/source/apex.rst
@@ -1,3 +1,8 @@
.. testsetup:: *

from pytorch_lightning.trainer.trainer import Trainer


16-bit training
=================
Lightning offers 16-bit training for CPUs, GPUs and TPUs.
@@ -38,7 +43,7 @@ Install apex
Enable 16-bit
^^^^^^^^^^^^^

.. code-block:: python
.. testcode::

# turn on 16-bit
trainer = Trainer(amp_level='O1', precision=16)
@@ -50,7 +55,7 @@ TPU 16-bit
----------
16-bit on TPUs is much simpler. To use 16-bit with TPUs, set precision to 16 when using the tpu flag.

.. code-block:: python
.. testcode::

# DEFAULT
trainer = Trainer(num_tpu_cores=8, precision=32)
37 changes: 22 additions & 15 deletions docs/source/callbacks.rst
@@ -1,3 +1,8 @@
.. testsetup:: *

from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.callbacks.base import Callback

.. role:: hidden
:class: hidden-section

@@ -18,21 +23,23 @@ An overall Lightning system should have:

Example:

.. doctest::

>>> import pytorch_lightning as pl
>>> class MyPrintingCallback(pl.Callback):
...
... def on_init_start(self, trainer):
... print('Starting to init trainer!')
...
... def on_init_end(self, trainer):
... print('trainer is init now')
...
... def on_train_end(self, trainer, pl_module):
... print('do something when training ends')
...
>>> trainer = pl.Trainer(callbacks=[MyPrintingCallback()])
.. testcode::

class MyPrintingCallback(Callback):

def on_init_start(self, trainer):
print('Starting to init trainer!')

def on_init_end(self, trainer):
print('trainer is init now')

def on_train_end(self, trainer, pl_module):
print('do something when training ends')

trainer = Trainer(callbacks=[MyPrintingCallback()])

.. testoutput::

Starting to init trainer!
trainer is init now

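For context on the conversion above: during `make doctest`, Sphinx executes each `.. testcode::` block and compares whatever it prints against the paired `.. testoutput::` block. Both directives also accept a `:hide:` option so a block still runs but is not rendered in the built HTML, which is presumably what the "hide output" commit relies on. A minimal sketch of that pattern, not taken from this diff:

.. testcode::
    :hide:

    print('runs under make doctest, hidden in the rendered docs')

.. testoutput::
    :hide:

    runs under make doctest, hidden in the rendered docs
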
35 changes: 29 additions & 6 deletions docs/source/child_modules.rst
@@ -1,3 +1,22 @@
.. testsetup:: *

import torch
from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.callbacks.base import Callback
from pytorch_lightning.core.lightning import LightningModule

class LitMNIST(LightningModule):

def __init__(self):
super().__init__()

def train_dataloader():
pass

def val_dataloader():
pass


Child Modules
-------------
Research projects tend to test different approaches to the same dataset.
@@ -7,13 +26,18 @@ For example, imagine we now want to train an Autoencoder to use as a feature ext
Recall that `LitMNIST` already defines all the dataloading etc... The only things
that change in the `Autoencoder` model are the init, forward, training, validation and test step.

.. code-block:: python
.. testcode::

class Encoder(torch.nn.Module):
...
pass

class Decoder(torch.nn.Module):
pass

class AutoEncoder(LitMNIST):

def __init__(self):
super().__init__()
self.encoder = Encoder()
self.decoder = Decoder()

@@ -30,10 +54,10 @@ that change in the `Autoencoder` model are the init, forward, training, validati
return loss

def validation_step(self, batch, batch_idx):
return self._shared_eval(batch, batch_idx, 'val'):
return self._shared_eval(batch, batch_idx, 'val')

def test_step(self, batch, batch_idx):
return self._shared_eval(batch, batch_idx, 'test'):
return self._shared_eval(batch, batch_idx, 'test')

def _shared_eval(self, batch, batch_idx, prefix):
x, y = batch
@@ -43,6 +67,7 @@ that change in the `Autoencoder` model are the init, forward, training, validati
loss = F.nll_loss(logits, y)
return {f'{prefix}_loss': loss}


and we can train this using the same trainer

.. code-block:: python
@@ -58,5 +83,3 @@ In this case, we want to use the `AutoEncoder` to extract image representations

some_images = torch.Tensor(32, 1, 28, 28)
representations = autoencoder(some_images)

..
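The hunk above elides the actual training snippet under "and we can train this using the same trainer"; a minimal sketch of what that usage typically looks like for the `AutoEncoder` defined earlier (illustrative only, the exact lines are not shown in this diff):

# AutoEncoder inherits the dataloaders defined on LitMNIST,
# so it can be passed straight to a plain Trainer
autoencoder = AutoEncoder()
trainer = Trainer(max_epochs=1)
trainer.fit(autoencoder)
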
21 changes: 11 additions & 10 deletions docs/source/conf.py
@@ -307,7 +307,7 @@ def setup(app):
# https://stackoverflow.com/questions/15889621/sphinx-how-to-exclude-imports-in-automodule

MOCK_REQUIRE_PACKAGES = []
with open(os.path.join(PATH_ROOT, 'requirements.txt'), 'r') as fp:
with open(os.path.join(PATH_ROOT, 'requirements-extra.txt'), 'r') as fp:
for ln in fp.readlines():
found = [ln.index(ch) for ch in list(',=<>#') if ch in ln]
pkg = ln[:min(found)] if found else ln
@@ -316,19 +316,10 @@ def setup(app):

# TODO: better parse from package since the import name and package name may differ
MOCK_MANUAL_PACKAGES = [
'torch',
'torchvision',
'PIL',
'test_tube',
'mlflow',
'comet_ml',
'wandb',
'neptune',
'trains',
]
autodoc_mock_imports = MOCK_REQUIRE_PACKAGES + MOCK_MANUAL_PACKAGES
# for mod_name in MOCK_REQUIRE_PACKAGES:
# sys.modules[mod_name] = mock.Mock()


# Options for the linkcode extension
@@ -403,3 +394,13 @@ def find_source():
# Useful for avoiding ambiguity when the same section heading appears in different documents.
# http://www.sphinx-doc.org/en/master/usage/extensions/autosectionlabel.html
autosectionlabel_prefix_document = True

# only run doctests marked with a ".. doctest::" directive
doctest_test_doctest_blocks = ''
doctest_global_setup = """

import os
import torch

"""
coverage_skip_undoc_in_source = True
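Setting `doctest_test_doctest_blocks` to an empty string means Sphinx only executes snippets that are explicitly wrapped in doctest directives, and `doctest_global_setup` makes `os` and `torch` available to every group. A minimal sketch of a block this configuration would pick up (names are illustrative, not taken from this PR):

.. testsetup:: *

    from pytorch_lightning.trainer.trainer import Trainer

.. testcode::

    # executed by ``make doctest``; Trainer comes from the testsetup block above
    trainer = Trainer(max_epochs=1)
    print(trainer.max_epochs)

.. testoutput::

    1
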
26 changes: 15 additions & 11 deletions docs/source/debugging.rst
@@ -1,3 +1,7 @@
.. testsetup:: *

from pytorch_lightning.trainer.trainer import Trainer

Debugging
=========
The following are flags that make debugging much easier.
@@ -11,9 +15,9 @@ a full epoch to crash.
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.fast_dev_run`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

trainer = pl.Trainer(fast_dev_run=True)
trainer = Trainer(fast_dev_run=True)

Inspect gradient norms
----------------------
@@ -22,10 +26,10 @@ Logs (to a logger), the norm of each weight matrix.
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.track_grad_norm`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

# the 2-norm
trainer = pl.Trainer(track_grad_norm=2)
trainer = Trainer(track_grad_norm=2)

Log GPU usage
-------------
@@ -34,9 +38,9 @@ Logs (to a logger) the GPU usage for each GPU on the master machine.
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.log_gpu_memory`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

trainer = pl.Trainer(log_gpu_memory=True)
trainer = Trainer(log_gpu_memory=True)

Make model overfit on subset of data
------------------------------------
@@ -47,9 +51,9 @@ and try to get your model to overfit. If it can't, it's a sign it won't work wit
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.overfit_pct`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

trainer = pl.Trainer(overfit_pct=0.01)
trainer = Trainer(overfit_pct=0.01)

Print the parameter count by layer
----------------------------------
@@ -59,9 +63,9 @@ To disable this behavior, turn off this flag:
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.weights_summary`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

trainer = pl.Trainer(weights_summary=None)
trainer = Trainer(weights_summary=None)


Set the number of validation sanity steps
@@ -72,7 +76,7 @@ This avoids crashing in the validation loop sometime deep into a lengthy trainin
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.num_sanity_val_steps`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

# DEFAULT
trainer = Trainer(num_sanity_val_steps=5)
44 changes: 27 additions & 17 deletions docs/source/early_stopping.rst
@@ -1,3 +1,9 @@
.. testsetup:: *

from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.callbacks.early_stopping import EarlyStopping


Early stopping
==============

@@ -17,23 +23,25 @@ Enable Early Stopping using Callbacks on epoch end
--------------------------------------------------
There are two ways to enable early stopping using callbacks on epoch end.

.. doctest::
- Set early_stop_callback to True. Will look for 'val_loss' in validation_epoch_end() return dict.
  If it is not found, an error is raised.

.. testcode::

trainer = Trainer(early_stop_callback=True)

- Or configure your own callback

>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import EarlyStopping
.. testcode::

# A) Set early_stop_callback to True. Will look for 'val_loss'
# in validation_epoch_end() return dict. If it is not found an error is raised.
>>> trainer = Trainer(early_stop_callback=True)
# B) Or configure your own callback
>>> early_stop_callback = EarlyStopping(
... monitor='val_loss',
... min_delta=0.00,
... patience=3,
... verbose=False,
... mode='min'
... )
>>> trainer = Trainer(early_stop_callback=early_stop_callback)
early_stop_callback = EarlyStopping(
monitor='val_loss',
min_delta=0.00,
patience=3,
verbose=False,
mode='min'
)
trainer = Trainer(early_stop_callback=early_stop_callback)

In any case, the callback will fall back to the training metrics (returned in
:meth:`~pytorch_lightning.core.lightning.LightningModule.training_step`,
@@ -43,7 +51,8 @@ looking for a key to monitor if validation is disabled or
is not defined.

.. seealso::
:class:`~pytorch_lightning.trainer.trainer.Trainer`
- :class:`~pytorch_lightning.trainer.trainer.Trainer`
- :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping`

Disable Early Stopping with callbacks on epoch end
--------------------------------------------------
@@ -53,4 +62,5 @@ Note that ``None`` will not disable early stopping but will lead to the
default behaviour.

.. seealso::
:class:`~pytorch_lightning.trainer.trainer.Trainer`
- :class:`~pytorch_lightning.trainer.trainer.Trainer`
- :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping`
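
As the section above notes, ``None`` falls back to the default behaviour; actually disabling early stopping requires passing ``False``. A minimal sketch consistent with that text (not part of this hunk):

# disables early stopping entirely
trainer = Trainer(early_stop_callback=False)

# falls back to the default behaviour, does NOT disable early stopping
trainer = Trainer(early_stop_callback=None)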