doctest for .rst files (#1511)
* add doctest to circleci

* Revert "add doctest to circleci"

This reverts commit c45b34e.

* Revert "Revert "add doctest to circleci""

This reverts commit 41fca97.

* doctest docs rst files

* Revert "doctest docs rst files"

This reverts commit b4a2e83.

* doctest only rst

* doctest debugging.rst

* doctest apex

* doctest callbacks

* doctest early stopping

* doctest for child modules

* doctest experiment reporting

* indentation

* doctest fast training

* doctest for hyperparams

* doctests for lr_finder

* doctests multi-gpu

* more doctest

* make doctest drone

* fix label build error

* update fast training

* update invalid imports

* fix problem with int device count

* rebase stuff

* wip

* wip

* wip

* intro guide

* add missing code block

* circleci

* logger import for doctest

* test if doctest runs on drone

* fix mnist download

* also run install deps for building docs

* install cmake

* try sudo

* hide output

* try pip stuff

* try to mock horovod

* Tranfer -> Transfer

* add torchvision to extras

* revert pip stuff

* mlflow file location

* do not mock torch

* torchvision

* drone extra req.

* try higher sphinx version

* Revert "try higher sphinx version"

This reverts commit 490ac28.

* try coverage command

* try coverage command

* try undoc flag

* newline

* undo drone

* report coverage

* review

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* remove torchvision from extras

* skip tests only if torchvision not available

* fix testoutput torchvision

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
awaelchli and Borda authored May 5, 2020
1 parent 48e808c commit a6de1b8
Showing 25 changed files with 810 additions and 649 deletions.
.circleci/config.yml: 5 changes (4 additions, 1 deletion)
@@ -64,10 +64,13 @@ references:
name: Make Documentation
command: |
# sudo apt-get install pandoc
sudo apt-get update && sudo apt-get install -y cmake
pip install -r requirements.txt --user
sudo pip install -r docs/requirements.txt
pip install -r requirements-extra.txt --user # for doctesting loggers etc.
# sphinx-apidoc -o ./docs/source ./pytorch_lightning **/test_* --force --follow-links
cd docs; make clean ; make html --debug --jobs 2 SPHINXOPTS="-W"
cd docs; make clean; make html --debug --jobs 2 SPHINXOPTS="-W"
make doctest; make coverage
jobs:

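For reference, the documentation checks this commit wires into the CircleCI job can be reproduced locally with roughly the following commands (a sketch mirroring the config above; it assumes the repository root as working directory and skips the sudo/apt-get cmake step):

    # install package requirements plus the extras some doctests need (loggers etc.)
    pip install -r requirements.txt
    pip install -r requirements-extra.txt
    pip install -r docs/requirements.txt

    # build the docs, then run the new doctest and coverage builders
    cd docs
    make clean
    make html --debug --jobs 2 SPHINXOPTS="-W"
    make doctest     # executes the .. testcode:: / .. doctest:: blocks in the .rst files
    make coverage    # reports undocumented objects
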
.drone.yml: 2 changes (2 additions, 0 deletions)
@@ -35,9 +35,11 @@ steps:
- apt-get update && apt-get install -y cmake
- pip install -r requirements.txt --user -q
- pip install -r ./tests/requirements-devel.txt --user -q
#- pip install -r ./docs/requirements.txt --user -q
- pip list
- python -c "import torch ; print(' & '.join([torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())]) if torch.cuda.is_available() else 'only CPU')"
- coverage run --source pytorch_lightning -m py.test pytorch_lightning tests benchmarks -v --doctest-modules # --flake8
#- cd docs; make doctest; make coverage
- coverage report
- codecov --token $CODECOV_TOKEN # --pr $DRONE_PULL_REQUEST --build $DRONE_BUILD_NUMBER --branch $DRONE_BRANCH --commit $DRONE_COMMIT --tag $DRONE_TAG
- python tests/collect_env_details.py
docs/source/apex.rst: 9 changes (7 additions, 2 deletions)
@@ -1,3 +1,8 @@
.. testsetup:: *

from pytorch_lightning.trainer.trainer import Trainer


16-bit training
=================
Lightning offers 16-bit training for CPUs, GPUs and TPUs.
@@ -38,7 +43,7 @@ Install apex
Enable 16-bit
^^^^^^^^^^^^^

.. code-block:: python
.. testcode::

# turn on 16-bit
trainer = Trainer(amp_level='O1', precision=16)
@@ -50,7 +55,7 @@ TPU 16-bit
----------
16-bit on TPUs is much simpler. To use 16-bit with TPUs, set precision to 16 when using the tpu flag.

.. code-block:: python
.. testcode::

# DEFAULT
trainer = Trainer(num_tpu_cores=8, precision=32)
docs/source/callbacks.rst: 37 changes (22 additions, 15 deletions)
@@ -1,3 +1,8 @@
.. testsetup:: *

from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.callbacks.base import Callback

.. role:: hidden
:class: hidden-section

@@ -18,21 +23,23 @@ An overall Lightning system should have:

Example:

.. doctest::

>>> import pytorch_lightning as pl
>>> class MyPrintingCallback(pl.Callback):
...
... def on_init_start(self, trainer):
... print('Starting to init trainer!')
...
... def on_init_end(self, trainer):
... print('trainer is init now')
...
... def on_train_end(self, trainer, pl_module):
... print('do something when training ends')
...
>>> trainer = pl.Trainer(callbacks=[MyPrintingCallback()])
.. testcode::

class MyPrintingCallback(Callback):

def on_init_start(self, trainer):
print('Starting to init trainer!')

def on_init_end(self, trainer):
print('trainer is init now')

def on_train_end(self, trainer, pl_module):
print('do something when training ends')

trainer = Trainer(callbacks=[MyPrintingCallback()])

.. testoutput::

Starting to init trainer!
trainer is init now

docs/source/child_modules.rst: 35 changes (29 additions, 6 deletions)
@@ -1,3 +1,22 @@
.. testsetup:: *

import torch
from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.callbacks.base import Callback
from pytorch_lightning.core.lightning import LightningModule

class LitMNIST(LightningModule):

def __init__(self):
super().__init__()

def train_dataloader():
pass

def val_dataloader():
pass


Child Modules
-------------
Research projects tend to test different approaches to the same dataset.
@@ -7,13 +26,18 @@ For example, imagine we now want to train an Autoencoder to use as a feature ext
Recall that `LitMNIST` already defines all the dataloading etc... The only things
that change in the `Autoencoder` model are the init, forward, training, validation and test step.

.. code-block:: python
.. testcode::

class Encoder(torch.nn.Module):
...
pass

class Decoder(torch.nn.Module):
pass

class AutoEncoder(LitMNIST):

def __init__(self):
super().__init__()
self.encoder = Encoder()
self.decoder = Decoder()

@@ -30,10 +54,10 @@ that change in the `Autoencoder` model are the init, forward, training, validati
return loss

def validation_step(self, batch, batch_idx):
return self._shared_eval(batch, batch_idx, 'val'):
return self._shared_eval(batch, batch_idx, 'val')

def test_step(self, batch, batch_idx):
return self._shared_eval(batch, batch_idx, 'test'):
return self._shared_eval(batch, batch_idx, 'test')

def _shared_eval(self, batch, batch_idx, prefix):
x, y = batch
@@ -43,6 +67,7 @@ that change in the `Autoencoder` model are the init, forward, training, validati
loss = F.nll_loss(logits, y)
return {f'{prefix}_loss': loss}


and we can train this using the same trainer

.. code-block:: python
@@ -58,5 +83,3 @@ In this case, we want to use the `AutoEncoder` to extract image representations
some_images = torch.Tensor(32, 1, 28, 28)
representations = autoencoder(some_images)
..
docs/source/conf.py: 24 changes (14 additions, 10 deletions)
@@ -309,7 +309,7 @@ def setup(app):
# https://stackoverflow.com/questions/15889621/sphinx-how-to-exclude-imports-in-automodule

MOCK_REQUIRE_PACKAGES = []
with open(os.path.join(PATH_ROOT, 'requirements.txt'), 'r') as fp:
with open(os.path.join(PATH_ROOT, 'requirements-extra.txt'), 'r') as fp:
for ln in fp.readlines():
found = [ln.index(ch) for ch in list(',=<>#') if ch in ln]
pkg = ln[:min(found)] if found else ln
@@ -318,19 +318,10 @@ def setup(app):

# TODO: better parse from package since the import name and package name may differ
MOCK_MANUAL_PACKAGES = [
'torch',
'torchvision',
'PIL',
'test_tube',
'mlflow',
'comet_ml',
'wandb',
'neptune',
'trains',
]
autodoc_mock_imports = MOCK_REQUIRE_PACKAGES + MOCK_MANUAL_PACKAGES
# for mod_name in MOCK_REQUIRE_PACKAGES:
# sys.modules[mod_name] = mock.Mock()


# Options for the linkcode extension
@@ -405,3 +396,16 @@ def find_source():
# Useful for avoiding ambiguity when the same section heading appears in different documents.
# http://www.sphinx-doc.org/en/master/usage/extensions/autosectionlabel.html
autosectionlabel_prefix_document = True

# only run doctests marked with a ".. doctest::" directive
doctest_test_doctest_blocks = ''
doctest_global_setup = """
import importlib
import os
import torch
TORCHVISION_AVAILABLE = importlib.util.find_spec('torchvision')
"""
coverage_skip_undoc_in_source = True
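
The `doctest_global_setup` string above is prepended to every doctest, so `importlib`, `os`, `torch`, and the `TORCHVISION_AVAILABLE` flag are available in all of the `.rst` examples, while `doctest_test_doctest_blocks = ''` restricts collection to blocks explicitly marked with a doctest directive. In the documentation pages themselves (not shown in this excerpt), the torchvision flag would typically be consumed via the `:skipif:` option of the doctest directives (Sphinx 1.8+); a minimal sketch, assuming such a block:

    .. testcode::
        :skipif: not TORCHVISION_AVAILABLE

        # runs only when torchvision can be imported, otherwise the block is skipped
        from torchvision import transforms

        transform = transforms.ToTensor()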
docs/source/debugging.rst: 26 changes (15 additions, 11 deletions)
@@ -1,3 +1,7 @@
.. testsetup:: *

from pytorch_lightning.trainer.trainer import Trainer

Debugging
=========
The following are flags that make debugging much easier.
@@ -11,9 +15,9 @@ a full epoch to crash.
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.fast_dev_run`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

trainer = pl.Trainer(fast_dev_run=True)
trainer = Trainer(fast_dev_run=True)

Inspect gradient norms
----------------------
@@ -22,10 +26,10 @@ Logs (to a logger), the norm of each weight matrix.
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.track_grad_norm`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

# the 2-norm
trainer = pl.Trainer(track_grad_norm=2)
trainer = Trainer(track_grad_norm=2)

Log GPU usage
-------------
@@ -34,9 +38,9 @@ Logs (to a logger) the GPU usage for each GPU on the master machine.
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.log_gpu_memory`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

trainer = pl.Trainer(log_gpu_memory=True)
trainer = Trainer(log_gpu_memory=True)

Make model overfit on subset of data
------------------------------------
@@ -47,9 +51,9 @@ and try to get your model to overfit. If it can't, it's a sign it won't work wit
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.overfit_pct`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

trainer = pl.Trainer(overfit_pct=0.01)
trainer = Trainer(overfit_pct=0.01)

Print the parameter count by layer
----------------------------------
@@ -59,9 +63,9 @@ To disable this behavior, turn off this flag:
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.weights_summary`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

trainer = pl.Trainer(weights_summary=None)
trainer = Trainer(weights_summary=None)


Set the number of validation sanity steps
@@ -72,7 +76,7 @@ This avoids crashing in the validation loop sometime deep into a lengthy trainin
(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.num_sanity_val_steps`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. code-block:: python
.. testcode::

# DEFAULT
trainer = Trainer(num_sanity_val_steps=5)
docs/source/early_stopping.rst: 44 changes (27 additions, 17 deletions)
@@ -1,3 +1,9 @@
.. testsetup:: *

from pytorch_lightning.trainer.trainer import Trainer
from pytorch_lightning.callbacks.early_stopping import EarlyStopping


Early stopping
==============

@@ -17,23 +23,25 @@ Enable Early Stopping using Callbacks on epoch end
--------------------------------------------------
There are two ways to enable early stopping using callbacks on epoch end.

.. doctest::
- Set early_stop_callback to True. Will look for 'val_loss' in validation_epoch_end() return dict.
If it is not found an error is raised.

.. testcode::

trainer = Trainer(early_stop_callback=True)

- Or configure your own callback

>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import EarlyStopping
.. testcode::

# A) Set early_stop_callback to True. Will look for 'val_loss'
# in validation_epoch_end() return dict. If it is not found an error is raised.
>>> trainer = Trainer(early_stop_callback=True)
# B) Or configure your own callback
>>> early_stop_callback = EarlyStopping(
... monitor='val_loss',
... min_delta=0.00,
... patience=3,
... verbose=False,
... mode='min'
... )
>>> trainer = Trainer(early_stop_callback=early_stop_callback)
early_stop_callback = EarlyStopping(
monitor='val_loss',
min_delta=0.00,
patience=3,
verbose=False,
mode='min'
)
trainer = Trainer(early_stop_callback=early_stop_callback)

In any case, the callback will fall back to the training metrics (returned in
:meth:`~pytorch_lightning.core.lightning.LightningModule.training_step`,
@@ -43,7 +51,8 @@ looking for a key to monitor if validation is disabled or
is not defined.

.. seealso::
:class:`~pytorch_lightning.trainer.trainer.Trainer`
- :class:`~pytorch_lightning.trainer.trainer.Trainer`
- :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping`

Disable Early Stopping with callbacks on epoch end
--------------------------------------------------
@@ -53,4 +62,5 @@ Note that ``None`` will not disable early stopping but will lead to the
default behaviour.

.. seealso::
:class:`~pytorch_lightning.trainer.trainer.Trainer`
- :class:`~pytorch_lightning.trainer.trainer.Trainer`
- :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping`