Fixing some details for the tutorials to run. #997

Merged
merged 2 commits on Sep 23, 2022
Changes from 1 commit
2 changes: 1 addition & 1 deletion docs/src/tutorials/pytorch-mnist.rst

@@ -55,7 +55,7 @@ be called only once because Oríon only looks at 1 ``'objective'`` value per run

 .. code-block:: python

-    test_error_rate = test(args, model, device, test_loader)
+    test_error_rate = test(model, device, test_loader)

     report_objective(test_error_rate)
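For context, the fixed call appears consistent with the ``test`` helpers in recent PyTorch MNIST examples, which take no ``args``. A minimal sketch of such a helper, assuming PyTorch (hypothetical, not part of this diff; the tutorial's actual implementation may differ):

.. code-block:: python

    import torch


    def test(model, device, test_loader):
        """Evaluate ``model`` on ``test_loader`` and return the error rate in [0, 1]."""
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for data, target in test_loader:
                data, target = data.to(device), target.to(device)
                pred = model(data).argmax(dim=1)
                correct += (pred == target).sum().item()
                total += target.size(0)
        return 1.0 - correct / total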
2 changes: 1 addition & 1 deletion docs/src/tutorials/scikit-learn.rst

@@ -19,7 +19,7 @@ Sample script

 .. literalinclude:: /../../examples/scikitlearn-iris/main.py
    :language: python
-   :lines: 1-2, 5-9, 13-30
+   :lines: 1-9, 13-30

 This very basic script takes as input one positional argument for the hyper-parameter *epsilon*,
 which controls the loss in the script.
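As a rough illustration of that interface, a minimal sketch of a script with the same shape (a hypothetical stand-in for ``examples/scikitlearn-iris/main.py``; the real script's body differs, and the placeholder loss below is illustrative only):

.. code-block:: python

    import argparse

    from orion.client import report_objective


    def main():
        parser = argparse.ArgumentParser()
        # The single positional hyper-parameter described above.
        parser.add_argument("epsilon", type=float)
        args = parser.parse_args()

        # Placeholder for training a model and measuring its validation loss.
        loss = (args.epsilon - 0.1) ** 2

        # Report the objective that Oríon minimizes, once per run.
        report_objective(loss)


    if __name__ == "__main__":
        main()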
36 changes: 18 additions & 18 deletions examples/tutorials/code_1_python_api.py

@@ -52,13 +52,13 @@

 # a ``ValueError`` will be raised. At least one of the results must have the type ``objective``,
 # the metric that is minimized by the algorithm.

+if __name__ == '__main__':

Member commented on the added ``if __name__ == '__main__':`` line: This one is part of sphinx-gallery, which automatically converts the script into a Sphinx page and a Jupyter notebook. Maybe there could be a warning for macOS users to try it using the Jupyter notebook, or the code could be encapsulated in a main function and run under ``if __name__ == '__main__':`` (see the sketch after this hunk).

-def rosenbrock(x, noise=None):
-    """Evaluate partial information of a quadratic."""
-    y = x - 34.56789
-    z = 4 * y**2 + 23.4
+    def rosenbrock(x, noise=None):
+        """Evaluate partial information of a quadratic."""
+        y = x - 34.56789
+        z = 4 * y**2 + 23.4

-    return [{"name": "objective", "type": "objective", "value": z}]
+        return [{"name": "objective", "type": "objective", "value": z}]
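A sketch of the reviewer's suggested alternative (hypothetical, not part of this diff): keep the gallery script flat, but move the executable statements into a ``main`` function called under ``if __name__ == '__main__':``. On macOS, ``multiprocessing`` starts workers with the ``spawn`` method, which re-imports the main module, so the guard prevents the optimization from re-running inside each worker.

.. code-block:: python

    from orion.client import build_experiment


    def rosenbrock(x, noise=None):
        """Evaluate partial information of a quadratic."""
        y = x - 34.56789
        z = 4 * y**2 + 23.4

        return [{"name": "objective", "type": "objective", "value": z}]


    def main():
        # Experiment name and search space are illustrative; the tutorial
        # builds its own space and storage earlier in the script.
        experiment = build_experiment(
            "random-rosenbrock",
            space={"x": "uniform(0, 100)"},
        )
        experiment.workon(rosenbrock, max_trials=20)


    if __name__ == '__main__':
        main()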


 #%%

@@ -67,12 +67,12 @@ def rosenbrock(x, noise=None):

 # will iteratively try new sets of hyperparameters suggested by the optimization algorithm
 # until it reaches 20 trials.

-experiment.workon(rosenbrock, max_trials=20)
+    experiment.workon(rosenbrock, max_trials=20)

 #%%
 # Now let's plot the regret curve to see how well the optimization went.

-experiment.plot.regret().show()
+    experiment.plot.regret().show()

 #%%
 # .. This file is produced by docs/scripts/build_database_and_plots.py

@@ -98,19 +98,19 @@

 # that can easily find the optimal solution. We specify the algorithm configuration to
 # :func:`build_experiment <orion.client.build_experiment>`

-experiment = build_experiment(
-    "tpe-rosenbrock",
-    space=space,
-    algorithms={"tpe": {"n_initial_points": 5}},
-    storage=storage,
-)
+    experiment = build_experiment(
+        "tpe-rosenbrock",
+        space=space,
+        algorithms={"tpe": {"n_initial_points": 5}},
+        storage=storage,
+    )

-#%%
-# We then again run the optimization for 20 trials and plot the regret.
+    #%%
+    # We then again run the optimization for 20 trials and plot the regret.

-experiment.workon(rosenbrock, max_trials=20)
+    experiment.workon(rosenbrock, max_trials=20)

-experiment.plot.regret().show()
+    experiment.plot.regret().show()

 # sphinx_gallery_thumbnail_path = '_static/python.png'

106 changes: 53 additions & 53 deletions examples/tutorials/code_2_hyperband_checkpoint.py

@@ -122,7 +122,7 @@ def build_data_loaders(batch_size, split_seed=1):

 # Next, we write the function to save checkpoints. It is important to include
 # not only the model in the checkpoint, but also the optimizer and the learning rate
 # schedule when using one. In this example we will use the exponential learning rate schedule,
-# so we checkpoint it. We save the current epoch as well so that we now where we resume from.
+# so we checkpoint it. We save the current epoch as well so that we know where we resume from.


 def save_checkpoint(checkpoint, model, optimizer, lr_scheduler, epoch):
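The function body is collapsed in this view. A minimal sketch of what such a checkpointing pair could look like, assuming PyTorch's ``torch.save``/``torch.load`` and the ``state_dict`` API (illustrative; ``load_checkpoint`` is a hypothetical counterpart, not shown in this diff):

.. code-block:: python

    import torch


    def save_checkpoint(checkpoint, model, optimizer, lr_scheduler, epoch):
        # Persist everything needed to resume: weights, optimizer state,
        # learning rate schedule, and the current epoch.
        torch.save(
            {
                "model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "lr_scheduler": lr_scheduler.state_dict(),
                "epoch": epoch,
            },
            checkpoint,
        )


    def load_checkpoint(checkpoint, model, optimizer, lr_scheduler):
        # Restore training state and return the epoch to resume from.
        state = torch.load(checkpoint)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        lr_scheduler.load_state_dict(state["lr_scheduler"])
        return state["epoch"]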
@@ -259,8 +259,8 @@ def main(

 #%%
 # You can test the training pipeline before working with the hyperparameter optimization.

-
-main(epochs=4)
+if __name__ == '__main__':

Member commented on the added ``if __name__ == '__main__':`` line: Same thing here.

+    main(epochs=4)


 #%%

@@ -277,58 +277,58 @@ def main(

 # checkpoint file with ``f"{experiment.working_dir}/{trial.hash_params}"``.

-from orion.client import build_experiment
+    from orion.client import build_experiment


-def run_hpo():
+    def run_hpo():

-    # Specify the database where the experiments are stored. We use a local PickleDB here.
-    storage = {
-        "type": "legacy",
-        "database": {
-            "type": "pickleddb",
-            "host": "./db.pkl",
-        },
-    }
+        # Specify the database where the experiments are stored. We use a local PickleDB here.
+        storage = {
+            "type": "legacy",
+            "database": {
+                "type": "pickleddb",
+                "host": "./db.pkl",
+            },
+        }

-    # Load the data for the specified experiment
-    experiment = build_experiment(
-        "hyperband-cifar10",
-        space={
-            "epochs": "fidelity(1, 120, base=4)",
-            "learning_rate": "loguniform(1e-5, 0.1)",
-            "momentum": "uniform(0, 0.9)",
-            "weight_decay": "loguniform(1e-10, 1e-2)",
-            "gamma": "loguniform(0.97, 1)",
-        },
-        algorithms={
-            "hyperband": {
-                "seed": 1,
-                "repetitions": 5,
-            },
-        },
-        storage=storage,
-    )
+        # Load the data for the specified experiment
+        experiment = build_experiment(
+            "hyperband-cifar10",
+            space={
+                "epochs": "fidelity(1, 120, base=4)",
+                "learning_rate": "loguniform(1e-5, 0.1)",
+                "momentum": "uniform(0, 0.9)",
+                "weight_decay": "loguniform(1e-10, 1e-2)",
+                "gamma": "loguniform(0.97, 1)",
+            },
+            algorithms={
+                "hyperband": {
+                    "seed": 1,
+                    "repetitions": 5,
+                },
+            },
+            storage=storage,
+        )

-    trials = 1
-    while not experiment.is_done:
-        print("trial", trials)
-        trial = experiment.suggest()
-        if trial is None and experiment.is_done:
-            break
-        valid_error_rate = main(
-            **trial.params, checkpoint=f"{experiment.working_dir}/{trial.hash_params}"
-        )
-        experiment.observe(trial, valid_error_rate, name="valid_error_rate")
-        trials += 1
+        trials = 1
+        while not experiment.is_done:
+            print("trial", trials)
+            trial = experiment.suggest()
+            if trial is None and experiment.is_done:
+                break
+            valid_error_rate = main(
+                **trial.params, checkpoint=f"{experiment.working_dir}/{trial.hash_params}"
+            )
+            experiment.observe(trial, valid_error_rate, name="valid_error_rate")
+            trials += 1

-#%%
-# Let's run the optimization now. You may want to reduce the maximum number of epochs in
-# ``fidelity(1, 120, base=4)`` and set the number of ``repetitions`` to 1 to get results more
-# quickly. With the current configuration, this example takes 2 days to run on a Titan RTX.
+    #%%
+    # Let's run the optimization now. You may want to reduce the maximum number of epochs in
+    # ``fidelity(1, 120, base=4)`` and set the number of ``repetitions`` to 1 to get results more
+    # quickly. With the current configuration, this example takes 2 days to run on a Titan RTX.

-experiment = run_hpo()
+    experiment = run_hpo()

 #%%
 # Analysis

@@ -340,8 +340,8 @@ def run_hpo():

 # We should first look at the :ref:`sphx_glr_auto_examples_plot_1_regret.py`
 # to verify the optimization with Hyperband.

-fig = experiment.plot.regret()
-fig.show()
+    fig = experiment.plot.regret()
+    fig.show()

 #%%
 # .. This file is produced by docs/scripts/build_database_and_plots.py

@@ -357,8 +357,8 @@ def run_hpo():

 # lower than 10%. To see if the search space may be the issue, we first look at the
 # :ref:`sphx_glr_auto_examples_plot_3_lpi.py`.

-fig = experiment.plot.lpi()
-fig.show()
+    fig = experiment.plot.lpi()
+    fig.show()

 #%%
 # .. raw:: html

@@ -370,8 +370,8 @@ def run_hpo():

 # it is worth looking at the :ref:`sphx_glr_auto_examples_plot_4_partial_dependencies.py`
 # to see if the search space was perhaps too narrow or too large.

-fig = experiment.plot.partial_dependencies(params=["gamma", "learning_rate"])
-fig.show()
+    fig = experiment.plot.partial_dependencies(params=["gamma", "learning_rate"])
+    fig.show()

 # sphinx_gallery_thumbnail_path = '_static/restart.png'