Update multiobjective doc, deprecate MultiobjectiveFunction (#1017)
jrapin authored Jan 18, 2021
1 parent 296c5e1 commit c7d0b3f
Showing 4 changed files with 21 additions and 17 deletions.
10 changes: 7 additions & 3 deletions CHANGELOG.md
@@ -2,15 +2,19 @@

## master

**Caution:** the current `master` branch and `0.4.2.postX` versions introduce tentative APIs which may be removed in the near future. Use version `0.4.2` for more stability.

### Important changes
- `tell` method can now receive a list/array of losses for multi-objective optimization [#775](https://github.com/facebookresearch/nevergrad/pull/775). For now it is neither robust, nor scalable, nor stable, nor optimal, so be careful when using it. More information in the [documentation](https://facebookresearch.github.io/nevergrad/optimization.html#multiobjective-minimization-with-nevergrad).
- The old way to perform multiobjective optimization, through the use of `MultiobjectiveFunction`, is now deprecated and will be removed in version 0.4.3 [#1017](https://github.com/facebookresearch/nevergrad/pull/1017).
- By default, the optimizer now returns the best set of parameters as the recommendation [#951](https://github.com/facebookresearch/nevergrad/pull/951), considering that the function is deterministic. The previous behavior used an estimation of noise to provide the pessimistic best point, leading to unexpected behaviors [#947](https://github.com/facebookresearch/nevergrad/pull/947). You can switch back to this behavior by specifying `parametrization.descriptors.deterministic_function = False`, as sketched below.
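
A minimal sketch of opting back into the noise-aware recommendation; the parametrization and optimizer choices are illustrative assumptions, only the `deterministic_function` flag comes from the entry above:

```python
import nevergrad as ng

param = ng.p.Array(shape=(2,))
# declare the function as noisy so the optimizer falls back to the
# previous, pessimistic recommendation behavior
param.descriptors.deterministic_function = False
optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=100)
```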

### Other

- `DE` and its variants have been updated to make full use of the multi-objective losses [#789](https://github.com/facebookresearch/nevergrad/pull/789). Other optimizers convert multiobjective problems to a volume minimization, which is not always as efficient.
- as an **experimental** feature, we have added preliminary support for constraint management through penalties.
From now on, the preferred option for penalties is to register a function returning a positive float when the constraint is satisfied (see the sketch after this list).
While we wait for more testing before documenting it, this may already cause instabilities and errors when adding cheap constraints.
Please open an issue if you encounter a problem.
- `tell` argument `value` is renamed to `loss` for clarity [#774](https://github.com/facebookresearch/nevergrad/pull/774). This can be a breaking change when using named arguments!
- `ExperimentFunction` now automatically records the arguments used for instantiation so that they can both be used to create a new copy, and serve as descriptors if they are of type int/bool/float/str [#914](https://github.com/facebookresearch/nevergrad/pull/914).
- from now on, code formatting needs to be [`black`](https://black.readthedocs.io/en/stable/) compliant. This is
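
Referring to the constraint-penalty item above, a minimal hedged sketch; the constraint function is an illustrative assumption, and `register_cheap_constraint` is the registration hook the parametrization exposes:

```python
import nevergrad as ng

param = ng.p.Array(shape=(2,))
# a positive return value means the constraint is satisfied;
# here we require x[0] + x[1] <= 1 (illustrative constraint)
param.register_cheap_constraint(lambda x: 1.0 - (x[0] + x[1]))
optimizer = ng.optimizers.OnePlusOne(parametrization=param, budget=100)
```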
19 changes: 5 additions & 14 deletions docs/optimization.rst
@@ -228,27 +228,18 @@ Multiobjective minimization is a **work in progress** in :code:`nevergrad`.

In other words, use it at your own risk ;) and provide feedback (both positive and negative) if you have any!


To perform multiobjective optimization, you can just provide :code:`tell` with the results as an array or list of floats:

.. literalinclude:: ../nevergrad/optimization/multiobjective/test_core.py
:language: python
:dedent: 4
:start-after: DOC_MULTIOBJ_OPT_0
:end-before: DOC_MULTIOBJ_OPT_1
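
Since the included snippet is not rendered inline here, below is a minimal sketch of what it demonstrates; the objective and optimizer choices are illustrative assumptions:

.. code-block:: python

    import nevergrad as ng

    def multiobjective(x):
        # two hypothetical losses to minimize simultaneously
        return [sum(x**2), sum((x - 1) ** 2)]

    optimizer = ng.optimizers.CMA(parametrization=3, budget=100)
    for _ in range(optimizer.budget):
        candidate = optimizer.ask()
        # tell a list/array of losses instead of a single float
        optimizer.tell(candidate, multiobjective(candidate.value))
    recommendation = optimizer.recommend()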


Currently most optimizers only derive a volume float loss from the multiobjective losses and minimize it.
:code:`DE` and its variants have however been updated to make use of the full multi-objective losses
(`#789 <https://github.com/facebookresearch/nevergrad/pull/789>`_), which makes them good candidates for multi-objective minimization (:code:`NGOpt` will
delegate to :code:`DE` in the case of multi-objective functions).

Reproducibility
---------------
8 changes: 8 additions & 0 deletions nevergrad/functions/multiobjective/core.py
@@ -4,6 +4,7 @@
# LICENSE file in the root directory of this source tree.

import random
import warnings
import numpy as np
import nevergrad.common.typing as tp
from nevergrad.optimization.multiobjective import HypervolumeIndicator
@@ -37,6 +38,13 @@ def __init__(
multiobjective_function: tp.Callable[..., tp.ArrayLike],
upper_bounds: tp.Optional[tp.ArrayLike] = None,
) -> None:
warnings.warn(
"MultiobjectiveFunction is deprecated and will be removed in v0.4.3 "
"because it is no more needed. You should just pass a multiobjective loss to "
"optimizer.tell.\nSee https://facebookresearch.github.io/nevergrad/"
"optimization.html#multiobjective-minimization-with-nevergrad\n",
DeprecationWarning,
)
self.multiobjective_function = multiobjective_function
self._auto_bound = 0
self._auto_upper_bounds = np.array([-float("inf")])
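
For context, a hedged migration sketch from the deprecated wrapper to the new API; the objective is an illustrative assumption and the import path is inferred from this file's location:

```python
import nevergrad as ng
from nevergrad.functions import MultiobjectiveFunction  # deprecated

def multiobjective(x):
    # illustrative pair of losses
    return [abs(x[0] - 1.0), abs(x[1] + 1.0)]

# old, deprecated: wrap the losses into a single volume-based value
f = MultiobjectiveFunction(multiobjective, upper_bounds=[2.5, 2.5])
optimizer = ng.optimizers.DE(parametrization=2, budget=100)
optimizer.minimize(f)

# new: pass the list of losses directly to `tell`
optimizer = ng.optimizers.DE(parametrization=2, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()
    optimizer.tell(candidate, multiobjective(candidate.value))
```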
1 change: 1 addition & 0 deletions nevergrad/optimization/multiobjective/test_core.py
@@ -101,6 +101,7 @@ def multiobjective(x):

optimizer = ng.optimizers.CMA(parametrization=3, budget=100)

# for all optimizers but DE (i.e. those deriving a volume out of the losses),
# it is not strictly necessary but highly advised to provide an
# upper-bound reference for the losses (if not provided, this upper
# bound is automatically inferred during the first few "tell"s)
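
The fold above hides how the example supplies this reference; as a hedged sketch, current nevergrad documentation uses `ng.p.MultiobjectiveReference`, which may postdate this exact commit:

```python
# illustrative: provide the upper-bound reference explicitly,
# one bound per loss (values are assumptions)
optimizer.tell(ng.p.MultiobjectiveReference(), [2.5, 2.5])
```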
