diff --git a/CHANGELOG.md b/CHANGELOG.md
index e7ac8b9ec..2879d22a5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,15 +2,19 @@
 
 ## master
 
-**Cautious:** current `master` branch and `0.4.2.postX` version introduce tentative APIs which may be removed in the near future. Use version `0.4.2` for a more stable version.
+### Important changes
+- `tell` method can now receive a list/array of losses for multi-objective optimization [#775](https://github.com/facebookresearch/nevergrad/pull/775). For now it is neither robust, nor scalable, nor stable, nor optimal, so be careful when using it. More information in the [documentation](https://facebookresearch.github.io/nevergrad/optimization.html#multiobjective-minimization-with-nevergrad).
+- The old way to perform multiobjective optimization, through the use of :code:`MultiobjectiveFunction`, is now deprecated and will be removed in version 0.4.3 [#1017](https://github.com/facebookresearch/nevergrad/pull/1017).
 - By default, the optimizer now returns the best set of parameter as recommendation [#951](https://github.com/facebookresearch/nevergrad/pull/951), considering that the function is deterministic. The previous behavior would use an estimation of noise to provide the pessimistic best point, leading to unexpected behaviors [#947](https://github.com/facebookresearch/nevergrad/pull/947). You can can back to this behavior by specifying: :code:`parametrization.descriptors.deterministic_function = False`
+
+### Other
+
+- `DE` and its variants have been updated to make full use of the multi-objective losses [#789](https://github.com/facebookresearch/nevergrad/pull/789). Other optimizers convert multiobjective problems to a volume minimization, which is not always as efficient.
 - as an **experimental** feature we have added some preliminary support for constraint management through penalties. From then on the prefered option for penalty is to register a function returning a positive float when the constraint is satisfied. While we will wait fore more testing before documenting it, this may already cause instabilities and errors when adding cheap constraints. Please open an issue if you encounter a problem.
-- as an **experimental** feature, `tell` method can now receive a list/array of losses for multi-objective optimization [#775](https://github.com/facebookresearch/nevergrad/pull/775). For now it is neither robust, nor scalable, nor stable, nor optimal so be careful when using it. More information in the [documentation](https://facebookresearch.github.io/nevergrad/optimization.html#multiobjective-minimization-with-nevergrad).
-- `DE` and its variants have been updated to make use of the multi-objective losses [#789](https://github.com/facebookresearch/nevergrad/pull/789). This is a **preliminary** fix since the initial `DE` implementaton was ill-suited for this use case.
 - `tell` argument `value` is renamed to `loss` for clarification [#774](https://github.com/facebookresearch/nevergrad/pull/774). This can be breaking when using named arguments!
 - `ExperimentFunction` now automatically records arguments used for their instantiation so that they can both be used to create a new copy, and as descriptors if there are of type int/bool/float/str [#914](https://github.com/facebookresearch/nevergrad/pull/914).
 - from now on, code formatting needs to be [`black`](https://black.readthedocs.io/en/stable/) compliant.
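The constraint bullet in the changelog above is prose only. Below is a minimal sketch of what a penalty-based constraint registration could look like; it assumes the existing `register_cheap_constraint` method on parametrizations and a toy objective, so treat it as illustrative rather than the documented API for this experimental feature.

```python
import numpy as np
import nevergrad as ng


def square(x: np.ndarray) -> float:
    # toy objective to minimize (not part of the patch)
    return float(np.sum((x - 0.5) ** 2))


# Sketch of the penalty-based constraint support mentioned above:
# register a function returning a positive float (or True) when the
# constraint is satisfied; violations are handled through penalties.
param = ng.p.Array(shape=(2,))
param.register_cheap_constraint(lambda x: 1.0 - abs(float(x[0])))  # satisfied iff |x[0]| <= 1

optimizer = ng.optimizers.NGOpt(parametrization=param, budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)
```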
diff --git a/docs/optimization.rst b/docs/optimization.rst
index 3e8ce429c..866103b97 100644
--- a/docs/optimization.rst
+++ b/docs/optimization.rst
@@ -228,18 +228,7 @@ Multiobjective minimization is a **work in progress** in :code:`nevergrad`. It i
 In other words, use it at your own risk ;) and provide feedbacks (both positive and negative) if you have any!
 
-
-The initial API that was added into :code:`nevergrad` to work with multiobjective functions uses a function wrapper to convert them into monoobjective functions.
-Let us minimize :code:`f1` and :code:`f2` (two objective functions) assuming that values above 2.5 are of no interest:
-
-.. literalinclude:: ../nevergrad/functions/multiobjective/test_core.py
-    :language: python
-    :dedent: 4
-    :start-after: DOC_MULTIOBJ_0
-    :end-before: DOC_MULTIOBJ_1
-
-
-We are currently working on an **new experimental API** allowing users to directly :code:`tell` the results as an array or list of floats. When this API is stabilized and proved to work, it will probably replace the older one. Here is an example on how to use it:
+To perform multiobjective optimization, you can just provide :code:`tell` with the results as an array or list of floats:
 
 .. literalinclude:: ../nevergrad/optimization/multiobjective/test_core.py
     :language: python
     :dedent: 4
@@ -247,8 +236,10 @@ We are currently working on an **new experimental API** allowing users to direct
     :start-after: DOC_MULTIOBJ_OPT_0
     :end-before: DOC_MULTIOBJ_OPT_1
 
-Note that `DE` and its variants have been updated to make use of the multi-objective losses [#789](https://github.com/facebookresearch/nevergrad/pull/789). This is a **preliminary** fix since the initial `DE` implementaton was ill-suited for this use case.
-
+Currently, most optimizers only derive a single volume-based float loss from the multiobjective loss and minimize it.
+:code:`DE` and its variants have however been updated to make use of the full multi-objective losses
+[#789](https://github.com/facebookresearch/nevergrad/pull/789), which makes them good candidates for multi-objective minimization (:code:`NGOpt` will
+delegate to :code:`DE` in the case of multi-objective functions).
 
 Reproducibility
 ---------------
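The literalinclude directives above pull the actual example from the test files, which are not part of this diff. As a stand-in, here is a hedged, self-contained sketch of the new ask/tell usage (the function, budget loop, and printed output are illustrative; the snippet actually included in the docs may differ):

```python
import numpy as np
import nevergrad as ng


def multiobjective(x: np.ndarray) -> list:
    # two conflicting objectives, both to be minimized
    return [float(np.sum(x ** 2)), float(np.sum((x - 1) ** 2))]


# DE makes full use of the multi-objective losses (see PR #789 above)
optimizer = ng.optimizers.DE(parametrization=3, budget=100)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()
    # the new API: tell a list/array of losses instead of a single float
    optimizer.tell(candidate, multiobjective(candidate.value))

recommendation = optimizer.provide_recommendation()
print(recommendation.value)
```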
diff --git a/nevergrad/functions/multiobjective/core.py b/nevergrad/functions/multiobjective/core.py
index 02976b057..bac910d8f 100644
--- a/nevergrad/functions/multiobjective/core.py
+++ b/nevergrad/functions/multiobjective/core.py
@@ -4,6 +4,7 @@
 # LICENSE file in the root directory of this source tree.
 
 import random
+import warnings
 import numpy as np
 import nevergrad.common.typing as tp
 from nevergrad.optimization.multiobjective import HypervolumeIndicator
@@ -37,6 +38,13 @@ def __init__(
         multiobjective_function: tp.Callable[..., tp.ArrayLike],
         upper_bounds: tp.Optional[tp.ArrayLike] = None,
     ) -> None:
+        warnings.warn(
+            "MultiobjectiveFunction is deprecated and will be removed in v0.4.3 "
+            "because it is no longer needed. You should just pass a multiobjective loss to "
+            "optimizer.tell.\nSee https://facebookresearch.github.io/nevergrad/"
+            "optimization.html#multiobjective-minimization-with-nevergrad\n",
+            DeprecationWarning,
+        )
         self.multiobjective_function = multiobjective_function
         self._auto_bound = 0
         self._auto_upper_bounds = np.array([-float("inf")])
diff --git a/nevergrad/optimization/multiobjective/test_core.py b/nevergrad/optimization/multiobjective/test_core.py
index a0eb01321..02084fb77 100644
--- a/nevergrad/optimization/multiobjective/test_core.py
+++ b/nevergrad/optimization/multiobjective/test_core.py
@@ -101,6 +101,7 @@ def multiobjective(x):
 
     optimizer = ng.optimizers.CMA(parametrization=3, budget=100)
 
+    # for all optimizers but DE, which derive a volume out of the losses,
     # it's not strictly necessary but highly advised to provide an
     # upper bound reference for the losses (if not provided, such upper
     # bound is automatically inferred with the first few "tell")
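To round off the patch, here is a sketch of what the new deprecation looks like from user code. It assumes the historical import path `from nevergrad.functions import MultiobjectiveFunction`, which this diff does not show, and reuses the toy losses and CMA setup from the test file above:

```python
import warnings
import numpy as np
import nevergrad as ng
from nevergrad.functions import MultiobjectiveFunction  # deprecated wrapper


def multiobjective(x: np.ndarray) -> list:
    return [float(np.sum(x ** 2)), float(np.sum((x - 1) ** 2))]


# Instantiating the wrapper now emits the DeprecationWarning added above;
# the wrapper still converts the list of losses into a single
# hypervolume-based float that any optimizer can minimize.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    func = MultiobjectiveFunction(multiobjective_function=multiobjective, upper_bounds=[2.5, 2.5])
assert any(issubclass(w.category, DeprecationWarning) for w in caught)

optimizer = ng.optimizers.CMA(parametrization=3, budget=100)
recommendation = optimizer.minimize(func)
```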