
make eta configurable #1526

Closed

Conversation

jduerholt (Contributor)

Motivation

I recently looked into the output constraint implementation in botorch and noticed that it behaves like our custom Objective implementation for handling constraints, namely by multiplying by sigmoids. Currently, the only difference is that we often work with a different eta value per constraint. I think this would be a nice feature for botorch as well.

This PR is still a work in progress, since the apply_constraints method is used in many places throughout the codebase, and my question is whether you want to keep backwards compatibility. In the current PR, I kept backwards compatibility and made eta of type Union[float, Tensor]. With this approach, one always has to check whether just a float was provided and, if so, transform it into a tensor of the same length as the list of constraint callables.

Another option would be to make eta optional with default None and then just generate a tensor with the old default of 1e-3.

Which solution would you prefer?

Depending on your suggestion, I will finalize the PR and implement support for different etas per constraint throughout the whole codebase.
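
For illustration, a minimal sketch of the backwards-compatible option (the helper name _normalize_eta and its exact signature are my own, not taken from the PR):

from typing import Callable, List, Union

import torch
from torch import Tensor

def _normalize_eta(
    eta: Union[float, Tensor],
    constraints: List[Callable[[Tensor], Tensor]],
) -> Tensor:
    # Hypothetical helper: broadcast a scalar eta to one value per constraint,
    # so the old float API keeps working while tensors allow per-constraint etas.
    if not isinstance(eta, Tensor):
        eta = torch.full((len(constraints),), float(eta))
    if eta.numel() != len(constraints):
        raise ValueError("Number of etas must match the number of constraints.")
    return eta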

Have you read the [Contributing Guidelines on pull requests]?

Yes.

Test Plan

Unit tests.

@facebook-github-bot added the "CLA Signed" label Nov 28, 2022
codecov bot commented Nov 28, 2022

Codecov Report

Merging #1526 (a591f49) into main (0fb00ef) will not change coverage.
The diff coverage is 100.00%.

❗ Current head a591f49 differs from pull request most recent head 73b5048. Consider uploading reports for the commit 73b5048 to get more accurate results

@@            Coverage Diff            @@
##              main     #1526   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files          149       149           
  Lines        12954     12962    +8     
=========================================
+ Hits         12954     12962    +8     
Impacted Files                                          Coverage Δ
...orch/acquisition/multi_objective/multi_fidelity.py   100.00% <ø> (ø)
botorch/acquisition/multi_objective/monte_carlo.py      100.00% <100.00%> (ø)
botorch/acquisition/objective.py                        100.00% <100.00%> (ø)
botorch/utils/objective.py                              100.00% <100.00%> (ø)


@saitcakmak (Contributor) left a comment

Thanks for putting this together. I think being backwards compatible would be preferable, so let's accept both Tensor & float. I left a few comments inline. This should be good to go once you update the docstrings (to explain what the tensor version does) and add unit tests.

@jduerholt (Contributor, Author)

Thanks for the quick review; I will go through the codebase and change it accordingly. Best, Johannes

jduerholt and others added 3 commits November 28, 2022 21:27
Co-authored-by: Sait Cakmak <saitcakmak@outlook.com>
Co-authored-by: Sait Cakmak <saitcakmak@outlook.com>
@jduerholt (Contributor, Author)

Another thing I just noticed: I think the docstring for soft_eval_constraint is wrong. It states: "As eta grows larger, this approximates the Heaviside step function." The opposite is the case: as eta decreases, it approaches the Heaviside step function, no?

from botorch.utils.objective import soft_eval_constraint
import matplotlib.pyplot as plt
import numpy as np
import torch

x = np.linspace(-10, 10, 500)
xt = torch.from_numpy(x).unsqueeze(-1)

fig, ax = plt.subplots()

# Smaller eta -> sharper transition, i.e. closer to the Heaviside step function.
for eta in (1e-3, 1e-2, 1e-1, 1):
    y = soft_eval_constraint(xt, eta=eta)
    ax.plot(x, y, label=f"eta = {eta}")

ax.legend()
plt.show()

This gives the following output:

[Plot: four sigmoid-shaped curves, one per eta; the smaller the eta, the sharper the step around zero.]

Or am I misunderstanding something?

@Balandat (Contributor)

You're right, the docstring is wrong! Would you mind patching it as part of this PR?

@jduerholt (Contributor, Author)

> You're right, the docstring is wrong! Would you mind patching it as part of this PR?

I just fixed it. Internally we call the inverse of the eta parameter "steepness", which at least to me makes it clearer what it does.
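
To make the relationship concrete, here is a small check (assuming soft_eval_constraint computes sigmoid(-lhs / eta), which matches the plot above):

import torch
from botorch.utils.objective import soft_eval_constraint

x = torch.linspace(-1, 1, 5).unsqueeze(-1)
eta = 1e-2
steepness = 1.0 / eta  # "steepness" is simply the inverse of eta

# sigmoid(-x / eta) == sigmoid(-x * steepness): a larger steepness
# (i.e. a smaller eta) yields a sharper approximation of the step function.
assert torch.allclose(soft_eval_constraint(x, eta=eta), torch.sigmoid(-x * steepness))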

@jduerholt (Contributor, Author)

I have now added support for different etas per constraint to ConstrainedMCObjective and to qEHVI/qNEHVI and the derived methods.
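
For example, usage could then look like this (a sketch based on my reading of the changed API, not a verbatim snippet from the PR):

import torch
from botorch.acquisition.objective import ConstrainedMCObjective

# Two feasibility constraints (negative values mean feasible),
# each smoothed with its own eta.
constraints = [
    lambda Z: Z[..., 1],
    lambda Z: Z[..., 2],
]
objective = ConstrainedMCObjective(
    objective=lambda Z, X=None: Z[..., 0],
    constraints=constraints,
    eta=torch.tensor([1e-3, 1e-2]),  # one eta per constraint
)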

In addition, I added tests. For qNEHVI, one test is failing, namely when one uses an eta of 0.1 or 1: it then shows strange behavior. For eta=1, it even results in values larger than in the unconstrained case, which should not happen. Maybe one of you can have a look, as I do not understand what is going on. The test is failing here: https://github.com/jduerholt/botorch/blob/bb5a56088759302041e7c818f4efe33559906c36/test/acquisition/multi_objective/test_monte_carlo.py#L1301

Any ideas?

@saitcakmak (Contributor) left a comment

I'll take a look at the test failure and follow up.

@facebook-github-bot (Contributor)

@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

jduerholt and others added 7 commits November 30, 2022 07:56
@jduerholt (Contributor, Author)

Hi @saitcakmak, thank you very much for your suggestions and especially for your explanation regarding the baseline partitioning for qNEHVI! I updated the tests accordingly; just let me know if more is needed.

@saitcakmak (Contributor) left a comment

Thanks for making all the changes and thorough testing. LGTM!

@facebook-github-bot (Contributor)

@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Comment on lines +1241 to +1247
constraints = [
    [lambda Z: torch.ones_like(Z[..., -1])],
    [
        lambda Z: torch.ones_like(Z[..., -1]),
        lambda Z: torch.ones_like(Z[..., -1]),
    ],
]
@Balandat (Contributor) left a comment

Could make this a bit cleaner, but it's not really necessary:

Suggested change

constraints = [
    [lambda Z: torch.ones_like(Z[..., -1])],
    [
        lambda Z: torch.ones_like(Z[..., -1]),
        lambda Z: torch.ones_like(Z[..., -1]),
    ],
]

→

def con_func(Z):
    return torch.ones_like(Z[..., -1])

constraints = [[con_func], [con_func, con_func]]

@jduerholt (Contributor, Author) left a comment

Hi @Balandat, if you want, I can change it accordingly. Is this still possible, given that the PR has already been imported to Phabricator?

@Balandat (Contributor) left a comment

Yeah, we can always reimport (though not anymore for this one, because it already got merged). So no worries for this one, as it was really a minor nit.

@facebook-github-bot (Contributor)

@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
