make eta configurable #1526
Conversation
Codecov Report
```
@@           Coverage Diff            @@
##              main     #1526  +/-  ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files          149       149
  Lines        12954     12962     +8
=========================================
+ Hits         12954     12962     +8
```
Thanks for putting this together. I think being backwards compatible would be preferable, so let's accept both `Tensor` & `float`. I left a few comments inline. This should be good to go if you update the docstrings (to explain what the tensor version is) and add unit tests.
Thanks for the quick review, I will go through the codebase and change it accordingly. Best, Johannes
Co-authored-by: Sait Cakmak <saitcakmak@outlook.com>
Another thing which I just recognized: I think the docstring for `soft_eval_constraint` is wrong. Consider this example:

```python
from botorch.utils.objective import soft_eval_constraint
import matplotlib.pyplot as plt
import numpy as np
import torch

x = np.linspace(-10, 10, 500)
xt = torch.from_numpy(x).unsqueeze(-1)

fig, ax = plt.subplots()
y = soft_eval_constraint(xt, eta=1e-3)
ax.plot(xt, y, label="eta = 1e-3")
y = soft_eval_constraint(xt, eta=1e-2)
ax.plot(xt, y, label="eta = 1e-2")
y = soft_eval_constraint(xt, eta=1e-1)
ax.plot(xt, y, label="eta = 1e-1")
y = soft_eval_constraint(xt, eta=1)
ax.plot(xt, y, label="eta = 1")
ax.legend()
plt.show()
```

This gives the following output: [plot of the four sigmoid curves, not preserved in this scrape] Or do I understand something wrong?
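The curves in that plot can also be checked numerically without matplotlib. This is a minimal sketch, assuming `soft_eval_constraint` has the sigmoid form `sigmoid(-lhs / eta)` (the function name `soft_eval` here is illustrative, not botorch's API):

```python
import torch

# Assumed functional form: a sigmoid relaxation of the feasibility
# indicator "lhs <= 0". Smaller eta -> sharper step around zero.
def soft_eval(lhs, eta=1e-3):
    return torch.sigmoid(-lhs / eta)

at_zero = soft_eval(torch.tensor(0.0)).item()          # always 0.5 at the boundary
flat = soft_eval(torch.tensor(1.0), eta=1.0).item()    # gentle slope, well above 0
sharp = soft_eval(torch.tensor(1.0), eta=1e-3).item()  # near-hard step, essentially 0
```

This matches the plotted behavior: small `eta` approximates a hard indicator, while `eta = 1` smears the transition across the whole x-range.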
You're right, the docstring is wrong! Would you mind patching it as part of this PR?
I just fixed it. Internally we call the inverse of the
I added the possibility for different `eta` values per constraint. In addition, I added tests; for qNEHVI one test is failing, namely if one uses an `eta` of 0.1 or 1. Then it shows strange behavior: for `eta=1`, it even results in values larger than with the unconstrained one, which should not happen. Maybe one of you can have a look, as I do not understand what is going on. The test is failing here: https://github.com/jduerholt/botorch/blob/bb5a56088759302041e7c818f4efe33559906c36/test/acquisition/multi_objective/test_monte_carlo.py#L1301 Any ideas?
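The mechanism under discussion, multiplying the objective by one soft constraint indicator per constraint, each with its own `eta`, can be sketched independently of botorch's internals. The helper names below are illustrative, not botorch's actual API:

```python
import torch

def soft_indicator(lhs, eta):
    # sigmoid relaxation of the feasibility indicator "lhs <= 0"
    return torch.sigmoid(-lhs / eta)

def weighted_objective(obj, constraint_values, etas):
    # multiply the objective by one soft indicator per constraint,
    # each with its own eta (the feature this PR adds)
    weight = torch.ones_like(obj)
    for lhs, eta in zip(constraint_values, etas):
        weight = weight * soft_indicator(lhs, eta)
    return obj * weight

obj = torch.ones(3)
lhs = torch.tensor([-10.0, 0.0, 10.0])  # feasible, boundary, infeasible
out = weighted_objective(obj, [lhs], [1e-3])
```

Note that in this simplified sketch each sigmoid lies in (0, 1), so for a positive objective the weighted value can never exceed the unconstrained one; botorch's actual implementation also applies an infeasible-cost shift, so the acquisition values reported in the failing test are not directly comparable to this sketch.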
I'll take a look at the test failure and follow up.
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Hi @saitcakmak, thank you very much for your suggestions and especially your explanation regarding the baseline partitioning for qNEHVI! I updated the tests accordingly; just tell me if more is needed.
Thanks for making all the changes and thorough testing. LGTM!
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
```python
constraints = [
    [lambda Z: torch.ones_like(Z[..., -1])],
    [
        lambda Z: torch.ones_like(Z[..., -1]),
        lambda Z: torch.ones_like(Z[..., -1]),
    ],
]
```
Could make this a bit cleaner, but not really necessary
Suggested change:

```python
def con_func(Z):
    return torch.ones_like(Z[..., -1])

constraints = [[con_func], [con_func, con_func]]
```
Hi @Balandat, if you want I can change it accordingly. Is this still possible, as the PR is already imported to Phabricator?
Yeah, we can always reimport (though not anymore for this one because it already got merged). So no worries here, as this was really a minor nit.
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Motivation
I recently looked into the output constraint implementation in botorch and figured out that it behaves like our custom `Objective` implementation for handling constraints, namely by multiplying by sigmoids. Currently, the only difference is that we often work with different `eta` values per constraint. I think this would be a nice feature for `botorch` too.
.This PR is still work in progress, as the
apply_constraints
method is used at a lot of different occasions though-out the codebase, and my question is if one want to keep backwards compatibility. In the current PR, I kept the backwards compatibility and madeeta
of typeUnion[float, torch.Tensor]
. If one does this one has to always catch if just a float is provided and transform the float in a tensor of the same length as the list of constraint callables.Another option would be to set
eta
as optional with defaultNone
and then just generate a tensor with the old default of 10e-3.Which solution would you prefer?
Depending on your suggestion, I would finalize the PR and implement the functionality of different `eta` values per constraint throughout the whole codebase.

Have you read the [Contributing Guidelines on pull requests]?
Yes.
Test Plan
Unit tests.