fix chebyshev scalarization #1616
Conversation
This pull request was exported from Phabricator. Differential Revision: D42373368
Review comment on the docstring (suggested change):

Before:
    Outcomes are multiplied by -1, then normalized to [0,1] for maximization (or [-1,0] for minimization)
    and then an augmented Chebyshev scalarization is applied.
    Since typically the augmented chebyshev scalarization is minimized, we multiply the resulting quantity by -1.

After:
    Outcomes are multiplied by -1 (since botorch by default assumes maximization
    of the underlying outcomes), then normalized to [0,1] for maximization (or [-1,0]
    for minimization) and then an augmented Chebyshev scalarization is applied.
    Since typically the augmented Chebyshev scalarization is minimized, we
    multiply the resulting quantity by -1.
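To make the transformation concrete, here is a minimal sketch of the pipeline the revised docstring describes, covering only the all-maximization case (normalization to [0,1]). The function name aug_chebyshev, the normalization code, and the default alpha are illustrative assumptions, not botorch's actual internals:

    import torch

    def aug_chebyshev(weights: torch.Tensor, Y: torch.Tensor, alpha: float = 0.05) -> torch.Tensor:
        # Flip signs: botorch assumes maximization of the raw outcomes,
        # while the augmented Chebyshev scalarization is conventionally
        # minimized.
        Y = -Y
        # Normalize each outcome to [0, 1] using the observed ranges.
        Y_min = Y.min(dim=0).values
        Y_max = Y.max(dim=0).values
        Y_norm = (Y - Y_min) / (Y_max - Y_min)
        # Augmented Chebyshev scalarization (minimization form): the max
        # term plus a small weighted-sum term, which rules out weakly
        # Pareto-optimal solutions.
        product = weights * Y_norm
        scalarized = product.max(dim=-1).values + alpha * product.sum(dim=-1)
        # Negate so the result can be maximized by a standard
        # single-objective acquisition function.
        return -scalarized

For an n x m tensor Y of observed outcomes and an m-dimensional weights tensor, this returns an n-dimensional tensor of scalarized values suitable as a single-objective target.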
@@ -61,6 +63,7 @@ def get_chebyshev_scalarization(
     >>> weights = torch.tensor([0.75, -0.25])
     >>> transform = get_aug_chebyshev_scalarization(weights, Y)
     """
+    Y = -Y
Add a comment here? As well as below in the obj definition for clarity?
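For illustration, the requested comment might read something like the following; this is a sketch only, and the exact wording in the merged commit may differ:

    # Multiply the outcomes by -1: botorch assumes maximization of the
    # raw outcomes, while the augmented Chebyshev scalarization below
    # follows the minimization convention.
    Y = -Y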
This pull request was exported from Phabricator. Differential Revision: D42373368
Force-pushed from 90997db to ee4eb71.
Summary: Pull Request resolved: pytorch#1616 See pytorch#1614 Differential Revision: D42373368 fbshipit-source-id: 5b705efadd3c1259e8f7fd73964c882f40ae5150
Codecov Report
@@            Coverage Diff            @@
##              main     #1616   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files          169       169
  Lines        14518     14523     +5
=========================================
+ Hits         14518     14523     +5
Summary: Pull Request resolved: pytorch#1616 See pytorch#1614 Differential Revision: D42373368 fbshipit-source-id: 828dee51734cc54bf3f01c1876af05f9ae8b7201
Force-pushed from ee4eb71 to 6f99023.
This pull request was exported from Phabricator. Differential Revision: D42373368
Summary: Pull Request resolved: pytorch#1616 See pytorch#1614 Differential Revision: D42373368 fbshipit-source-id: 8a297e8fdfe3e237397a62acca556219ccd5c959
Force-pushed from 6f99023 to 36e70ea.
Summary: Pull Request resolved: pytorch#1616 See pytorch#1614 Reviewed By: Balandat Differential Revision: D42373368 fbshipit-source-id: eace05709a824f16a6dfdac0e13d906cc5f8dfd1
Force-pushed from 36e70ea to 3c5b1fd.
This pull request was exported from Phabricator. Differential Revision: D42373368
This pull request has been merged in 3e18d2a.
Summary: See #1614
Differential Revision: D42373368