Move consolidate_duplicates to BoTorch and consolidate duplicates in PairwiseGP #1754
Conversation
This pull request was exported from Phabricator. Differential Revision: D44126864
…PairwiseGP (facebook#1536)

Summary:
Pull Request resolved: facebook#1536
X-link: pytorch/botorch#1754

# Context

One problem for GP models is that evaluating points that are close together is likely to trigger numerical issues resulting from a non-PSD covariance matrix. The problem is particularly pronounced, and hard to bypass, when doing optimization (either BOPE or preferential BO), as we need to repeatedly compare points to the incumbent. To improve preference-learning stability, we can automatically consolidate identical (or numerically similar) points into a single point. For example, the training data `datapoints = [[1, 2], [3, 4], [1, 2], [5, 6]]` with `comparisons = [[0, 1], [2, 3]]` will be turned into the consolidated `datapoints = [[1, 2], [3, 4], [5, 6]]` and `comparisons = [[0, 1], [0, 2]]`. This shouldn't lead to any change in model fitting, as the likelihood remains the same.

# Code changes

To implement this, the following changes are made:

- Upstreamed `consolidate_duplicates` and related helper functions from Ax to BoTorch.
- Implicitly replace `datapoints` and `comparisons` in `PairwiseGP` with their consolidated counterparts.
- Added `unconsolidated_datapoints`, `unconsolidated_comparisons`, and `unconsolidated_utility` in case the user would like to access the original data and the corresponding utility directly from the model.

Reviewed By: Balandat

Differential Revision: D44126864

fbshipit-source-id: ded3409082b8e7efbae95b93ef04b035fcc7b63e
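The consolidation step described above can be sketched as follows. This is a minimal, list-based illustration of the idea (the actual BoTorch helper operates on torch tensors and supports tolerance-based matching); the function name and signature here are for illustration only:

```python
import math

def consolidate_duplicates_sketch(datapoints, comparisons, atol=1e-8):
    """Map numerically identical rows of `datapoints` onto a single
    representative and re-index `comparisons` accordingly.

    Sketch only -- the upstreamed BoTorch helper works on tensors.
    """
    def is_close(a, b):
        return all(math.isclose(x, y, abs_tol=atol) for x, y in zip(a, b))

    new_index = []    # new_index[i]: consolidated index of original row i
    unique_rows = []  # consolidated datapoints, in first-seen order
    for row in datapoints:
        for j, u in enumerate(unique_rows):
            if is_close(row, u):
                new_index.append(j)  # duplicate: reuse existing index
                break
        else:
            new_index.append(len(unique_rows))  # new unique point
            unique_rows.append(row)
    # Remap each comparison pair onto the consolidated indices.
    new_comparisons = [[new_index[a], new_index[b]] for a, b in comparisons]
    return unique_rows, new_comparisons
```

On the example from the summary, `consolidate_duplicates_sketch([[1, 2], [3, 4], [1, 2], [5, 6]], [[0, 1], [2, 3]])` yields `[[1, 2], [3, 4], [5, 6]]` and `[[0, 1], [0, 2]]`: the duplicate third point collapses onto index 0, so the likelihood of the comparisons is unchanged.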
Codecov Report
| Coverage Diff | main | #1754 | +/- |
|---|---|---|---|
| Coverage | 100.00% | 100.00% | |
| Files | 170 | 170 | |
| Lines | 14636 | 14695 | +59 |
| Hits | 14636 | 14695 | +59 |
…PairwiseGP (pytorch#1536) Summary: X-link: facebook/Ax#1536 Pull Request resolved: pytorch#1754 # Context One problem for GP models is that when evaluating points that are close, it is likely to trigger numerical issues resulted from non-PSD covariance matrix. The problem is particularly pronounced and hard to bypass when doing optimization (either BOPE or preferential BO) as we would need to repetitively compare points to the incumbent. To improve preference learning stability, we can automatically consolidate the same (or numerically similar points) into the same point. For example, with training data `datapoints = [[1, 2], [3, 4], [1, 2], [5, 6]]` and `comparisons = [[0, 1], [2, 3]]` with be turned into the consolidated `datapoints = [[1, 2], [3, 4], [5, 6]]` and `comparisons = [[0, 1], [0, 2]]`. This shouldn't lead to any changes model fitting as the likelihood remains the same. # Code changes To implement this, following changes are made - Upstreamed the `consolidate_duplicates` and related helper functions from `Ax` to `Botorch`. - Implicitly replace `datapoint` and `comparisons` in `PairwiseGP` with the consolidated ones. - Added `unconsolidated_datapoints`, `unconsolidated_comparisons`, and `unconsolidated_utility` in case the user would like to access the original data and the corresponding utility directly from the model. Differential Revision: https://internalfb.com/D44126864 fbshipit-source-id: 5aed62d26f314b4a4a9c66e6e3d998ed44109dc7
This pull request has been merged in 9fd153a. |
Summary:

Context

One problem for GP models is that evaluating points that are close together is likely to trigger numerical issues resulting from a non-PSD covariance matrix. The problem is particularly pronounced and hard to bypass when doing optimization (either BOPE or preferential BO), as we need to repeatedly compare points against the incumbent.

To improve preference learning stability, we can automatically consolidate identical (or numerically similar) points into a single point. For example, training data `datapoints = [[1, 2], [3, 4], [1, 2], [5, 6]]` with `comparisons = [[0, 1], [2, 3]]` will be turned into the consolidated `datapoints = [[1, 2], [3, 4], [5, 6]]` and `comparisons = [[0, 1], [0, 2]]`. This shouldn't lead to any change in model fitting, as the likelihood remains the same.

Code changes

To implement this, the following changes are made:

- Upstreamed `consolidate_duplicates` and related helper functions from `Ax` to `BoTorch`.
- Implicitly replace `datapoints` and `comparisons` in `PairwiseGP` with the consolidated ones.
- Added `unconsolidated_datapoints`, `unconsolidated_comparisons`, and `unconsolidated_utility` in case the user would like to access the original data and the corresponding utility directly from the model.

Differential Revision: D44126864
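The index remapping described above can be sketched as follows. This is a minimal illustration of the idea using `torch.unique` for exact duplicates only; it is not the actual BoTorch implementation, which also handles numerically similar (near-duplicate) points via a tolerance. Note that `torch.unique(dim=0)` returns rows in sorted order, which happens to coincide with first-occurrence order in this example.

```python
import torch

def consolidate_exact_duplicates(datapoints: torch.Tensor, comparisons: torch.Tensor):
    # Collapse exact-duplicate rows; `inverse` maps each original row index
    # to that row's position in the deduplicated tensor.
    unique_pts, inverse = torch.unique(datapoints, dim=0, return_inverse=True)
    # Re-express each (winner, loser) comparison pair in consolidated indices,
    # so duplicate rows now refer to the same datapoint.
    return unique_pts, inverse[comparisons]

datapoints = torch.tensor([[1.0, 2.0], [3.0, 4.0], [1.0, 2.0], [5.0, 6.0]])
comparisons = torch.tensor([[0, 1], [2, 3]])
pts, comps = consolidate_exact_duplicates(datapoints, comparisons)
# pts   -> [[1., 2.], [3., 4.], [5., 6.]]
# comps -> [[0, 1], [0, 2]]
```

Because the likelihood of a pairwise comparison depends only on the utilities of the two compared points, collapsing duplicate rows and remapping the comparison indices leaves the model's likelihood unchanged while avoiding near-singular covariance blocks.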