Assorted PairwiseGP stability improvements #1755

Closed
wants to merge 1 commit

Conversation

ItsMrLin (Contributor)

Summary:
Main changes include:

  • Prior update: updated the prior for a better model fit and better numerical stability.
  • Utility heuristic initialization: previously we initialized the latent utility (i.e., the latent function values) randomly, which could lead to extreme likelihood values and unnecessarily long optimization times. We now initialize the utility weights with a heuristic based on comparison-winning counts (see the first sketch below).
  • Ensuring covariance is PSD: despite the numerical instability of working on the logit/probit scale, at a minimum the covariance between training data points should be PSD by definition (e.g., when using a scaled RBF kernel). If this assumption does not hold, the accumulated error leads to many undesirable consequences downstream. To resolve this, we check the covariance matrices and add jitter to guarantee their PSD-ness (see the second sketch below).
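
As a concrete illustration of the initialization heuristic, here is a minimal sketch in PyTorch. The helper name, the (winner, loser) comparison layout, and the standardization step are assumptions for illustration, not the exact PairwiseGP implementation:

```python
import torch

def init_utility_from_win_counts(comparisons: torch.Tensor, n: int) -> torch.Tensor:
    # Hypothetical helper: `comparisons` has shape (m, 2), with each row
    # (winner, loser) listing the preferred point first; `n` is the number
    # of training points.
    wins = torch.zeros(n)
    # Count how many comparisons each point has won.
    wins.scatter_add_(0, comparisons[:, 0], torch.ones(comparisons.shape[0]))
    # Standardize so the initial utilities sit on a scale comparable to a
    # zero-mean GP prior (illustrative scaling, not the exact one used).
    return (wins - wins.mean()) / (wins.std() + 1e-8)
```

For example, with comparisons = torch.tensor([[0, 1], [0, 2], [1, 2]]) and n = 3, point 0 (two wins) gets the largest initial utility and point 2 (no wins) the smallest.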
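Similarly, a minimal sketch of the jitter-based PSD safeguard, assuming a simple retry loop with a growing diagonal jitter (the actual code path may instead rely on gpytorch's Cholesky utilities):

```python
import torch

def ensure_psd(covar: torch.Tensor, max_tries: int = 5) -> torch.Tensor:
    # Hypothetical helper: verify PSD-ness via a Cholesky factorization
    # and, on failure, add increasing diagonal jitter until it succeeds.
    jitter = 1e-8 * covar.diagonal(dim1=-2, dim2=-1).mean()
    eye = torch.eye(covar.shape[-1], dtype=covar.dtype, device=covar.device)
    for _ in range(max_tries):
        try:
            torch.linalg.cholesky(covar)  # raises if not positive definite
            return covar
        except RuntimeError:
            covar = covar + jitter * eye
            jitter = jitter * 10.0
    raise RuntimeError("covariance could not be made PSD by adding jitter")
```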

Differential Revision: D44137937

@facebook-github-bot added the CLA Signed, "Do not delete this pull request or issue due to inactivity.", and fb-exported labels on Mar 20, 2023
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D44137937

ItsMrLin added a commit to ItsMrLin/botorch that referenced this pull request Mar 22, 2023

codecov bot commented Mar 22, 2023

Codecov Report

Merging #1755 (464137b) into main (9fd153a) will not change coverage.
The diff coverage is 100.00%.

❗ Current head 464137b differs from the pull request's most recent head 1aab762. Consider uploading reports for commit 1aab762 to get more accurate results.

@@            Coverage Diff            @@
##              main     #1755   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files          170       170           
  Lines        14695     14717   +22     
=========================================
+ Hits         14695     14717   +22     
Impacted Files Coverage Δ
botorch/models/pairwise_gp.py 100.00% <100.00%> (ø)

ItsMrLin added a commit to ItsMrLin/botorch that referenced this pull request Mar 22, 2023

Reviewed By: Ryan-Rhys

Differential Revision: D44137937

@facebook-github-bot (Contributor)

This pull request has been merged in f596e71.
