
Expose timeout option in higher-level optimization wrappers #1598

Closed
wants to merge 1 commit

Conversation

Balandat (Contributor)

Summary: Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower level, we can also expose it in the higher-level optimization wrappers.

Differential Revision: D42254406
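
For context, a minimal usage sketch of what the newly exposed option might look like from the caller's side. The keyword name `timeout_sec` and the toy model/acquisition setup below are assumptions for illustration, not taken verbatim from the diff (the actual signature is defined in the changed files such as `botorch/optim/optimize.py`):

```python
# Hedged sketch: assumes the timeout is exposed as a `timeout_sec` keyword on
# optimize_acqf; the model and acquisition setup are illustrative only.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy training data.
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = (train_X.sum(dim=-1, keepdim=True) - 1.0) ** 2

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

acqf = ExpectedImprovement(model=model, best_f=train_Y.max())
candidate, value = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=1,
    num_restarts=8,
    raw_samples=64,
    timeout_sec=5.0,  # assumed name of the newly exposed timeout option
)
```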

@facebook-github-bot added the CLA Signed and fb-exported labels on Dec 28, 2022
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D42254406

Balandat added a commit to Balandat/botorch that referenced this pull request Dec 28, 2022
…1598)

Summary:
Pull Request resolved: pytorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Differential Revision: D42254406

fbshipit-source-id: 09680b257172280f991460933c6e76090622b320
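
The lower-level mechanism the summary refers to can be pictured as a wall-clock budget enforced through the `scipy.optimize.minimize` callback. The following is a self-contained sketch of that general pattern, not BoTorch's actual implementation; the function and exception names are made up for illustration:

```python
# Sketch of timing out scipy.optimize.minimize via its per-iteration callback.
import time

import numpy as np
from scipy.optimize import minimize


class _OptimizationTimeout(Exception):
    """Raised from the callback once the wall-clock budget is exhausted."""

    def __init__(self, current_x: np.ndarray) -> None:
        self.current_x = current_x


def minimize_with_deadline(fun, x0, timeout_sec: float, **kwargs) -> np.ndarray:
    """Run scipy.optimize.minimize, but stop after roughly `timeout_sec` seconds."""
    start = time.monotonic()

    def callback(xk):
        # Called once per iteration; abort by raising if the budget is spent.
        if time.monotonic() - start > timeout_sec:
            raise _OptimizationTimeout(np.asarray(xk, dtype=float).copy())

    try:
        return minimize(fun, x0, callback=callback, **kwargs).x
    except _OptimizationTimeout as e:
        # Fall back to the last iterate seen before the timeout fired.
        return e.current_x


# Example: a deliberately slow objective that cannot finish within the budget.
def slow_quadratic(x: np.ndarray) -> float:
    time.sleep(0.05)
    return float(np.sum(x ** 2))


x_best = minimize_with_deadline(slow_quadratic, np.ones(10), timeout_sec=0.2, method="L-BFGS-B")
```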
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D42254406

Balandat added a commit to Balandat/botorch that referenced this pull request Dec 28, 2022
…1598)

Summary:
Pull Request resolved: pytorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Differential Revision: D42254406

fbshipit-source-id: bc2266f17c39f9f1783fecbe736f68d0abc5d65d
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D42254406


codecov bot commented Dec 28, 2022

Codecov Report

Merging #1598 (4e79434) into main (d23c492) will not change coverage.
The diff coverage is 100.00%.

@@            Coverage Diff            @@
##              main     #1598   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files          154       154           
  Lines        13754     13775   +21     
=========================================
+ Hits         13754     13775   +21     
Impacted Files Coverage Δ
botorch/optim/fit.py 100.00% <ø> (ø)
botorch/fit.py 100.00% <100.00%> (ø)
botorch/optim/core.py 100.00% <100.00%> (ø)
botorch/optim/optimize.py 100.00% <100.00%> (ø)


Balandat added a commit to Balandat/botorch that referenced this pull request Dec 29, 2022
…1598)

Summary:
Pull Request resolved: pytorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Differential Revision: D42254406

fbshipit-source-id: 7ad81dde78fc6ef2dd1af7de44c63664d5f063b8
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D42254406

Balandat added a commit to Balandat/botorch that referenced this pull request Dec 29, 2022
…1598)

Summary:
Pull Request resolved: pytorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Differential Revision: D42254406

fbshipit-source-id: 24521e214e9548dbfea43320a20e56ec32a5b76b
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D42254406

Balandat added a commit to Balandat/botorch that referenced this pull request Dec 29, 2022
…1598)

Summary:
Pull Request resolved: pytorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Differential Revision: D42254406

fbshipit-source-id: ff2d48549917cddb1a737f3be90186667fbc1476
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D42254406


Balandat added a commit to Balandat/botorch that referenced this pull request Dec 30, 2022
…1598)

Summary:
Pull Request resolved: pytorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Reviewed By: esantorella

Differential Revision: D42254406

fbshipit-source-id: aeaac9cee7fb52ecda1c11df123cd3419476a48c
Balandat added a commit to Balandat/Ax that referenced this pull request Dec 30, 2022
…#1598)

Summary:
X-link: pytorch/botorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Reviewed By: esantorella

Differential Revision: D42254406

fbshipit-source-id: 544dbc64abae0adf8693b08ffdb060c7ee7a895e
Balandat added a commit to Balandat/botorch that referenced this pull request Dec 30, 2022
…1598)

Summary:
Pull Request resolved: pytorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Reviewed By: esantorella

Differential Revision: D42254406

fbshipit-source-id: a930515861a75d4f4f67202e0d52d83eb92e400b
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D42254406

…1353)

Summary:
X-link: facebook/Ax#1353

Pull Request resolved: pytorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Reviewed By: esantorella

Differential Revision: D42254406

fbshipit-source-id: 251c66a1e57f378e952b103300c1597590577bb0
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D42254406

Balandat added a commit to Balandat/Ax that referenced this pull request Dec 30, 2022
…#1353)

Summary:
Pull Request resolved: facebook#1353

X-link: pytorch/botorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Reviewed By: esantorella

Differential Revision: D42254406

fbshipit-source-id: 75dc6e29796a8d94dcf639ca3b30220a23121341
@facebook-github-bot (Contributor)

This pull request has been merged in feed9f7.

facebook-github-bot pushed a commit to facebook/Ax that referenced this pull request Dec 30, 2022
Summary:
Pull Request resolved: #1353

X-link: pytorch/botorch#1598

Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers.

Reviewed By: esantorella

Differential Revision: D42254406

fbshipit-source-id: 740d69a965c9eb373bb22e9c8a7213a13abb9dcc
@Balandat deleted the export-D42254406 branch on January 4, 2023, 00:24
esantorella added a commit to esantorella/botorch that referenced this pull request Apr 26, 2023
…pytorch#1598)

Summary:
X-link: facebook/Ax#1598

Pull Request resolved: pytorch#1808

`optimize_acqf` calls `_optimize_acqf_sequential_q` when `sequential=True` and `q > 1`. We had been doing input validation for `_optimize_acqf_sequential` even when it was not called, in the `q=1` case. This unnecessary check became a problem when `sequential=True` became a default for MBM in facebook/Ax#1585 , breaking Ax benchmarks.

This PR moves checks for sequential optimization to `_optimize_acqf_sequential_q`, so they will only happen if `_optimize_acqf_sequential_q` is called.

Reviewed By: saitcakmak

Differential Revision: D45324522

fbshipit-source-id: e5ea7e348eb0ae479e0345ec5254ba5a71642b2a
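
A structural sketch of the refactor described above: sequential-only validation moves inside `_optimize_acqf_sequential_q`, so it can no longer fire on a `q=1` call that never takes that path. The placeholder "optimizer" and the specific check shown are hypothetical stand-ins for illustration, not the actual BoTorch code:

```python
from typing import List, Tuple

Bounds = List[Tuple[float, float]]


def _optimize_one(bounds: Bounds) -> List[float]:
    # Placeholder single-candidate "optimizer": just returns each bound's midpoint.
    return [(lo + hi) / 2 for lo, hi in bounds]


def _optimize_acqf_sequential_q(bounds: Bounds, q: int, return_best_only: bool = True) -> List[List[float]]:
    # After the refactor, sequential-only validation lives on the sequential path.
    if not return_best_only:  # hypothetical check, for illustration only
        raise NotImplementedError("Sequential optimization requires return_best_only=True.")
    return [_optimize_one(bounds) for _ in range(q)]


def optimize_acqf(bounds: Bounds, q: int = 1, sequential: bool = False, return_best_only: bool = True):
    # Previously the sequential-only validation also ran here, even when q == 1 and
    # the sequential helper was never called; now the q == 1 case cannot trip it.
    if sequential and q > 1:
        return _optimize_acqf_sequential_q(bounds, q, return_best_only=return_best_only)
    return [_optimize_one(bounds) for _ in range(q)]


# q=1 with sequential=True no longer hits the sequential-only check:
print(optimize_acqf(bounds=[(0.0, 1.0), (0.0, 1.0)], q=1, sequential=True, return_best_only=False))
```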
esantorella added a commit to esantorella/botorch that referenced this pull request Apr 26, 2023
…pytorch#1598)

Summary:
X-link: facebook/Ax#1598

Pull Request resolved: pytorch#1808

`optimize_acqf` calls `_optimize_acqf_sequential_q` when `sequential=True` and `q > 1`. We had been doing input validation for `_optimize_acqf_sequential` even when it was not called, in the `q=1` case. This unnecessary check became a problem when `sequential=True` became a default for MBM in facebook/Ax#1585 , breaking Ax benchmarks.

This PR moves checks for sequential optimization to `_optimize_acqf_sequential_q`, so they will only happen if `_optimize_acqf_sequential_q` is called.

Reviewed By: saitcakmak

Differential Revision: D45324522

fbshipit-source-id: e9114843acd122c905bd72dde037ba918348549e
facebook-github-bot pushed a commit that referenced this pull request Apr 27, 2023
…#1598)

Summary:
X-link: facebook/Ax#1598

Pull Request resolved: #1808

`optimize_acqf` calls `_optimize_acqf_sequential_q` when `sequential=True` and `q > 1`. We had been doing input validation for `_optimize_acqf_sequential` even when it was not called, in the `q=1` case. This unnecessary check became a problem when `sequential=True` became a default for MBM in facebook/Ax#1585 , breaking Ax benchmarks.

This PR moves checks for sequential optimization to `_optimize_acqf_sequential_q`, so they will only happen if `_optimize_acqf_sequential_q` is called.

Reviewed By: saitcakmak

Differential Revision: D45324522

fbshipit-source-id: 1757abddcc1e3480c687800605b972c7ae603f8b
Labels
CLA Signed, fb-exported, Merged