Expose timeout option in higher-level optimization wrappers #1598
Conversation
This pull request was exported from Phabricator. Differential Revision: D42254406
…1598) Summary: Pull Request resolved: pytorch#1598 Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower level, we can also expose it in the higher-level optimization wrappers. Differential Revision: D42254406 fbshipit-source-id: 09680b257172280f991460933c6e76090622b320
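For illustration, a hedged usage sketch of what this exposes at the wrapper level. It assumes the higher-level entry point is BoTorch's `optimize_acqf` and that the budget is forwarded via a `timeout_sec` keyword; the keyword name and the surrounding model-fitting code are assumptions for this example, not quotes of the merged API.

```python
# Hedged usage sketch (not the merged code): assumes optimize_acqf forwards a
# `timeout_sec` keyword down to the scipy.optimize.minimize-based optimizer.
import torch
from botorch.acquisition import ExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy data on the unit square; ExpectedImprovement assumes maximization.
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = -(train_X - 0.5).pow(2).sum(dim=-1, keepdim=True)

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
acqf = ExpectedImprovement(model, best_f=train_Y.max())

candidate, value = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=1,
    num_restarts=4,
    raw_samples=64,
    timeout_sec=5.0,  # assumed keyword: wall-clock budget for the scipy run
)
```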
Codecov Report
@@            Coverage Diff            @@
##              main     #1598   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files          154       154
  Lines        13754     13775    +21
=========================================
+ Hits         13754     13775    +21
This pull request has been merged in feed9f7.
Summary: Pull Request resolved: #1353 X-link: pytorch/botorch#1598 Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower-level, we can expose it also in the higher-level optimization wrappers. Reviewed By: esantorella Differential Revision: D42254406 fbshipit-source-id: 740d69a965c9eb373bb22e9c8a7213a13abb9dcc
…#1598) Summary: X-link: facebook/Ax#1598 Pull Request resolved: #1808 `optimize_acqf` calls `_optimize_acqf_sequential_q` when `sequential=True` and `q > 1`. We had been doing input validation for `_optimize_acqf_sequential` even when it was not called, in the `q=1` case. This unnecessary check became a problem when `sequential=True` became a default for MBM in facebook/Ax#1585, breaking Ax benchmarks. This PR moves checks for sequential optimization to `_optimize_acqf_sequential_q`, so they will only happen if `_optimize_acqf_sequential_q` is called. Reviewed By: saitcakmak Differential Revision: D45324522 fbshipit-source-id: 1757abddcc1e3480c687800605b972c7ae603f8b
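To make that control-flow change concrete, here is a minimal sketch of the behavior the commit describes. The top two function names mirror those in the message, but the bodies, the joint-path helper, and the specific validation check are placeholders, not BoTorch's actual implementation.

```python
# Minimal sketch of the described control flow; placeholder bodies and checks.


def optimize_acqf(acq_function, bounds, q, sequential=False, **kwargs):
    if sequential and q > 1:
        # Only this path needs the sequential-only preconditions, so the
        # validation now lives inside the helper itself.
        return _optimize_acqf_sequential_q(acq_function, bounds, q, **kwargs)
    # q == 1 (or joint optimization): no sequential-specific validation runs,
    # even if the caller's default is sequential=True.
    return _optimize_acqf_joint(acq_function, bounds, q, **kwargs)


def _optimize_acqf_sequential_q(acq_function, bounds, q, **kwargs):
    # Validation moved here from the top of optimize_acqf (placeholder check).
    if kwargs.get("unsupported_sequential_option"):
        raise ValueError("This option is incompatible with sequential optimization.")
    candidates = []
    for _ in range(q):
        # Optimize one candidate at a time, conditioning on earlier picks
        # (details omitted in this sketch).
        candidates.extend(_optimize_acqf_joint(acq_function, bounds, 1, **kwargs))
    return candidates


def _optimize_acqf_joint(acq_function, bounds, q, **kwargs):
    # Hypothetical joint path: return the midpoint of the bounds for each point.
    lower, upper = bounds
    return [[(lo + hi) / 2 for lo, hi in zip(lower, upper)] for _ in range(q)]


# q == 1 never touches the sequential-only validation, even with sequential=True:
print(optimize_acqf(None, bounds=([0.0, 0.0], [1.0, 1.0]), q=1, sequential=True))
```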
Summary: Now that we have the ability to time out the optimization in `scipy.optimize.minimize` at a lower level, we can also expose it in the higher-level optimization wrappers. Differential Revision: D42254406
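As context for the "lower-level" ability mentioned above, the sketch below shows one standard way to enforce a wall-clock budget on `scipy.optimize.minimize`: raise from its per-iteration callback and return the last iterate. It illustrates the general technique only; the helper name and the return handling are assumptions, not BoTorch's exact implementation.

```python
# Sketch of a callback-based wall-clock timeout for scipy.optimize.minimize.
import time

import numpy as np
from scipy.optimize import OptimizeResult, minimize


class _TimeoutReached(Exception):
    def __init__(self, x):
        self.x = x  # last iterate observed before the budget ran out


def minimize_with_timeout(fun, x0, timeout_sec, **kwargs):
    start = time.monotonic()

    def _callback(xk, *_):  # extra args cover methods like trust-constr
        if time.monotonic() - start > timeout_sec:
            raise _TimeoutReached(np.asarray(xk, dtype=float))

    try:
        return minimize(fun, x0, callback=_callback, **kwargs)
    except _TimeoutReached as e:
        # Surface the best-effort iterate instead of letting the exception escape.
        return OptimizeResult(x=e.x, fun=fun(e.x), success=False, message="Timed out.")


# Example: a generous 10s budget on an easy quadratic (converges well before then).
res = minimize_with_timeout(lambda x: float(np.sum(x**2)), np.ones(5), 10.0, method="L-BFGS-B")
print(res.x)
```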