Update polytope sampling code and add thinning capability #2358
Conversation
@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Addresses the issues reported in facebook#2373 by using the updated `HitAndRunPolytopeSampler` from pytorch/botorch#2358 (this will require the change to land in a BoTorch nightly before tests can pass).
Force-pushed from 5bf2715 to d57fe11.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##             main    #2358   +/-   ##
=======================================
  Coverage   99.98%   99.98%
=======================================
  Files         189      189
  Lines       16548    16579    +31
=======================================
+ Hits        16545    16576    +31
  Misses          3        3

☔ View full report in Codecov by Sentry.
Force-pushed from d57fe11 to 5e1e92c.
Force-pushed from 1cd27b0 to af00b17.
Summary: This set of changes does the following:
* Adds an `n_thinning` argument to `sample_polytope` and `HitAndRunPolytopeSampler`; changes the defaults for the `HitAndRunPolytopeSampler` args to `n_burnin=200` and `n_thinning=20`.
* Changes `HitAndRunPolytopeSampler` to take the `seed` arg in its constructor and removes the arg from the `draw()` method (the method on the base class is adjusted accordingly). The resulting behavior is that if a `HitAndRunPolytopeSampler` is instantiated with the same args and seed, the sequence of `draw()`s will be deterministic. `DelaunayPolytopeSampler` is stateless and so retains its existing behavior.
* Normalizes the (inequality and equality) constraints in `HitAndRunPolytopeSampler` to avoid the same issue as pytorch#1225. If `bounds` are not provided, emits a warning that this cannot be performed (doing so would require vertex enumeration of the constraint polytope, which is NP-hard and too costly).
* Introduces `normalize_dense_linear_constraints` to normalize constraints given in dense format to the unit cube.
* Removes `normalize_linear_constraint`; `normalize_sparse_linear_constraints` is to be used instead.
* Simplifies some of the testing code.

Note: This change is in preparation for fixing facebook/Ax#2373

Test Plan: Ran a stress test to make sure this doesn't cause flaky tests: https://www.internalfb.com/intern/testinfra/testconsole/testrun/3940649908470083/

Differential Revision: D58068753
Pulled By: Balandat
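For readers of this PR, here is a minimal usage sketch of the updated sampler, assuming the constructor arguments described in the summary above (`n_burnin`, `n_thinning`, and constructor-level `seed`) and the existing `(A, b)` constraint format with `A x <= b`; the example polytope, tensor shapes, and argument values are illustrative, not taken from this PR.

```python
# Sketch of HitAndRunPolytopeSampler usage after this change (illustrative).
import torch

from botorch.utils.sampling import HitAndRunPolytopeSampler

# Example polytope in 3 dimensions: x_1 + x_2 + x_3 <= 1 together with box bounds.
A = torch.ones(1, 3)
b = torch.ones(1, 1)
bounds = torch.stack([torch.zeros(3), torch.ones(3)])

sampler = HitAndRunPolytopeSampler(
    inequality_constraints=(A, b),
    bounds=bounds,   # providing bounds lets the sampler normalize the constraints
    n_burnin=200,    # new default per the summary above
    n_thinning=20,   # new default per the summary above
    seed=1234,       # seed now lives on the sampler, not on draw()
)
samples = sampler.draw(n=8)  # tensor of shape (8, 3) with feasible points

# Per the summary, re-instantiating with the same args and seed reproduces the draws.
sampler2 = HitAndRunPolytopeSampler(
    inequality_constraints=(A, b),
    bounds=bounds,
    n_burnin=200,
    n_thinning=20,
    seed=1234,
)
assert torch.equal(samples, sampler2.draw(n=8))
```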
Force-pushed from af00b17 to 1d04d31.
This pull request was exported from Phabricator. Differential Revision: D58068753
Force-pushed from 1d04d31 to 49782f3.
… (facebook#2492)
Summary: Addresses the issues reported in facebook#2373 by using the updated `HitAndRunPolytopeSampler` from pytorch/botorch#2358 (this will require the change to land in a BoTorch nightly before tests can pass).
Differential Revision: D58070591
Pulled By: Balandat
@@ -367,7 +368,7 @@ def gen_batch_initial_conditions(
             q=q,
             bounds=bounds,
             n_burnin=options.get("n_burnin", 10000),
-            thinning=options.get("thinning", 32),
+            n_thinning=options.get("n_thinning", 32),
If someone passes in `thinning`, I am guessing it'll get silently ignored. Would something like this offer better backwards compatibility? Ideally we'd just error out if we got some unsupported option. cc @esantorella

Suggested change:
-            n_thinning=options.get("n_thinning", 32),
+            n_thinning=options.get("n_thinning", options.get("thinning", 32)),
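A hedged sketch of the kind of backwards-compatible lookup being discussed here; the helper name `_get_n_thinning` and the warning text are illustrative and not part of this PR.

```python
# Illustrative sketch (not the code in this PR) of resolving the renamed
# option without silently ignoring a stale "thinning" key.
import warnings


def _get_n_thinning(options: dict, default: int = 32) -> int:
    """Resolve the thinning option, preferring the new name over the old one."""
    if "thinning" in options and "n_thinning" not in options:
        warnings.warn(
            "The 'thinning' option has been renamed to 'n_thinning'; "
            "support for the old name may be removed in a future release.",
            DeprecationWarning,
            stacklevel=2,
        )
        return options["thinning"]
    return options.get("n_thinning", default)
```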
import warnings

from abc import ABC, abstractmethod

Suggested change:
import warnings
from abc import ABC, abstractmethod
Summary: Addresses the issues reported in #2373 by using the updated `HitAndRunPolytopeSampler` from pytorch/botorch#2358 (this will require the change to land in a BoTorch nightly before tests can pass).
Pull Request resolved: #2492
Reviewed By: saitcakmak
Differential Revision: D58070591
Pulled By: Balandat
fbshipit-source-id: f578674bf476ff677314cf4f71283c7ded660943