Update polytope sampling code and add thinning capability (pytorch#2358)
Summary:
This set of changes does the following:
* adds an `n_thinning` argument to `sample_polytope` and `HitAndRunPolytopeSampler`; changes the defaults for `HitAndRunPolytopeSampler` args to `n_burnin=200` and `n_thinning=20`
* changes `HitAndRunPolytopeSampler` to take the `seed` arg in its constructor and removes the arg from the `draw()` method (the method on the base class is adjusted accordingly). As a result, a `HitAndRunPolytopeSampler` instantiated with the same args and seed produces a deterministic sequence of `draw()`s; see the usage sketch after this list. `DelaunayPolytopeSampler` is stateless and so retains its existing behavior.
* normalizes the (inequality and equality) constraints in `HitAndRunPolytopeSampler` to avoid the same issue as pytorch#1225. If `bounds` are not provided, emits a warning that this normalization cannot be performed (doing so would require vertex enumeration of the constraint polytope, which is NP-hard and too costly).
* introduces `normalize_dense_linear_constraints` to normalize constraints given in dense format to the unit cube (see the normalization sketch below)
* removes `normalize_linear_constraint`; `normalize_sparse_linear_constraints` is to be used instead
* simplifies some of the testing code
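
A minimal usage sketch of the reworked sampler. The constraint values here are illustrative; the dense `(A, b)` constraint format and the `draw(n)` signature are assumptions based on the existing `PolytopeSampler` API:

```python
import torch
from botorch.utils.sampling import HitAndRunPolytopeSampler

# Polytope: x in [0, 1]^2 subject to x_0 + x_1 <= 1, in the dense
# `(A, b)` format encoding `A @ x <= b`.
A = torch.tensor([[1.0, 1.0]])
b = torch.tensor([[1.0]])
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# The seed is now passed to the constructor rather than to `draw()`.
sampler = HitAndRunPolytopeSampler(
    inequality_constraints=(A, b),
    bounds=bounds,  # providing bounds enables constraint normalization
    n_burnin=200,   # new default
    n_thinning=20,  # new default
    seed=1234,
)
X = sampler.draw(n=5)  # a `5 x 2` tensor of feasible points

# Re-instantiating with identical args and seed reproduces the sequence
# (the explicit values above match the new defaults).
sampler2 = HitAndRunPolytopeSampler(
    inequality_constraints=(A, b), bounds=bounds, seed=1234
)
assert torch.equal(X, sampler2.draw(n=5))
```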

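For reference, normalizing to the unit cube substitutes `x = lower + (upper - lower) * z` with `z` in `[0, 1]^d`, so `A @ x <= b` becomes `(A * (upper - lower)) @ z <= b - A @ lower`; the same transform applies to dense equality constraints. A hypothetical sketch of that transformation (the actual `normalize_dense_linear_constraints` helper may differ in signature and edge-case handling):

```python
from typing import Tuple
from torch import Tensor

def normalize_dense_constraints(
    A: Tensor, b: Tensor, bounds: Tensor
) -> Tuple[Tensor, Tensor]:
    """Hypothetical sketch: rewrite `A @ x <= b` (or `A @ x = b`) over the
    box `bounds` (a `2 x d` tensor) as a constraint on the normalized
    variable `z = (x - lower) / (upper - lower)` in the unit cube.
    """
    lower, upper = bounds[0], bounds[1]
    A_new = A * (upper - lower)          # scales column j by (u_j - l_j)
    b_new = b - A @ lower.unsqueeze(-1)  # absorbs `A @ lower` into the rhs
    return A_new, b_new
```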
Note: This change is in preparation for fixing facebook/Ax#2373


Test Plan: Ran a stress test to make sure this doesn't cause flaky tests: https://www.internalfb.com/intern/testinfra/testconsole/testrun/3940649908470083/

Differential Revision: D58068753

Pulled By: Balandat
Balandat authored and facebook-github-bot committed Jun 5, 2024
1 parent 8248b78 commit 1d04d31
Showing 4 changed files with 240 additions and 135 deletions.
botorch/optim/initializers.py: 6 additions & 6 deletions

@@ -180,7 +180,7 @@ def sample_q_batches_from_polytope(
     q: int,
     bounds: Tensor,
     n_burnin: int,
-    thinning: int,
+    n_thinning: int,
     seed: int,
     inequality_constraints: Optional[List[Tuple[Tensor, Tensor, float]]] = None,
     equality_constraints: Optional[List[Tuple[Tensor, Tensor, float]]] = None,
@@ -192,8 +192,8 @@ def sample_q_batches_from_polytope(
         q: Number of samples per q-batch
         bounds: A `2 x d` tensor of lower and upper bounds for each column of `X`.
         n_burnin: The number of burn-in samples for the Markov chain sampler.
-        thinning: The amount of thinning (number of steps to take between
-            returning samples).
+        n_thinning: The amount of thinning. The sampler will return every
+            `n_thinning` sample (after burn-in).
         seed: The random seed.
         inequality_constraints: A list of tuples (indices, coefficients, rhs),
             with each tuple encoding an inequality constraint of the form
@@ -225,7 +225,7 @@ def sample_q_batches_from_polytope(
             ),
             seed=seed,
             n_burnin=n_burnin,
-            thinning=thinning * q,
+            n_thinning=n_thinning * q,
         )
     else:
         samples = get_polytope_samples(
@@ -235,7 +235,7 @@ def sample_q_batches_from_polytope(
             equality_constraints=equality_constraints,
             seed=seed,
             n_burnin=n_burnin,
-            thinning=thinning,
+            n_thinning=n_thinning,
         )
     return samples.view(n, q, -1).cpu()

@@ -367,7 +367,7 @@ def gen_batch_initial_conditions(
                 q=q,
                 bounds=bounds,
                 n_burnin=options.get("n_burnin", 10000),
-                thinning=options.get("thinning", 32),
+                n_thinning=options.get("n_thinning", 32),
                 seed=seed,
                 equality_constraints=equality_constraints,
                 inequality_constraints=inequality_constraints,
(Diffs for the remaining 3 changed files are not shown here.)
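
Downstream call sites that pass sampler options should use the renamed key. A sketch of such a call, assuming `acqf`, `bounds`, and the constraint lists are already defined (parameter names follow `gen_batch_initial_conditions` as shown in the diff above):

```python
from botorch.optim.initializers import gen_batch_initial_conditions

# `acqf`, `bounds`, `inequality_constraints`, and `equality_constraints`
# are assumed to be defined elsewhere.
ics = gen_batch_initial_conditions(
    acq_function=acqf,
    bounds=bounds,
    q=2,
    num_restarts=10,
    raw_samples=64,
    # "thinning" was renamed; pass "n_thinning" going forward.
    options={"n_burnin": 10000, "n_thinning": 32},
    inequality_constraints=inequality_constraints,
    equality_constraints=equality_constraints,
)
```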
