Changelog for 0.7.3 (#1493)
Summary:
 ---

Pull Request resolved: #1493

Reviewed By: esantorella

Differential Revision: D41192689

Pulled By: saitcakmak

fbshipit-source-id: 2296e98d97e9faf110f64bbba9845fbbd6c93e51
saitcakmak authored and facebook-github-bot committed Nov 10, 2022
1 parent f76979d commit 7eada74
CHANGELOG.md: 35 additions & 0 deletions
@@ -2,6 +2,41 @@

The release log for BoTorch.

## [0.7.3] - Nov 10, 2022

### Highlights
* #1454 fixes a critical bug that affected multi-output `BatchedMultiOutputGPyTorchModel`s that were using a `Normalize` or `InputStandardize` input transform and trained using `fit_gpytorch_model/mll` with `sequential=True` (which was the default until 0.7.3). The input transform buffers would be reset after model training, leading to the model being trained on normalized input data but evaluated on raw inputs. This bug had been affecting model fits since the 0.6.5 release. A minimal sketch of the affected setup follows these highlights.
* #1479 changes the inheritance structure of `Model`s in a backwards-incompatible way. If your code relies on `isinstance` checks with BoTorch `Model`s, especially `SingleTaskGP`, you should revisit these checks to make sure they still work as expected; a sketch of such a check follows these highlights.
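
For the first highlight, a minimal sketch of the kind of setup that was affected, using hypothetical data and dimensions (two outcome columns put `SingleTaskGP` on the `BatchedMultiOutputGPyTorchModel` path):

```python
import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize
from gpytorch.mlls import ExactMarginalLogLikelihood

# Hypothetical raw (un-normalized) training data: 20 points in 3 dimensions.
train_X = 10.0 * torch.rand(20, 3, dtype=torch.double)
# Two outcome columns -> multi-output BatchedMultiOutputGPyTorchModel.
train_Y = torch.stack([train_X.sum(dim=-1), train_X.prod(dim=-1)], dim=-1)

model = SingleTaskGP(train_X, train_Y, input_transform=Normalize(d=3))
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# Before 0.7.3, the sequential multi-output fitting path could reset the
# Normalize buffers after training, so posteriors at raw inputs were silently
# off; with the fix, the evaluation below uses the fitted transform as expected.
posterior = model.posterior(10.0 * torch.rand(5, 3, dtype=torch.double))
```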
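
For the second highlight, a minimal sketch (with a hypothetical `describe_model` helper) of the kind of `isinstance` check worth re-verifying after #1479:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.gpytorch import GPyTorchModel


def describe_model(model) -> str:
    # Hypothetical downstream dispatch on model type; after the #1479
    # inheritance changes, confirm branches like these still fire as intended.
    if isinstance(model, SingleTaskGP):
        return "exact single-task GP"
    if isinstance(model, GPyTorchModel):
        return "other GPyTorch-based BoTorch model"
    return "non-GPyTorch model"


model = SingleTaskGP(
    torch.rand(8, 2, dtype=torch.double), torch.rand(8, 1, dtype=torch.double)
)
print(describe_model(model))  # expected: "exact single-task GP"
```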

#### Compatibility
* Require linear_operator == 0.2.0 (#1491).

#### New Features
* Introduce `bvn`, `MVNXPB`, `TruncatedMultivariateNormal`, and `UnifiedSkewNormal` classes / methods (#1394, #1408).
* Introduce `AffineInputTransform` (#1461).
* Introduce a `subset_transform` decorator to consolidate subsetting of inputs in input transforms (#1468).

#### Other Changes
* Add a warning when using float dtype (#1193).
* Let Pyre know that `AcquisitionFunction.model` is a `Model` (#1216).
* Remove custom `BlockDiagLazyTensor` logic when using `Standardize` (#1414).
* Expose `_aug_batch_shape` in `SaasFullyBayesianSingleTaskGP` (#1448).
* Adjust `PairwiseGP` `ScaleKernel` prior (#1460).
* Pull out the `fantasize` method into a `FantasizeMixin` class, so it isn't so widely inherited (#1462, #1479); a usage sketch follows this list.
* Don't use Pyro JIT by default, since it was causing a memory leak (#1474).
* Use `get_default_partitioning_alpha` for NEHVI input constructor (#1481).
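
For the `FantasizeMixin` change above, a minimal sketch, assuming the mixin is importable from `botorch.models.model`: generic code can check for fantasization support explicitly rather than assuming every `Model` has a `fantasize` method.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.model import FantasizeMixin

model = SingleTaskGP(
    torch.rand(8, 2, dtype=torch.double), torch.rand(8, 1, dtype=torch.double)
)

# With fantasize pulled out of the base Model class, an explicit capability
# check is the safer pattern for code that handles arbitrary models.
supports_fantasize = isinstance(model, FantasizeMixin)
print(f"supports fantasize: {supports_fantasize}")
```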

#### Bug Fixes
* Fix `batch_shape` property of `ModelListGPyTorchModel` (#1441).
* Tutorial fixes (#1446, #1475).
* Fix the Proximal acquisition function wrapper for negative-valued base acquisition functions (#1447).
* Handle `RuntimeError` due to constraint violation while sampling from priors (#1451).
* Fix bug in model list with output indices (#1453).
* Fix input transform bug when sequentially training a `BatchedMultiOutputGPyTorchModel` (#1454).
* Fix a bug in `_fit_multioutput_independent` that caused the MLL comparison to fail (#1455).
* Fix box decomposition behavior with empty or None `Y` (#1489).


## [0.7.2] - Sep 27, 2022

#### New Features