Releases: pymc-devs/pytensor
rel-2.25.4
What's Changed
Bugfixes 🐛
- Fix bug due to `__props__` in `OpFromGraph` subclasses by @ricardoV94 in #981
Full Changelog: rel-2.25.3...rel-2.25.4
rel-2.25.3
What's Changed
New Features 🎉
- Add `einsum` by @jessegrabowski in #722
- Implements shape Ops and MakeVector in PyTorch by @twaclaw in #926
- Implement Dot and BatchedDot in PyTensor by @HangenYuu in #878
- Implement `pad` by @jessegrabowski in #748
- Implement nlinalg Ops in PyTorch by @twaclaw in #920
- Added rewrite for matrix inv(inv(x)) -> x by @tanish1729 in #893
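The new `einsum` follows NumPy's Einstein-summation semantics. A minimal sketch of what such a call computes, shown with NumPy for illustration (the PyTensor function is assumed to mirror `np.einsum`'s subscript-string convention):

```python
import numpy as np

# Batched matrix multiply expressed as an einsum:
# contract over shared index k, keep batch index b.
a = np.arange(2 * 3 * 4).reshape(2, 3, 4)
b = np.arange(2 * 4 * 5).reshape(2, 4, 5)

out = np.einsum("bik,bkj->bij", a, b)

# Equivalent to an explicit batched matmul
assert out.shape == (2, 3, 5)
assert np.array_equal(out, np.matmul(a, b))
```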
Documentation 📖
- Added new tutorial on PRNGs with RandomVariables by @HangenYuu in #928
- Fixed dead wiki links by @HangenYuu in #950
- Removed emphasis on dtypes in Introduction by @Krupakar-Reddy-S in #968
Maintenance 🔧
- Add `OpFromGraph` wrapper around `alloc_diag` by @jessegrabowski in #915
- Remove more unused config options by @Armavica in #948
- Add building of pyodide universal wheels by @twiecki in #918
- Unpin scipy upper version by @ferrine in #972
- Speedup CAReduce C-implementation with loop reordering by @ricardoV94 in #971
New Contributors
- @Ch0ronomato made their first contribution in #941
- @abhishekshah5486 made their first contribution in #964
- @Krupakar-Reddy-S made their first contribution in #968
Full Changelog: rel-2.25.2...rel-2.25.3
rel-2.25.2
What's Changed
New Features 🎉
- Implemented Sort/Argsort Ops in PyTorch by @twaclaw in #897
- Implemented Repeat and Unique Ops in PyTorch by @twaclaw in #890
Full Changelog: rel-2.25.1...rel-2.25.2
rel-2.25.1
rel-2.25.0
What's Changed
Major Changes 🛠
- Replace str "output" by a dummy Op in the clients of the FunctionGraph by @ricardoV94 in #790
Bugfixes 🐛
- Increase precision of betainc C implementation by @ricardoV94 in #908
Maintenance 🔧
- Remove unused config options by @Armavica in #806
- Improve string representation of Assert Ops by @Dhruvanshu-Joshi in #891
Full Changelog: rel-2.24.2...rel-2.25.0
rel-2.24.2
What's Changed
New Features 🎉
- Add betainc C implementation by @arthus701 in #798
- Support more cases of advanced indexing in Numba by @ricardoV94 in #818
- Add `nan_to_num` helper by @Dhruvanshu-Joshi in #796
- Implemented Eye Op in PyTorch by @twaclaw in #877
- Add helper to build hessian vector product by @ricardoV94 in #858
- Vectorize `make_vector` by @ricardoV94 in #889
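The `nan_to_num` helper presumably mirrors `numpy.nan_to_num`, replacing NaNs and clipping infinities to finite values. A quick NumPy illustration of those semantics (the PyTensor helper's exact keyword names are an assumption):

```python
import numpy as np

x = np.array([np.nan, np.inf, -np.inf, 2.0])

# Replace NaN with 0 and map infinities to chosen finite values,
# following numpy.nan_to_num's keyword semantics.
y = np.nan_to_num(x, nan=0.0, posinf=1e6, neginf=-1e6)

assert y.tolist() == [0.0, 1e6, -1e6, 2.0]
```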
Bugfixes 🐛
- Fix bug in vectorize of random variables with empty size by @ricardoV94 in #886
Documentation 📖
- Run doctest and fix old examples by @ricardoV94 in #865
Maintenance 🔧
- Fix JAX implementation of Argmax by @HangenYuu in #809
- Remove useless SpecifyShape by @ricardoV94 in #885
- Keep stack trace in random_make_inplace by @ricardoV94 in #735
New Contributors
- @arthus701 made their first contribution in #798
- @twaclaw made their first contribution in #877
Full Changelog: rel-2.24.1...rel-2.24.2
rel-2.24.1
✅ Now available on conda-forge after major outage conda-forge/status#181 ✅
What's Changed
New Features 🎉
- Pytorch support for Join and Careduce Ops by @HarshvirSandhu in #869
- Add docs on implementing Pytorch Ops (and CumOp) by @HarshvirSandhu in #837
Bugfixes 🐛
- Fix too restrictive type assert by @ricardoV94 in #880
Full Changelog: rel-2.24.0...rel-2.24.1
rel-2.24.0
What's Changed
Major Changes 🛠
- Add initial support for PyTorch backend by @HarshvirSandhu in #764
- Break `MaxAndArgmax` Op into separate `TensorMax` Op and `Argmax` Op by @Dhruvanshu-Joshi in #731
New Features 🎉
- Implement basic Alloc Ops in PyTorch by @ricardoV94 in #836
- Do not use Numba objmode for supported advanced indexing operations by @ricardoV94 in #805
- Improve static output shapes of Reshape and AdvancedSubtensor1 by @ricardoV94 in #834
- Add more specialized static output shape to Eye by @ricardoV94 in #841
- Update `tensor.where` to allow for case with only condition by @tanish1729 in #844
- Implement JAX dispatch for Argsort and add `stable` argument to sorting functions by @ricardoV94 in #848
- Add rewrite to merge multiple SVD Ops with different settings by @HangenYuu in #769
- Implemented JAX backend for Eigvalsh by @HangenYuu in #867
- PyTorch Softmax Ops by @HAKSOAT in #846
- Rewrite determinant of diagonal matrix as product of diagonal by @tanish1729 in #797
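With only a condition, `where` behaves like NumPy's one-argument `np.where`, returning the indices of true entries rather than selecting between two branches. A NumPy sketch of the two forms, used here to illustrate the expected semantics rather than PyTensor's implementation:

```python
import numpy as np

cond = np.array([[True, False], [False, True]])

# Three-argument form: elementwise select between branches.
selected = np.where(cond, 1, -1)

# One-argument form: coordinates of True entries (like np.nonzero).
rows, cols = np.where(cond)

assert selected.tolist() == [[1, -1], [-1, 1]]
assert rows.tolist() == [0, 1] and cols.tolist() == [0, 1]
```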
Bugfixes 🐛
- Fix numba implementation of cholesky not setting off-diag entries to zero by @aseyboldt in #816
- Replace RNG update in RV lift rewrites by @ricardoV94 in #870
Maintenance 🔧
- Fix E721: do not compare types, for exact checks use is / is not by @maresb in #596
- Minor: Fix dependencies by @maresb in #813
- Add jax dispatch for `KroneckerProduct` `Op` by @jessegrabowski in #822
- Fix typing in subtensor module by @michaelosthege in #823
- Upper pin scipy temporarily by @ricardoV94 in #863
- Remove conservative checks for supported Subtensors operations in JAX by @ricardoV94 in #849
- Check for square matrix in make_node to Det by @tanish1729 in #861
- Add dprint shortcut to FunctionGraph and Function by @ricardoV94 in #779
Full Changelog: rel-2.23.0...rel-2.24.0
rel-2.23.0
What's Changed
Major Changes 🛠
- Add support for random Generators in Numba backend by @ricardoV94 in #69. This PR introduces multiple (long delayed) breaking changes regarding `RandomVariable`s:
  - Number of default inputs is reduced from 3 (rng, size, dtype) to 2 (rng, size). Other than changes in the signature, any code that relied on positional indexing to split these parameters from the distribution parameters, like `dist_params = x.owner.inputs[3:]`, must be updated. A helper `RandomVariable` method `dist_params` can be used instead, like `dist_params = x.owner.op.dist_params(x.owner)`.
  - A distinct `RandomVariable` `Op` must now be created for each distinct `dtype`. Comparisons that don't care about this distinction that used to look like `x.owner.op == normal` should be replaced by `isinstance(x.owner.op, NormalRV)`.
  - Support for legacy `RandomState` variables is removed. Rng variables must now be of `RandomGenerator` type. `randint`, which only worked with `RandomState` rngs, was removed; use `integers` instead.
  - `size=None` is no longer internally converted to `size=()`. `size=()` now behaves differently than `size=None` (just like NumPy).
  - Definition of `ndims_params` and `ndim_supp` is deprecated in favor of a numpy-like gufunc signature such as `(),(p)->(p)` for `MultinomialRV`. `ndims_params` and `ndim_supp` are now derived from the signature, and are available only as properties of initialized `RandomVariable` `Op`s.
  - Explicit `expand_dims` are now introduced for parameters that are being broadcasted internally by the `RandomVariable` `Op`, like `Elemwise` and `Blockwise` do.
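The gufunc-signature change means `ndims_params` and `ndim_supp` can be read off the signature string itself. A minimal pure-Python sketch of that derivation (the helper name and parsing details are illustrative, not PyTensor's actual implementation):

```python
import re

def parse_gufunc_signature(signature):
    """Derive per-parameter core ndims and output core ndim from a
    numpy-like gufunc signature, e.g. "(),(p)->(p)".

    Illustrative sketch only; PyTensor's internal parsing may differ.
    """
    inputs_part, outputs_part = signature.split("->")
    # Each core-dimension tuple looks like "(a,b)" or "()".
    tuple_re = re.compile(r"\(([^)]*)\)")

    def core_ndims(part):
        return [
            len([d for d in group.split(",") if d.strip()])
            for group in tuple_re.findall(part)
        ]

    ndims_params = core_ndims(inputs_part)    # core ndim of each parameter
    ndim_supp = core_ndims(outputs_part)[0]   # core ndim of the (first) output
    return ndims_params, ndim_supp

# MultinomialRV: scalar n, vector p -> vector draw
ndims_params, ndim_supp = parse_gufunc_signature("(),(p)->(p)")
assert ndims_params == [0, 1]
assert ndim_supp == 1
```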
Bugfixes 🐛
- Fix gradient of `OpFromGraph` with disconnected/related outputs by @ricardoV94 in #723
- Fix bug in `slogdet` and expose it in `linalg` module by @theorashid in #807
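`slogdet` returns the sign and the log of the absolute determinant separately, which avoids the overflow/underflow that `log(det(a))` hits for large or ill-scaled matrices. A NumPy illustration of the semantics the `linalg` module function is expected to follow:

```python
import numpy as np

a = 2.0 * np.eye(3)  # determinant is 2**3 = 8

sign, logabsdet = np.linalg.slogdet(a)

# Safer than np.log(np.linalg.det(a)) when the determinant
# itself would overflow or underflow a float.
assert sign == 1.0
assert np.isclose(logabsdet, np.log(8.0))
```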
Maintenance 🔧
- Do not reject PatternNodeRewriter due to unrelated multiple clients by @ricardoV94 in #789
- Do not emit confusing warning from FusionOptimizer by default by @ricardoV94 in #794
- Harmonize Scan rewrite and tag names by @ricardoV94 in #793
- Add types to functions in printing.py by @Armavica in #804
New Contributors
- @theorashid made their first contribution in #807
Full Changelog: rel-2.22.1...rel-2.23.0
rel-2.22.1
What's Changed
Bugfixes 🐛
- Fix numba AdvancedIncSubtensor1 with broadcasted values by @ricardoV94 in #757
- Allow fill_sink rewrite to accommodate changes in broadcastability by @ricardoV94 in #785
Maintenance 🔧
- Make `ifelse` accessible from root by @Dhruvanshu-Joshi in #777
Full Changelog: rel-2.22.0...rel-2.22.1