[WIP] Embedded Laplace Approximation #3097
base: develop
Conversation
Jenkins Console Log. Machine information: No LSB modules are available. Distributor ID: Ubuntu. Description: Ubuntu 20.04.3 LTS. Release: 20.04. Codename: focal. CPU: G++: Clang:
```cpp
fvar<fvar<var>> target_ffvar = 0;
VectorXd v(theta_size);
VectorXd w(theta_size);
for (Eigen::Index i = 0; i < hessian_block_size; ++i) {
```
Note for myself: I think for a large enough Hessian block size we could run this loop using a TBB `parallel_for` loop.
Summary
Code for the embedded laplace approximation. The tests are all passing, but there are a few things still needed.
The files for laplace have been added to the `mix` folder since they use higher-order autodiff. The current signature for the generalized laplace looks like the following in C++:
which will translate to Stan like the following:
Note that the first tuple used for the likelihood arguments must be data.
Instead of using a tuple for the first functor's inputs and variadic arguments for the covariance functor's arguments, I would rather have them both be tuples, like the following. I think this is nice because it makes the tolerance parameters always sit at the end, and both functors have the same input scheme for their arguments. Does anyone have thoughts on this?
Other additions related to this PR
- A `filter_map` function that conditionally applies a lambda `f` to each element of a tuple given a `type_trait`, i.e. the following code would print "fp detected" twice and increment the `double` elements of the tuple by 1.
- The test_ad suite now has a compile-time option for running the tests with only prim and reverse mode, via a new boolean template parameter to `expect_ad`. This is needed to use laplace with the test framework, as the laplace impl here cannot itself be differentiated at higher order (since it uses higher-order autodiff internally).

Tests
Since the tests all seem closely related, I kept them in their own folder. Is that alright, or should I distribute them across the test folders as usual? While this PR is a WIP I'm going to leave them in the same folder, and if we don't want that we can move them before we merge.
Side Effects
Release notes
Checklist
Copyright holder: Simons Foundation
The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder agrees to license the submitted work under the following licenses:
- Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
- Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
- the basic tests are passing
  - `./runTests.py test/unit`
  - `make test-headers`
  - `make test-math-dependencies`
  - `make doxygen`
  - `make cpplint`
- the code is written in idiomatic C++ and changes are documented in the doxygen
- the new changes are tested