[WIP] Equivalence tests #6
Conversation
test/equivalences.jl (outdated diff)
@test all(isapprox.(mean(gpr_post, x), mean(svgp_post, x), atol=1e-3))
@test all(isapprox.(cov(gpr_post, x), cov(svgp_post, x), atol=1e-3))
The current state of this is: testing equivalence of the exact and sparse mean and cov works (with a fairly large tolerance of 1e-3) when only optimising the sparse variational posterior. Optimising the kernel parameters at the same time does not give consistent enough behaviour; possibly we need to use a different optimiser?
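For context, the quantities the test compares look roughly like the following. `svgp_predict` is an illustrative helper (not this package's API), and it assumes a variational distribution q(u) = N(m, S) at inducing inputs z has already been optimised as described above:

```julia
# Illustrative only: a hand-rolled SVGP predictive, assuming q(u) = N(m, S)
# at inducing inputs z has already been fitted elsewhere.
using KernelFunctions, LinearAlgebra

function svgp_predict(k, z, m, S, xt)
    Kzz = kernelmatrix(k, z) + 1e-12 * I  # jitter for numerical stability
    Ktz = kernelmatrix(k, xt, z)
    Ktt = kernelmatrix(k, xt)
    A = Ktz / Kzz                         # K_tz * K_zz⁻¹
    μ = A * m                             # predictive mean
    Σ = Ktt - A * (Kzz - S) * A'          # predictive covariance
    return μ, Σ
end

# Sanity check: with m = 0 and S = K_zz the predictive collapses to the prior.
k = SqExponentialKernel()
z = collect(range(-3.0, 3.0; length=5))
μ, Σ = svgp_predict(k, z, zeros(length(z)), kernelmatrix(k, z), z)
```

The test above then compares these predictive moments against `mean(gpr_post, x)` and `cov(gpr_post, x)` element-wise with `atol=1e-3`.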
Before investigating this further, how about one step simpler: set z = x, and then we can determine analytically what we need to set q to for it to be exactly equivalent to GPR (then it should agree within a very tight tolerance, at most ~1e-8 or so in double precision).
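A minimal sketch of that z = x check, with the optimal q(u) worked out by hand rather than through the SVGP code (only the AbstractGPs.jl calls already used in this test are assumed; everything else is plain linear algebra):

```julia
using AbstractGPs, KernelFunctions, LinearAlgebra, Test

k = SqExponentialKernel()
x = collect(range(-3.0, 3.0; length=20))
noise_var = 0.1
y = sin.(x) .+ sqrt(noise_var) .* randn(length(x))

# Exact GP regression posterior (as in the existing tests).
f = GP(k)
gpr_post = posterior(f(x, noise_var), y)

# With z = x, the optimal q(u) is the exact posterior over f(x):
#   m_opt = K (K + σ²I)⁻¹ y,   S_opt = K - K (K + σ²I)⁻¹ K
K = kernelmatrix(k, x)
A = K + noise_var * I
m_opt = K * (A \ y)
S_opt = K - K * (A \ K)

# At the training inputs, the SVGP predictive with this q reduces to
# (m_opt, S_opt), so it should agree with GPR to numerical precision.
@test isapprox(m_opt, mean(gpr_post, x); atol=1e-8)
@test isapprox(S_opt, cov(gpr_post, x); atol=1e-8)
```

The actual test would then set q = N(m_opt, S_opt) in the SVGP with z = x and compare its posterior against `gpr_post` at the tight tolerance; the sketch above only verifies the closed-form identity that makes that comparison valid.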
Merging into #9 to put everything in one PR
Adds tests which check the equivalence of certain models.
For example, sparse models with inducing inputs == training inputs should recover the exact GP posterior.