Feature/gpu priors #211
Conversation
Mostly looks good -- a few things in the code I thought might be cleaner, and a few random thoughts/questions.
Two things that do need to be fixed:
- The correlation length priors appear to all enforce the same distribution (or I missed something) in GaussianProcessGPU.py.
- Clarify what is going on with the distributions in MeanPriors in the C++ implementation (and note that it currently only implements weak priors). In particular, I'd get rid of the other distributions and just make it clear with some comments that you need to make a bunch of weak prior objects for sampling purposes.
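To illustrate the first point, a minimal sketch of what per-parameter correlation length priors could look like -- each input dimension gets its own prior object rather than all dimensions sharing one distribution. The class name LogNormalPrior and its interface here are illustrative, not the library's actual API:

```python
import numpy as np

class LogNormalPrior:
    """Toy log-normal prior on a positive hyperparameter (illustrative)."""
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def logp(self, x):
        # log density of LogNormal(mean, std) at x > 0
        return (-0.5 * ((np.log(x) - self.mean) / self.std) ** 2
                - np.log(x) - np.log(self.std) - 0.5 * np.log(2.0 * np.pi))

    def sample(self, rng):
        return np.exp(rng.normal(self.mean, self.std))

rng = np.random.default_rng(0)

# Distinct prior objects per input dimension, instead of one shared
# distribution reused for every correlation length:
corr_priors = [LogNormalPrior(0.0, 1.0), LogNormalPrior(0.5, 2.0)]
start = [p.sample(rng) for p in corr_priors]
logpost_prior_term = sum(p.logp(x) for p, x in zip(corr_priors, start))
```

The point is only that the list holds independently configured objects, so the prior contribution to the log posterior can differ per dimension.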
…ulators for n_params, in common with CPU implementation.
…for correlation length parameters
… to call C++ create_gppriors function
…or function which is a duplicate of set_nugget
…rams will get weak priors
…allow (for future) different prior dists to be set for different meanfunc parameters (even though they're not currently used in the logpost calculation)
This refactors the C++/GPU implementation to hold hyperparameters in a GPParams object, and introduces priors for hyperparameters following the same logic as the Python/CPU implementation.
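In Python terms, the shape of that refactor might look roughly like the sketch below: a params container bundling hyperparameter values with one prior object per parameter, with weak (flat) priors as the default. GPParamsSketch and WeakPrior are hypothetical names for illustration, not the actual classes in either implementation:

```python
import numpy as np

class WeakPrior:
    """Improper flat prior: contributes nothing to the log posterior."""
    def logp(self, x):
        return 0.0

    def sample(self, rng):
        # a flat prior cannot be sampled properly, so draw a rough
        # default starting value instead
        return float(rng.normal(0.0, 1.0))

class GPParamsSketch:
    """Container bundling hyperparameter values with their priors."""
    def __init__(self, n_corr):
        self.corr = np.ones(n_corr)   # correlation lengths
        self.cov = 1.0                # covariance scale
        self.nugget = 0.0
        # one prior per hyperparameter (corr lengths + cov + nugget),
        # all weak by default; individual entries can be swapped for
        # informative priors
        self.priors = [WeakPrior() for _ in range(n_corr + 2)]
```

Keeping a full list of (possibly weak) prior objects is what makes prior-based sampling of starting values uniform to implement, per the review comment about needing weak prior objects for sampling purposes.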
fitting.hpp will use the GPPriors::sample method to obtain starting values for the fit, rather than uniform random values.
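The idea of drawing optimizer starting values from the priors, rather than uniformly at random, can be sketched as follows (NormalPrior and sample_starting_values are illustrative stand-ins for the C++ GPPriors::sample machinery):

```python
import numpy as np

class NormalPrior:
    """Toy prior with a sample() method (GPPriors::sample analogue)."""
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def sample(self, rng):
        return rng.normal(self.mean, self.std)

def sample_starting_values(priors, rng):
    """Draw one starting value per hyperparameter from its prior,
    replacing uniform random draws over a fixed box."""
    return np.array([p.sample(rng) for p in priors])

rng = np.random.default_rng(42)
priors = [NormalPrior(0.0, 1.0), NormalPrior(-1.0, 0.5)]
theta0 = sample_starting_values(priors, rng)
```

Starting from the priors concentrates initial guesses where the posterior is expected to have mass, which tends to help the optimizer compared with arbitrary uniform draws.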