Implement additional kernels #6
Code could be extended to use kernels other than squared exponential.

Comments
Made some significant progress on this front (in the …). For the moment, you can use other kernels by importing the desired kernel functions from ….

More broadly, we need to decide on a uniform way of handling kernel functions, as these are likely to be used in multiple parts of the implementation. One way to do this would be to pair up the main kernel function and its derivative into a single object, though there may be other things to consider.

One final performance note: the current implementation, where the kernel derivative is computed separately from the log-likelihood gradient, is inefficient in space, as it forms the entire (D x n x n) array in the kernel derivative function. Each of the D dimensions is used independently in the log-likelihood gradient computation, so the entire kernel derivative need not be stored all at once. One improvement would be to return one component of the gradient at a time and re-use the storage of the (n x n) matrix for each component. This should only be an issue on very large problems at the moment, but it is something to consider in future design decisions.
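To make the two ideas concrete, here is a minimal sketch (not the project's actual code) of pairing a kernel with its hyperparameter derivatives in one object, together with a gradient loop that forms only one (n x n) derivative matrix at a time instead of the full (D x n x n) array. All names (`SquaredExponential`, `kernel_deriv_component`, `loglike_gradient`) and the log-scaled length-scale parameterization are assumptions for illustration:

```python
import numpy as np

class SquaredExponential:
    """Hypothetical pairing of a kernel and its gradient in one object,
    so everything that needs the derivative gets it from the same place
    that defines the kernel itself."""

    def sq_dist(self, x1, x2, params):
        # Weighted squared distance; exp(params[d]) scales dimension d.
        scaled = (x1[:, None, :] - x2[None, :, :]) * np.sqrt(np.exp(params))
        return np.sum(scaled**2, axis=-1)

    def kernel_f(self, x1, x2, params):
        return np.exp(-0.5 * self.sq_dist(x1, x2, params))

    def kernel_deriv_component(self, x1, x2, params, d):
        """Derivative of K with respect to params[d], returned as a
        single (n, n) matrix rather than the full (D, n, n) array."""
        K = self.kernel_f(x1, x2, params)
        diff2 = (x1[:, None, d] - x2[None, :, d])**2 * np.exp(params[d])
        return -0.5 * diff2 * K


def loglike_gradient(kernel, x, params, Kinv_y, Kinv):
    """Accumulate dL/dtheta_d one component at a time; only one (n, n)
    derivative matrix is alive at any point."""
    D = len(params)
    grad = np.empty(D)
    for d in range(D):
        dK = kernel.kernel_deriv_component(x, x, params, d)
        # Standard GP marginal-likelihood gradient term.
        grad[d] = 0.5 * (Kinv_y @ dK @ Kinv_y - np.trace(Kinv @ dK))
    return grad
```

As written, the loop recomputes K once per component; caching K inside the kernel object would recover that time while keeping peak memory at O(n^2) rather than O(D n^2), which is the space concern raised above.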
Full kernel functionality is implemented. Any stationary kernel (i.e. one that depends only on a distance metric and a radial function of that distance) can now be implemented easily by subclassing the base Kernel class. The user only needs to define a radial function, plus its first and second derivatives if derivatives are desired. Different distance metrics can be specified similarly, by subclassing and modifying the distance function and the respective derivatives of the distance with respect to the hyperparameters, again only if derivatives are needed.

The only thing still missing is a nice interface for selecting the kernel in the main GP code. The interface to the GP class has gotten a bit clunky, and I plan to integrate this when re-working the interface code in the init method. However, kernels can be selected manually by modifying the ….
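As a rough illustration of the subclassing pattern described here (the real base class's method names are not shown in this thread, so `Kernel`, `calc_r`, `radial`, and `radial_deriv` below are hypothetical), a Matern 5/2 kernel would only need to supply the radial function and its derivative:

```python
import numpy as np

class Kernel:
    """Minimal stand-in for the base class described above: it handles
    the distance computation, and subclasses supply only the radial
    function (and its derivatives, when gradients are needed)."""

    def calc_r(self, x1, x2, params):
        # Euclidean distance with per-dimension scalings exp(params).
        scaled = (x1[:, None, :] - x2[None, :, :]) * np.sqrt(np.exp(params))
        return np.sqrt(np.sum(scaled**2, axis=-1))

    def kernel_f(self, x1, x2, params):
        return self.radial(self.calc_r(x1, x2, params))

    def radial(self, r):
        raise NotImplementedError

    def radial_deriv(self, r):  # dK/dr, only needed for gradients
        raise NotImplementedError


class Matern52(Kernel):
    """Matern 5/2: only the radial function and its first derivative
    are defined; distances come from the base class."""

    def radial(self, r):
        s = np.sqrt(5.0) * r
        return (1.0 + s + s**2 / 3.0) * np.exp(-s)

    def radial_deriv(self, r):
        s = np.sqrt(5.0) * r
        return -np.sqrt(5.0) / 3.0 * s * (1.0 + s) * np.exp(-s)
```

A `Matern52()` instance could then be dropped in wherever the squared exponential is currently hard-coded, pending the interface re-work mentioned above.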
The GP interface now allows selection of a kernel as of PR #81, closing.