From f6a96c4667dd06cdbb07996cbd7cec1b08b856e1 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]"
Date: Thu, 4 Jul 2024 20:27:59 +0000
Subject: [PATCH] Added navbar and removed insert_navbar.sh

---
 dev/api/index.html | 458 +++++++++++++++++-
 dev/api/laplace/index.html | 458 +++++++++++++++++-
 dev/api/sparsevariational/index.html | 458 +++++++++++++++++-
 dev/examples/a-regression/index.html | 458 +++++++++++++++++-
 dev/examples/b-classification/index.html | 458 +++++++++++++++++-
 dev/examples/c-comparisons/index.html | 458 +++++++++++++++++-
 dev/index.html | 458 +++++++++++++++++-
 dev/search/index.html | 458 +++++++++++++++++-
 dev/userguide/index.html | 458 +++++++++++++++++-
 index.html | 1 +
 previews/PR108/api/index.html | 458 +++++++++++++++++-
 previews/PR108/api/laplace/index.html | 458 +++++++++++++++++-
 .../PR108/api/sparsevariational/index.html | 458 +++++++++++++++++-
 .../PR108/examples/a-regression/index.html | 458 +++++++++++++++++-
 .../examples/b-classification/index.html | 458 +++++++++++++++++-
 .../PR108/examples/c-comparisons/index.html | 458 +++++++++++++++++-
 previews/PR108/index.html | 458 +++++++++++++++++-
 previews/PR108/search/index.html | 458 +++++++++++++++++-
 previews/PR108/userguide/index.html | 458 +++++++++++++++++-
 previews/PR143/api/index.html | 458 +++++++++++++++++-
 previews/PR143/api/laplace/index.html | 458 +++++++++++++++++-
 .../PR143/api/sparsevariational/index.html | 458 +++++++++++++++++-
 .../PR143/examples/a-regression/index.html | 458 +++++++++++++++++-
 .../examples/b-classification/index.html | 458 +++++++++++++++++-
 .../PR143/examples/c-comparisons/index.html | 458 +++++++++++++++++-
 previews/PR143/index.html | 458 +++++++++++++++++-
 previews/PR143/search/index.html | 458 +++++++++++++++++-
 previews/PR143/userguide/index.html | 458 +++++++++++++++++-
 previews/PR145/api/index.html | 458 +++++++++++++++++-
 previews/PR145/api/laplace/index.html | 458 +++++++++++++++++-
 .../PR145/api/sparsevariational/index.html | 458 +++++++++++++++++-
 .../PR145/examples/a-regression/index.html | 458 +++++++++++++++++-
 .../examples/b-classification/index.html | 458 +++++++++++++++++-
 .../PR145/examples/c-comparisons/index.html | 458 +++++++++++++++++-
 previews/PR145/index.html | 458 +++++++++++++++++-
 previews/PR145/search/index.html | 458 +++++++++++++++++-
 previews/PR145/userguide/index.html | 458 +++++++++++++++++-
 v0.2.8/api/index.html | 458 +++++++++++++++++-
 v0.2.8/examples/a-regression/index.html | 458 +++++++++++++++++-
 v0.2.8/examples/b-classification/index.html | 458 +++++++++++++++++-
 v0.2.8/examples/c-comparisons/index.html | 458 +++++++++++++++++-
 v0.2.8/index.html | 458 +++++++++++++++++-
 v0.2.8/search/index.html | 458 +++++++++++++++++-
 v0.2.8/userguide/index.html | 458 +++++++++++++++++-
 v0.3.4/api/index.html | 458 +++++++++++++++++-
 v0.3.4/api/laplace/index.html | 458 +++++++++++++++++-
 v0.3.4/api/sparsevariational/index.html | 458 +++++++++++++++++-
 v0.3.4/examples/a-regression/index.html | 458 +++++++++++++++++-
 v0.3.4/examples/b-classification/index.html | 458 +++++++++++++++++-
 v0.3.4/examples/c-comparisons/index.html | 458 +++++++++++++++++-
 v0.3.4/index.html | 458 +++++++++++++++++-
 v0.3.4/search/index.html | 458 +++++++++++++++++-
 v0.3.4/userguide/index.html | 458 +++++++++++++++++-
 v0.4.3/api/index.html | 458 +++++++++++++++++-
 v0.4.3/api/laplace/index.html | 458 +++++++++++++++++-
 v0.4.3/api/sparsevariational/index.html | 458 +++++++++++++++++-
 v0.4.3/examples/a-regression/index.html | 458 +++++++++++++++++-
 v0.4.3/examples/b-classification/index.html | 458 +++++++++++++++++-
 v0.4.3/examples/c-comparisons/index.html | 458 +++++++++++++++++-
 v0.4.3/index.html | 458 +++++++++++++++++-
 v0.4.3/search/index.html | 458 +++++++++++++++++-
 v0.4.3/userguide/index.html | 458 +++++++++++++++++-
 v0.4.4/api/index.html | 458 +++++++++++++++++-
 v0.4.4/api/laplace/index.html | 458 +++++++++++++++++-
 v0.4.4/api/sparsevariational/index.html | 458 +++++++++++++++++-
 v0.4.4/examples/a-regression/index.html | 458 +++++++++++++++++-
 v0.4.4/examples/b-classification/index.html | 458 +++++++++++++++++-
 v0.4.4/examples/c-comparisons/index.html | 458 +++++++++++++++++-
 v0.4.4/index.html | 458 +++++++++++++++++-
 v0.4.4/search/index.html | 458 +++++++++++++++++-
 v0.4.4/userguide/index.html | 458 +++++++++++++++++-
 v0.4.5/api/index.html | 458 +++++++++++++++++-
 v0.4.5/api/laplace/index.html | 458 +++++++++++++++++-
 v0.4.5/api/sparsevariational/index.html | 458 +++++++++++++++++-
 v0.4.5/examples/a-regression/index.html | 458 +++++++++++++++++-
 v0.4.5/examples/b-classification/index.html | 458 +++++++++++++++++-
 v0.4.5/examples/c-comparisons/index.html | 458 +++++++++++++++++-
 v0.4.5/index.html | 458 +++++++++++++++++-
 v0.4.5/search/index.html | 458 +++++++++++++++++-
 v0.4.5/userguide/index.html | 458 +++++++++++++++++-
 80 files changed, 36104 insertions(+), 79 deletions(-)

diff --git a/dev/api/index.html b/dev/api/index.html
index ceb72cc0..1fc72253 100644
--- a/dev/api/index.html
+++ b/dev/api/index.html
@@ -1,2 +1,458 @@
-ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lml (Function)
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
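For illustration, a minimal sketch of calling approx_lml with the Laplace approximation; the data, kernel, Bernoulli likelihood, and the zero-argument LaplaceApproximation() constructor below are assumptions of the sketch, not part of this docstring.

using ApproximateGPs, Distributions

x = rand(20)
y = rand(Bool, 20)

lf = LatentGP(GP(Matern32Kernel()), BernoulliLikelihood(), 1e-8)  # latent prior + likelihood
lfx = lf(x)  # a LatentFiniteGP at the observed inputs

# Approximate log marginal likelihood under the Laplace approximation;
# a natural objective for optimising the hyperparameters of lfx.
approx_lml(LaplaceApproximation(), lfx, y)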
+ApproximateGPs API · ApproximateGPs.jl
diff --git a/dev/api/laplace/index.html b/dev/api/laplace/index.html
index 7c9e081b..4aa0dc09 100644
--- a/dev/api/laplace/index.html
+++ b/dev/api/laplace/index.html
@@ -1,2 +1,458 @@
-Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objective (Method)
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true (default): begin each Newton optimisation at the mode found by the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
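As a minimal sketch of the intended usage (the one-parameter kernel, the Bernoulli likelihood, the toy data, and the use of Optim.jl with finite-difference gradients below are illustrative assumptions):

using ApproximateGPs, Distributions, Optim

xs = rand(20)
ys = rand(Bool, 20)

# Map a parameter vector to a LatentGP prior, as build_laplace_objective requires.
function build_latent_gp(theta)
    kernel = exp(theta[1]) * Matern32Kernel()
    return LatentGP(GP(kernel), BernoulliLikelihood(), 1e-8)
end

objective = build_laplace_objective(build_latent_gp, xs, ys)

# Minimise the objective over theta; finite-difference gradients for brevity.
result = Optim.optimize(objective, [0.0], LBFGS())
theta_opt = Optim.minimizer(result)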
+Laplace Approximation · ApproximateGPs.jl
diff --git a/dev/api/sparsevariational/index.html b/dev/api/sparsevariational/index.html
index 2c0b24bd..8ca2542e 100644
--- a/dev/api/sparsevariational/index.html
+++ b/dev/api/sparsevariational/index.html
@@ -1,4 +1,460 @@
-Sparse Variational Approximation · ApproximateGPs.jl

+Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximation (Method)
SparseVariationalApproximation(
     ::Parametrization, fz::FiniteGP, q::AbstractMvNormal
 ) where {Parametrization}

Produce a SparseVariationalApproximation{Parametrization}, which packages the prior over the pseudo-points, fz, and the approximate posterior at the pseudo-points, q, together into a single object.

The Parametrization determines the precise manner in which q and fz are interpreted. Existing parametrizations include Centered and NonCentered.

source
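A minimal construction sketch; the inducing inputs, jitter, and initial q below are illustrative choices, not defaults of the constructor.

using ApproximateGPs, Distributions, LinearAlgebra

f = GP(Matern32Kernel())
z = range(-1.0, 1.0; length=10)                       # pseudo-input locations
fz = f(z, 1e-6)                                       # FiniteGP prior at the pseudo-points
q = MvNormal(zeros(10), Matrix{Float64}(I, 10, 10))   # initial approximate posterior q(u)

sva = SparseVariationalApproximation(NonCentered(), fz, q)
f_post = posterior(sva)                               # approximate posterior process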
diff --git a/dev/examples/a-regression/index.html b/dev/examples/a-regression/index.html
index 6ffd56c8..7c111308 100644
--- a/dev/examples/a-regression/index.html
+++ b/dev/examples/a-regression/index.html
@@ -1,5 +1,460 @@
-Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

+Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference on large-scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using Distributions
 using LinearAlgebra
 
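As a rough sketch of the pattern this example follows: differentiate the negative ELBO with respect to a model parameter and hand the gradient to an optimiser such as Flux's. The toy data, fixed q, and single kernel parameter below are illustrative simplifications, not the example's actual model.

using ApproximateGPs, Distributions, LinearAlgebra, Zygote

x = rand(1_000);  y = sin.(2π .* x) .+ 0.1 .* randn(1_000)   # toy data
z = range(0.0, 1.0; length=20)                               # inducing inputs
q = MvNormal(zeros(20), Matrix{Float64}(I, 20, 20))          # fixed q(u) for this sketch

function negative_elbo(log_variance)
    f = GP(exp(log_variance) * Matern32Kernel())
    sva = SparseVariationalApproximation(f(z, 1e-6), q)
    return -elbo(sva, f(x, 0.1), y)
end

grad = only(Zygote.gradient(negative_elbo, 0.0))  # gradient to feed to an optimiser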
@@ -134,3 +589,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src

This page was generated using Literate.jl.

diff --git a/dev/examples/b-classification/index.html b/dev/examples/b-classification/index.html
index c060f593..6537310f 100644
--- a/dev/examples/b-classification/index.html
+++ b/dev/examples/b-classification/index.html
@@ -1,5 +1,460 @@
-Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

+Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using ParameterHandling
 using Zygote
 using Distributions
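The ParameterHandling.jl import above supports the usual flatten/unflatten pattern for constrained parameters; a minimal sketch (the parameter names below are illustrative):

using ParameterHandling

raw_params = (
    k = (var = positive(1.0), len = positive(0.5)),  # positivity-constrained kernel parameters
)

flat, unflatten = ParameterHandling.value_flatten(raw_params)
unflatten(flat).k.var   # == 1.0, recovered from the unconstrained vector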
@@ -416,3 +871,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src

This page was generated using Literate.jl.

diff --git a/dev/examples/c-comparisons/index.html b/dev/examples/c-comparisons/index.html
index 3a8e385e..bebbe517 100644
--- a/dev/examples/c-comparisons/index.html
+++ b/dev/examples/c-comparisons/index.html
@@ -1,5 +1,460 @@
-Binary Classification with Laplace approximation · ApproximateGPs.jl

+Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using LinearAlgebra
 using Distributions
 using LogExpFunctions: logistic, softplus, invsoftplus
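The LogExpFunctions imports above suggest the shape of the model; a sketch of how they might be combined (the parameter layout and the BernoulliLikelihood(logistic) link are assumptions of the sketch):

using ApproximateGPs
using LogExpFunctions: logistic, softplus, invsoftplus

# Positivity-constrained kernel variance via a softplus reparametrization,
# and a Bernoulli likelihood with the logistic link.
function build_latent_gp(theta)
    variance = softplus(theta[1])                  # unconstrained -> positive
    kernel = variance * Matern32Kernel()
    return LatentGP(GP(kernel), BernoulliLikelihood(logistic), 1e-8)
end

theta0 = [invsoftplus(1.0)]                        # start at variance = 1.0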
@@ -622,3 +1077,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src

This page was generated using Literate.jl.

diff --git a/dev/index.html b/dev/index.html
index 918d1692..d20d1558 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,2 +1,458 @@
-Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+Home · ApproximateGPs.jl
diff --git a/dev/search/index.html b/dev/search/index.html
index 972da251..40b5af0f 100644
--- a/dev/search/index.html
+++ b/dev/search/index.html
@@ -1,2 +1,458 @@
-Search · ApproximateGPs.jl
+Search · ApproximateGPs.jl
diff --git a/dev/userguide/index.html b/dev/userguide/index.html
index 5135fd72..541aef56 100644
--- a/dev/userguide/index.html
+++ b/dev/userguide/index.html
@@ -1,5 +1,460 @@
-User Guide · ApproximateGPs.jl

+User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
 rng = MersenneTwister(1453)  # set a random seed

First, we construct a prior Gaussian process with a Matern-3/2 kernel and zero mean function, and sample some data. More exotic kernels can be constructed using KernelFunctions.jl.

f = GP(Matern32Kernel())
 
 x = rand(rng, 100)
@@ -11,3 +466,4 @@
 
 sva_posterior = posterior(approx)  # Create the approximate posterior

The Evidence Lower Bound (ELBO)

The approximate posterior constructed above will be a very poor approximation, since q was simply chosen to have zero mean and covariance I. A measure of the quality of the approximation is given by the ELBO. Optimising this term with respect to the parameters of q and the inducing input locations z will improve the approximation.

elbo(SparseVariationalApproximation(fz, q), fx, y)

A detailed example of how to carry out such optimisation is given in Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl. For an example of non-conjugate inference, see Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS.

Available Parametrizations

Two parametrizations of q(u) are presently available: Centered and NonCentered. The Centered parametrization expresses q(u) directly in terms of its mean and covariance. The NonCentered parametrization instead parametrizes the mean and covariance of ε := cholesky(cov(u)).U' \ (u - mean(u)). These parametrizations are also known respectively as "Unwhitened" and "Whitened".
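To make the NonCentered transformation concrete, here is a small numeric check (a sketch; the mean and covariance values are arbitrary) that ε is approximately standard normal:

using Distributions, LinearAlgebra, Statistics

Σ = [1.0 0.5; 0.5 2.0]
μ = [0.3, -0.7]
u = rand(MvNormal(μ, Σ), 100_000)      # columns are samples of u
ε = cholesky(Σ).U' \ (u .- μ)          # the NonCentered transformation
mean(ε; dims=2)                        # ≈ [0, 0]
cov(ε; dims=2)                         # ≈ I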

The choice of parametrization can have a substantial impact on the time it takes for ELBO optimization to converge, and which parametrization is better in a particular situation is not generally obvious. That being said, the NonCentered parametrization often converges in fewer iterations, so it is the default – it is what is used in all of the examples above.

If you require a particular parametrization, simply use the 3-argument version of the approximation constructor:

SparseVariationalApproximation(Centered(), fz, q)
 SparseVariationalApproximation(NonCentered(), fz, q)

For a general discussion around these two parametrizations, see e.g. [Gorinova]. For a GP-specific discussion, see e.g. section 3.4 of [Paciorek].

diff --git a/index.html b/index.html
index 6a5afc30..3ac25969 100644
--- a/index.html
+++ b/index.html
@@ -1,2 +1,3 @@

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ApproximateGPs API · ApproximateGPs.jl + + + + + +

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ diff --git a/previews/PR108/api/laplace/index.html b/previews/PR108/api/laplace/index.html index b7138600..d803fbed 100644 --- a/previews/PR108/api/laplace/index.html +++ b/previews/PR108/api/laplace/index.html @@ -1,2 +1,458 @@ -Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+Laplace Approximation · ApproximateGPs.jl + + + + + +

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+ diff --git a/previews/PR108/api/sparsevariational/index.html b/previews/PR108/api/sparsevariational/index.html index e5dc5055..3bf17752 100644 --- a/previews/PR108/api/sparsevariational/index.html +++ b/previews/PR108/api/sparsevariational/index.html @@ -1,4 +1,460 @@ -Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
+Sparse Variational Approximation · ApproximateGPs.jl
+
+
+
+
+
+

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
     ::Parametrization, fz::FiniteGP, q::AbstractMvNormal
 ) where {Parametrization}

Produce a SparseVariationalApproximation{Parametrization}, which packages the prior over the pseudo-points, fz, and the approximate posterior at the pseudo-points, q, together into a single object.

The Parametrization determines the precise manner in which q and fz are interpreted. Existing parametrizations include Centered and NonCentered.

source
+ diff --git a/previews/PR108/examples/a-regression/index.html b/previews/PR108/examples/a-regression/index.html index b2246b4f..6b75eaa5 100644 --- a/previews/PR108/examples/a-regression/index.html +++ b/previews/PR108/examples/a-regression/index.html @@ -1,5 +1,460 @@ -Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl
+
+
+
+
+
+

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using Distributions
 using LinearAlgebra
 
@@ -134,3 +589,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/e8FS0/src

This page was generated using Literate.jl.

+ diff --git a/previews/PR108/examples/b-classification/index.html b/previews/PR108/examples/b-classification/index.html index 2f5cedce..07ae3761 100644 --- a/previews/PR108/examples/b-classification/index.html +++ b/previews/PR108/examples/b-classification/index.html @@ -1,5 +1,460 @@ -Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl
+
+
+
+
+
+

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using ParameterHandling
 using Zygote
 using Distributions
@@ -416,3 +871,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/e8FS0/src

This page was generated using Literate.jl.

+ diff --git a/previews/PR108/examples/c-comparisons/index.html b/previews/PR108/examples/c-comparisons/index.html index ed18f8f0..730a0b90 100644 --- a/previews/PR108/examples/c-comparisons/index.html +++ b/previews/PR108/examples/c-comparisons/index.html @@ -1,5 +1,460 @@ -Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Binary Classification with Laplace approximation · ApproximateGPs.jl
+
+
+
+
+
+

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using LinearAlgebra
 using Distributions
 using LogExpFunctions: logistic, softplus, invsoftplus
@@ -622,3 +1077,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/e8FS0/src

This page was generated using Literate.jl.

+ diff --git a/previews/PR108/index.html b/previews/PR108/index.html index 988b3989..1ef703e6 100644 --- a/previews/PR108/index.html +++ b/previews/PR108/index.html @@ -1,2 +1,458 @@ -Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+Home · ApproximateGPs.jl + + + + + +

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+ diff --git a/previews/PR108/search/index.html b/previews/PR108/search/index.html index 2e34df1c..8ddc46d3 100644 --- a/previews/PR108/search/index.html +++ b/previews/PR108/search/index.html @@ -1,2 +1,458 @@ -Search · ApproximateGPs.jl +Search · ApproximateGPs.jl + + + + + + + diff --git a/previews/PR108/userguide/index.html b/previews/PR108/userguide/index.html index 5e969e48..0c79f016 100644 --- a/previews/PR108/userguide/index.html +++ b/previews/PR108/userguide/index.html @@ -1,5 +1,460 @@ -User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
+User Guide · ApproximateGPs.jl
+
+
+
+
+
+

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
 rnd = MersenneTwister(1453)  # set a random seed

First, we construct a prior Gaussian process with a Matern-3/2 kernel and zero mean function, and sample some data. More exotic kernels can be constructed using KernelFunctions.jl.

f = GP(Matern32Kernel())
 
 x = rand(rng, 100)
@@ -11,3 +466,4 @@
 
 sva_posterior = posterior(approx)  # Create the approximate posterior

The Evidence Lower Bound (ELBO)

The approximate posterior constructed above will be a very poor approximation, since q was simply chosen to have zero mean and covariance I. A measure of the quality of the approximation is given by the ELBO. Optimising this term with respect to the parameters of q and the inducing input locations z will improve the approximation.

elbo(SparseVariationalApproximation(fz, q), fx, y)

A detailed example of how to carry out such optimisation is given in Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl. For an example of non-conjugate inference, see Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS.

Available Parametrizations

Two parametrizations of q(u) are presently available: Centered and NonCentered. The Centered parametrization expresses q(u) directly in terms of its mean and covariance. The NonCentered parametrization instead parametrizes the mean and covariance of ε := cholesky(cov(u)).U' \ (u - mean(u)). These parametrizations are also known respectively as "Unwhitened" and "Whitened".

The choice of parametrization can have a substantial impact on the time it takes for ELBO optimization to converge, and which parametrization is better in a particular situation is not generally obvious. That being said, the NonCentered parametrization often converges in fewer iterations, so it is the default – it is what is used in all of the examples above.

If you require a particular parametrization, simply use the 3-argument version of the approximation constructor:

SparseVariationalApproximation(Centered(), fz, q)
 SparseVariationalApproximation(NonCentered(), fz, q)

For a general discussion around these two parametrizations, see e.g. [Gorinova]. For a GP-specific discussion, see e.g. section 3.4 of [Paciorek].

+ diff --git a/previews/PR143/api/index.html b/previews/PR143/api/index.html index d36d4140..fd0a2e9b 100644 --- a/previews/PR143/api/index.html +++ b/previews/PR143/api/index.html @@ -1,2 +1,458 @@ -ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ApproximateGPs API · ApproximateGPs.jl + + + + + +

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ diff --git a/previews/PR143/api/laplace/index.html b/previews/PR143/api/laplace/index.html index 5d8b837d..10cb8a28 100644 --- a/previews/PR143/api/laplace/index.html +++ b/previews/PR143/api/laplace/index.html @@ -1,2 +1,458 @@ -Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+Laplace Approximation · ApproximateGPs.jl + + + + + +

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+ diff --git a/previews/PR143/api/sparsevariational/index.html b/previews/PR143/api/sparsevariational/index.html index 89c1d6d7..2a11dc33 100644 --- a/previews/PR143/api/sparsevariational/index.html +++ b/previews/PR143/api/sparsevariational/index.html @@ -1,4 +1,460 @@ -Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
+Sparse Variational Approximation · ApproximateGPs.jl
+
+
+
+
+
+

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
     ::Parametrization, fz::FiniteGP, q::AbstractMvNormal
 ) where {Parametrization}

Produce a SparseVariationalApproximation{Parametrization}, which packages the prior over the pseudo-points, fz, and the approximate posterior at the pseudo-points, q, together into a single object.

The Parametrization determines the precise manner in which q and fz are interpreted. Existing parametrizations include Centered and NonCentered.

source
+ diff --git a/previews/PR143/examples/a-regression/index.html b/previews/PR143/examples/a-regression/index.html index 185f8816..3bb5546e 100644 --- a/previews/PR143/examples/a-regression/index.html +++ b/previews/PR143/examples/a-regression/index.html @@ -1,5 +1,460 @@ -Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl
+
+
+
+
+
+

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using Distributions
 using LinearAlgebra
 
@@ -134,3 +589,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/e8FS0/src

This page was generated using Literate.jl.

+ diff --git a/previews/PR143/examples/b-classification/index.html b/previews/PR143/examples/b-classification/index.html index f4d4d0c7..d68c3a5d 100644 --- a/previews/PR143/examples/b-classification/index.html +++ b/previews/PR143/examples/b-classification/index.html @@ -1,5 +1,460 @@ -Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl
+
+
+
+
+
+

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using ParameterHandling
 using Zygote
 using Distributions
@@ -416,3 +871,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/e8FS0/src

This page was generated using Literate.jl.

+ diff --git a/previews/PR143/examples/c-comparisons/index.html b/previews/PR143/examples/c-comparisons/index.html index 0b5943ba..b4f3903a 100644 --- a/previews/PR143/examples/c-comparisons/index.html +++ b/previews/PR143/examples/c-comparisons/index.html @@ -1,5 +1,460 @@ -Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Binary Classification with Laplace approximation · ApproximateGPs.jl
+
+
+
+
+
+

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using LinearAlgebra
 using Distributions
 using LogExpFunctions: logistic, softplus, invsoftplus
@@ -622,3 +1077,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/e8FS0/src

This page was generated using Literate.jl.

+ diff --git a/previews/PR143/index.html b/previews/PR143/index.html index 2eecdf4b..7ceacc66 100644 --- a/previews/PR143/index.html +++ b/previews/PR143/index.html @@ -1,2 +1,458 @@ -Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+Home · ApproximateGPs.jl + + + + + +

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+ diff --git a/previews/PR143/search/index.html b/previews/PR143/search/index.html index c5c4fd20..0d86bc1a 100644 --- a/previews/PR143/search/index.html +++ b/previews/PR143/search/index.html @@ -1,2 +1,458 @@ -Search · ApproximateGPs.jl +Search · ApproximateGPs.jl + + + + + + + diff --git a/previews/PR143/userguide/index.html b/previews/PR143/userguide/index.html index 33505296..029da838 100644 --- a/previews/PR143/userguide/index.html +++ b/previews/PR143/userguide/index.html @@ -1,5 +1,460 @@ -User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
+User Guide · ApproximateGPs.jl
+
+
+
+
+
+

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
 rnd = MersenneTwister(1453)  # set a random seed

First, we construct a prior Gaussian process with a Matern-3/2 kernel and zero mean function, and sample some data. More exotic kernels can be constructed using KernelFunctions.jl.

f = GP(Matern32Kernel())
 
 x = rand(rng, 100)
@@ -11,3 +466,4 @@
 
 sva_posterior = posterior(approx)  # Create the approximate posterior

The Evidence Lower Bound (ELBO)

The approximate posterior constructed above will be a very poor approximation, since q was simply chosen to have zero mean and covariance I. A measure of the quality of the approximation is given by the ELBO. Optimising this term with respect to the parameters of q and the inducing input locations z will improve the approximation.

elbo(SparseVariationalApproximation(fz, q), fx, y)

A detailed example of how to carry out such optimisation is given in Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl. For an example of non-conjugate inference, see Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS.

Available Parametrizations

Two parametrizations of q(u) are presently available: Centered and NonCentered. The Centered parametrization expresses q(u) directly in terms of its mean and covariance. The NonCentered parametrization instead parametrizes the mean and covariance of ε := cholesky(cov(u)).U' \ (u - mean(u)). These parametrizations are also known respectively as "Unwhitened" and "Whitened".

The choice of parametrization can have a substantial impact on the time it takes for ELBO optimization to converge, and which parametrization is better in a particular situation is not generally obvious. That being said, the NonCentered parametrization often converges in fewer iterations, so it is the default – it is what is used in all of the examples above.

If you require a particular parametrization, simply use the 3-argument version of the approximation constructor:

SparseVariationalApproximation(Centered(), fz, q)
 SparseVariationalApproximation(NonCentered(), fz, q)

For a general discussion around these two parametrizations, see e.g. [Gorinova]. For a GP-specific discussion, see e.g. section 3.4 of [Paciorek].

+ diff --git a/previews/PR145/api/index.html b/previews/PR145/api/index.html index cf3d17a0..bac1121a 100644 --- a/previews/PR145/api/index.html +++ b/previews/PR145/api/index.html @@ -1,2 +1,458 @@ -ApproximateGPs API · ApproximateGPs.jl +ApproximateGPs API · ApproximateGPs.jl + + + + + + + diff --git a/previews/PR145/api/laplace/index.html b/previews/PR145/api/laplace/index.html index 29972b09..751051fe 100644 --- a/previews/PR145/api/laplace/index.html +++ b/previews/PR145/api/laplace/index.html @@ -1,2 +1,458 @@ -Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+Laplace Approximation · ApproximateGPs.jl + + + + + +

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+ diff --git a/previews/PR145/api/sparsevariational/index.html b/previews/PR145/api/sparsevariational/index.html index 1282f5aa..77a4a947 100644 --- a/previews/PR145/api/sparsevariational/index.html +++ b/previews/PR145/api/sparsevariational/index.html @@ -1,4 +1,460 @@ -Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
+Sparse Variational Approximation · ApproximateGPs.jl
+
+
+
+
+
+

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
     ::Parametrization, fz::FiniteGP, q::AbstractMvNormal
 ) where {Parametrization}

Produce a SparseVariationalApproximation{Parametrization}, which packages the prior over the pseudo-points, fz, and the approximate posterior at the pseudo-points, q, together into a single object.

The Parametrization determines the precise manner in which q and fz are interpreted. Existing parametrizations include Centered and NonCentered.

source
+ diff --git a/previews/PR145/examples/a-regression/index.html b/previews/PR145/examples/a-regression/index.html index 430131a4..e5aeeae3 100644 --- a/previews/PR145/examples/a-regression/index.html +++ b/previews/PR145/examples/a-regression/index.html @@ -1,5 +1,460 @@ -Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl
+
+
+
+
+
+

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using Distributions
 using LinearAlgebra
 
@@ -134,3 +589,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src

This page was generated using Literate.jl.

+ diff --git a/previews/PR145/examples/b-classification/index.html b/previews/PR145/examples/b-classification/index.html index ff9b40e9..c099c3c7 100644 --- a/previews/PR145/examples/b-classification/index.html +++ b/previews/PR145/examples/b-classification/index.html @@ -1,5 +1,460 @@ -Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl
+
+
+
+
+
+

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using ParameterHandling
 using Zygote
 using Distributions
@@ -416,3 +871,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src

This page was generated using Literate.jl.

+ diff --git a/previews/PR145/examples/c-comparisons/index.html b/previews/PR145/examples/c-comparisons/index.html index 912689fc..0b81e546 100644 --- a/previews/PR145/examples/c-comparisons/index.html +++ b/previews/PR145/examples/c-comparisons/index.html @@ -1,5 +1,460 @@ -Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Binary Classification with Laplace approximation · ApproximateGPs.jl
+
+
+
+
+
+

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using LinearAlgebra
 using Distributions
 using LogExpFunctions: logistic, softplus, invsoftplus
@@ -622,3 +1077,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src

This page was generated using Literate.jl.

+ diff --git a/previews/PR145/index.html b/previews/PR145/index.html index 0813acd0..c6e4d305 100644 --- a/previews/PR145/index.html +++ b/previews/PR145/index.html @@ -1,2 +1,458 @@ -Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+Home · ApproximateGPs.jl + + + + + +

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+ diff --git a/previews/PR145/search/index.html b/previews/PR145/search/index.html index 244ee2e1..308a08dd 100644 --- a/previews/PR145/search/index.html +++ b/previews/PR145/search/index.html @@ -1,2 +1,458 @@ -Search · ApproximateGPs.jl +Search · ApproximateGPs.jl + + + + + + + diff --git a/previews/PR145/userguide/index.html b/previews/PR145/userguide/index.html index b4ec8cda..f96d3e22 100644 --- a/previews/PR145/userguide/index.html +++ b/previews/PR145/userguide/index.html @@ -1,5 +1,460 @@ -User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
+User Guide · ApproximateGPs.jl
+
+
+
+
+
+

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
 rnd = MersenneTwister(1453)  # set a random seed

First, we construct a prior Gaussian process with a Matern-3/2 kernel and zero mean function, and sample some data. More exotic kernels can be constructed using KernelFunctions.jl.

f = GP(Matern32Kernel())
 
 x = rand(rng, 100)
@@ -11,3 +466,4 @@
 
 sva_posterior = posterior(approx)  # Create the approximate posterior

The Evidence Lower Bound (ELBO)

The approximate posterior constructed above will be a very poor approximation, since q was simply chosen to have zero mean and covariance I. A measure of the quality of the approximation is given by the ELBO. Optimising this term with respect to the parameters of q and the inducing input locations z will improve the approximation.

elbo(SparseVariationalApproximation(fz, q), fx, y)

A detailed example of how to carry out such optimisation is given in Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl. For an example of non-conjugate inference, see Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS.

Available Parametrizations

Two parametrizations of q(u) are presently available: Centered and NonCentered. The Centered parametrization expresses q(u) directly in terms of its mean and covariance. The NonCentered parametrization instead parametrizes the mean and covariance of ε := cholesky(cov(u)).U' \ (u - mean(u)). These parametrizations are also known respectively as "Unwhitened" and "Whitened".

The choice of parametrization can have a substantial impact on the time it takes for ELBO optimization to converge, and which parametrization is better in a particular situation is not generally obvious. That being said, the NonCentered parametrization often converges in fewer iterations, so it is the default – it is what is used in all of the examples above.

If you require a particular parametrization, simply use the 3-argument version of the approximation constructor:

SparseVariationalApproximation(Centered(), fz, q)
 SparseVariationalApproximation(NonCentered(), fz, q)

For a general discussion around these two parametrizations, see e.g. [Gorinova]. For a GP-specific discussion, see e.g. section 3.4 of [Paciorek].

+ diff --git a/v0.2.8/api/index.html b/v0.2.8/api/index.html index b7fb9800..cb40fe71 100644 --- a/v0.2.8/api/index.html +++ b/v0.2.8/api/index.html @@ -1,5 +1,460 @@ -API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.CenteredType
Centered()

Used in conjunction with SparseVariationalApproximation. States that the q field of SparseVariationalApproximation is to be interpreted directly as the approximate posterior over the pseudo-points.

This is also known as the "unwhitened" parametrization [1].

See also NonCentered.

[1] - https://en.wikipedia.org/wiki/Whitening_transformation

source
ApproximateGPs.NonCenteredType
NonCentered()

Used in conjunction with SparseVariationalApproximation. States that the q field of SparseVariationalApproximation is to be interpreted as the approximate posterior over cholesky(cov(u)).L \ (u - mean(u)), where u are the pseudo-points.

This is also known as the "whitened" parametrization [1].

See also Centered.

[1] - https://en.wikipedia.org/wiki/Whitening_transformation

source
ApproximateGPs.SparseVariationalApproximationMethod
SparseVariationalApproximation(fz::FiniteGP, q::AbstractMvNormal)

Packages the prior over the pseudo-points fz, and the approximate posterior at the pseudo-points, which is mean(fz) + cholesky(cov(fz)).L * ε, ε ∼ q.

Shorthand for

SparseVariationalApproximation(NonCentered(), fz, q)
source
ApproximateGPs.SparseVariationalApproximationMethod
SparseVariationalApproximation(
+API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.CenteredType
Centered()

Used in conjunction with SparseVariationalApproximation. States that the q field of SparseVariationalApproximation is to be interpreted directly as the approximate posterior over the pseudo-points.

This is also known as the "unwhitened" parametrization [1].

See also NonCentered.

[1] - https://en.wikipedia.org/wiki/Whitening_transformation

source
ApproximateGPs.NonCenteredType
NonCentered()

Used in conjunction with SparseVariationalApproximation. States that the q field of SparseVariationalApproximation is to be interpreted as the approximate posterior over cholesky(cov(u)).L \ (u - mean(u)), where u are the pseudo-points.

This is also known as the "whitened" parametrization [1].

See also Centered.

[1] - https://en.wikipedia.org/wiki/Whitening_transformation

source
ApproximateGPs.SparseVariationalApproximationMethod
SparseVariationalApproximation(fz::FiniteGP, q::AbstractMvNormal)

Packages the prior over the pseudo-points fz, and the approximate posterior at the pseudo-points, which is mean(fz) + cholesky(cov(fz)).L * ε, ε ∼ q.

Shorthand for

SparseVariationalApproximation(NonCentered(), fz, q)
source
ApproximateGPs.SparseVariationalApproximationMethod
SparseVariationalApproximation(
     ::Parametrization, fz::FiniteGP, q::AbstractMvNormal
 ) where {Parametrization}

Produce a SparseVariationalApproximation{Parametrization}, which packages the prior over the pseudo-points, fz, and the approximate posterior at the pseudo-points, q, together into a single object.

The Parametrization determines the precise manner in which q and fz are interpreted. Existing parametrizations include Centered and NonCentered.

source
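As a hedged numerical illustration of the relationship stated above; the three inducing inputs and the jitter term are assumptions of this sketch:

using ApproximateGPs, LinearAlgebra, Random

rng = MersenneTwister(0)
f = GP(Matern32Kernel())
fz = f([0.1, 0.4, 0.7], 1e-6)     # small jitter for a stable Cholesky
L = cholesky(Symmetric(cov(fz))).L

ε = randn(rng, 3)                 # a draw from q(ε) = N(0, I)
u = mean(fz) + L * ε              # the corresponding pseudo-point values
ε_back = L \ (u - mean(fz))       # recovers ε up to round-off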
AbstractGPs.elboMethod
elbo(
     sva::SparseVariationalApproximation,
@@ -14,3 +469,4 @@
     num_data=length(y),
     quadrature=DefaultQuadrature(),
 )

Compute the ELBO for a LatentGP with a possibly non-conjugate likelihood.

source
AbstractGPs.posteriorMethod
posterior(la::LaplaceApproximation, lfx::LatentFiniteGP, ys)

Construct a Gaussian approximation q(f) to the posterior p(f | y) using the Laplace approximation. Solves for a mode of the posterior using Newton's method.

source
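A hedged usage sketch of this method; the latent model and the toy binary labels are assumptions:

using ApproximateGPs

lf = LatentGP(GP(Matern32Kernel()), BernoulliLikelihood(), 1e-8)
xs = range(0, 5; length=20)
ys = rand(Bool, 20)               # hypothetical binary labels
q_f = posterior(LaplaceApproximation(), lf(xs), ys)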
AbstractGPs.posteriorMethod
posterior(sva::SparseVariationalApproximation{Centered})

Compute the approximate posterior [1] over the process f = sva.fz.f, given inducing inputs z = sva.fz.x and a variational distribution over inducing points sva.q (which represents $q(u)$ where u = f(z)). The approximate posterior at test points $x^*$ where $f^* = f(x^*)$ is then given by:

\[q(f^*) = \int p(f | u) q(u) du\]

which can be found in closed form.

[1] - Hensman, James, Alexander Matthews, and Zoubin Ghahramani. "Scalable variational Gaussian process classification." Artificial Intelligence and Statistics. PMLR, 2015.

source
AbstractGPs.posteriorMethod
posterior(sva::SparseVariationalApproximation{NonCentered})

Compute the approximate posterior [1] over the process f = sva.fz.f, given inducing inputs z = sva.fz.x and a variational distribution over inducing points sva.q (which represents $q(ε)$ where ε = cholesky(cov(fz)).L \ (f(z) - mean(f(z)))). The approximate posterior at test points $x^*$ where $f^* = f(x^*)$ is then given by:

\[q(f^*) = \int p(f | ε) q(ε) dε\]

which can be found in closed form.

[1] - Hensman, James, Alexander Matthews, and Zoubin Ghahramani. "Scalable variational Gaussian process classification." Artificial Intelligence and Statistics. PMLR, 2015.

source
ApproximateGPs.approx_lmlMethod
approx_lml(la::LaplaceApproximation, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence"), which can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
ApproximateGPs.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
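For instance, a hedged sketch of feeding this objective to Optim.jl; the one-parameter model, the toy labels, and the reliance on Optim's finite-difference gradients are assumptions of this sketch (the classification example instead supplies Zygote gradients):

using ApproximateGPs, Optim

# Hypothetical one-parameter model: θ holds the (unconstrained) kernel variance;
# a real model would constrain it, e.g. with ParameterHandling.positive.
build_latent_gp(θ) = LatentGP(GP(only(θ) * Matern32Kernel()), BernoulliLikelihood(), 1e-8)

xs = range(0, 5; length=20)
ys = rand(Bool, 20)               # hypothetical binary labels

objective = build_laplace_objective(build_latent_gp, xs, ys)
result = optimize(objective, [1.0], LBFGS())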
ApproximateGPs.expected_loglikMethod
expected_loglik(quadrature::QuadratureMethod, y::AbstractVector, q_f::AbstractVector{<:Normal}, lik)

This function computes the expected log likelihood:

\[ ∫ q(f) log p(y | f) df\]

where p(y | f) is the process likelihood. This is described by lik, which should be a callable that takes f as input and returns a Distribution over y that supports loglikelihood(lik(f), y).

q(f) is an approximation to the latent function values f given by:

\[ q(f) = ∫ p(f | u) q(u) du\]

where q(u) is the variational distribution over inducing points (see elbo). The marginal distributions of q(f) are given by q_f.

quadrature determines which method is used to calculate the expected log likelihood - see elbo for more details.

Extended help

q(f) is assumed to be an MvNormal distribution and p(y | f) is assumed to have independent marginals such that only the marginals of q(f) are required.

source
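A hedged sketch of evaluating this expectation directly; the marginals q_f, the observations, the Gaussian likelihood, and the choice of 20-node Gauss-Hermite quadrature (see elbo) are all illustrative assumptions:

using ApproximateGPs, Distributions

q_f = [Normal(0.0, 1.0), Normal(0.5, 0.8)]  # marginals of q(f)
y = [0.1, 0.7]
lik = GaussianLikelihood(0.1)               # p(y | f) = N(y; f, σ²) with σ² = 0.1
ApproximateGPs.expected_loglik(ApproximateGPs.GaussHermite(20), y, q_f, lik)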
ApproximateGPs.expected_loglikMethod
expected_loglik(::DefaultQuadrature, y::AbstractVector, q_f::AbstractVector{<:Normal}, lik)

The expected log likelihood. Defaults to a closed form solution if it exists, otherwise defaults to Gauss-Hermite quadrature.

source
ApproximateGPs.laplace_f_and_lmlMethod
laplace_f_and_lml(lfx::LatentFiniteGP, ys; newton_kwargs...)

Compute a mode of the posterior and the Laplace approximation to the log marginal likelihood.

source
ApproximateGPs.laplace_f_covMethod
laplace_f_cov(cache)

Compute the covariance of q(f) from the results of the training computation that are stored in a LaplaceCache.

source
ApproximateGPs.laplace_lmlMethod
laplace_lml(lfx::LatentFiniteGP, ys; newton_kwargs...)

Compute the Laplace approximation to the log marginal likelihood.

source
ApproximateGPs.laplace_stepsMethod
laplace_steps(lfx::LatentFiniteGP, ys; newton_kwargs...)

For demonstration purposes: returns an array of all the intermediate approximations of each Newton step.

If you are only interested in the actual posterior, use posterior(::LaplaceApproximation, ...).

TODO figure out how to get the @ref to work to point to the LaplaceApproximation-specific posterior docstring...

source
ApproximateGPs.loglik_and_derivsMethod
loglik_and_derivs(dist_y_given_f, ys, f)

dist_y_given_f must be a scalar function from a Real to a Distribution object. ys and f are vectors of observations and latent function values, respectively.

source
+ diff --git a/v0.2.8/examples/a-regression/index.html b/v0.2.8/examples/a-regression/index.html index 208853d2..57e99666 100644 --- a/v0.2.8/examples/a-regression/index.html +++ b/v0.2.8/examples/a-regression/index.html @@ -1,5 +1,460 @@ -Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.

In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.

In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.
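Before the detailed walkthrough below, here is a condensed, hedged sketch of the minibatched SVGP-with-Flux pattern this example develops; all data sizes are illustrative, and the optimiser name ADAM and the DataLoader keyword API may differ across Flux versions:

using ApproximateGPs, Distributions, Flux, LinearAlgebra, Random

rng = MersenneTwister(1)
N = 1000
x = rand(rng, N)
y = sin.(10 .* x) .+ 0.1 .* randn(rng, N)

M = 20
z = x[1:M]                        # hypothetical inducing inputs
q_μ = zeros(M)
q_L = Matrix{Float64}(I, M, M)    # unconstrained square-root factor of cov(q)

function loss(xb, yb)
    f = GP(Matern32Kernel())
    q = MvNormal(q_μ, q_L * q_L' + 1e-9I)
    -elbo(SparseVariationalApproximation(f(z), q), f(xb, 0.01), yb; num_data=N)
end

opt = Flux.ADAM(0.01)             # spelled Adam in newer Flux releases
ps = Flux.params(q_μ, q_L, z)
data = Flux.DataLoader((x, y); batchsize=100, shuffle=true)
Flux.train!(loss, ps, data, opt)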

Setup

using ApproximateGPs
 using Distributions
 using LinearAlgebra
 
@@ -115,3 +570,4 @@
   LLVM: libLLVM-11.0.1 (ORCJIT, skylake-avx512)
 Environment:
   JULIA_DEBUG = Documenter

Manifest

To reproduce this notebook's package environment, you can download the full Manifest.toml.


This page was generated using Literate.jl.

+ diff --git a/v0.2.8/examples/b-classification/index.html b/v0.2.8/examples/b-classification/index.html index ded80b0e..ded5c687 100644 --- a/v0.2.8/examples/b-classification/index.html +++ b/v0.2.8/examples/b-classification/index.html @@ -1,5 +1,460 @@ -Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.

This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.

This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.
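Before the detailed walkthrough, a hedged sketch of the non-conjugate objective involved; the toy data and sizes are assumptions, and the full example minimises this quantity with Optim's L-BFGS and Zygote gradients:

using ApproximateGPs, Distributions, LinearAlgebra, Random

rng = MersenneTwister(2)
x = rand(rng, 50)
y = rand(rng, Bool, 50)           # hypothetical binary labels

lf = LatentGP(GP(Matern32Kernel()), BernoulliLikelihood(), 1e-8)
lfx = lf(x)

M = 10
fz = lf.f(x[1:M])                 # prior at hypothetical inducing inputs
q = MvNormal(zeros(M), Matrix{Float64}(I, M, M))

# The training objective: the negative ELBO for the non-conjugate model.
nelbo = -elbo(SparseVariationalApproximation(fz, q), lfx, y)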

Setup

using ApproximateGPs
 using ParameterHandling
 using Zygote
 using Distributions
@@ -1431,3 +1886,4 @@ 

Manifest

To reproduce this notebook's package environment, you can download the full Manifest.toml.


This page was generated using Literate.jl.

+ diff --git a/v0.2.8/examples/c-comparisons/index.html b/v0.2.8/examples/c-comparisons/index.html index 677fdbf1..a7aa794d 100644 --- a/v0.2.8/examples/c-comparisons/index.html +++ b/v0.2.8/examples/c-comparisons/index.html @@ -1,5 +1,460 @@ -Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.

This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.

This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.
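As a hedged preview of what follows; the kernel, lengthscale, link function, and labels are illustrative assumptions:

using ApproximateGPs
using LogExpFunctions: logistic

xs = range(0, 6; length=30)
ys = xs .> 3                      # hypothetical binary labels

lf = LatentGP(GP(with_lengthscale(Matern52Kernel(), 1.5)),
              BernoulliLikelihood(logistic), 1e-8)
ApproximateGPs.approx_lml(LaplaceApproximation(), lf(xs), ys)  # Laplace approximation to the evidence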

Setup

using ApproximateGPs
 using LinearAlgebra
 using Distributions
 using LogExpFunctions: logistic, softplus, invsoftplus
@@ -7946,3 +8401,4 @@ 

Manifest

To reproduce this notebook's package environment, you can download the full Manifest.toml.


This page was generated using Literate.jl.

+ diff --git a/v0.2.8/index.html b/v0.2.8/index.html index 05c84666..5223d47f 100644 --- a/v0.2.8/index.html +++ b/v0.2.8/index.html @@ -1,2 +1,458 @@ -Home · ApproximateGPs.jl
+Home · ApproximateGPs.jl
+ diff --git a/v0.2.8/search/index.html b/v0.2.8/search/index.html index 3a0d04f5..9ba9bd6f 100644 --- a/v0.2.8/search/index.html +++ b/v0.2.8/search/index.html @@ -1,2 +1,458 @@ -Search · ApproximateGPs.jl +Search · ApproximateGPs.jl diff --git a/v0.2.8/userguide/index.html b/v0.2.8/userguide/index.html index 27f09ca0..d6ce2fa2 100644 --- a/v0.2.8/userguide/index.html +++ b/v0.2.8/userguide/index.html @@ -1,5 +1,460 @@ -User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
+User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
 rng = MersenneTwister(1453)  # set a random seed

First, we construct a prior Gaussian process with a Matern-3/2 kernel and zero mean function, and sample some data. More exotic kernels can be constructed using KernelFunctions.jl.

f = GP(Matern32Kernel())
 
 x = rand(rng, 100)
@@ -11,3 +466,4 @@
 
 sva_posterior = posterior(approx)  # Create the approximate posterior

The Evidence Lower Bound (ELBO)

The approximate posterior constructed above will be a very poor approximation, since q was simply chosen to have zero mean and covariance I. A measure of the quality of the approximation is given by the ELBO. Optimising this term with respect to the parameters of q and the inducing input locations z will improve the approximation.

elbo(SparseVariationalApproximation(fz, q), fx, y)

A detailed example of how to carry out such optimisation is given in Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl. For an example of non-conjugate inference, see Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS.

Available Parametrizations

Two parametrizations of q(u) are presently available: Centered and NonCentered. The Centered parametrization expresses q(u) directly in terms of its mean and covariance. The NonCentered parametrization instead parametrizes the mean and covariance of ε := cholesky(cov(u)).U' \ (u - mean(u)). These parametrizations are also known respectively as "Unwhitened" and "Whitened".

The choice of parametrization can have a substantial impact on the time it takes for ELBO optimization to converge, and which parametrization is better in a particular situation is not generally obvious. That being said, the NonCentered parametrization often converges in fewer iterations, so it is the default – it is what is used in all of the examples above.

If you require a particular parametrization, simply use the 3-argument version of the approximation constructor:

SparseVariationalApproximation(Centered(), fz, q)
 SparseVariationalApproximation(NonCentered(), fz, q)

For a general discussion around these two parametrizations, see e.g. [Gorinova]. For a GP-specific discussion, see e.g. section 3.4 of [Paciorek].
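Finally, a hedged sketch of querying the approximate posterior built in this guide at new test inputs; x_test and all sizes are assumptions:

using ApproximateGPs, Distributions, LinearAlgebra, Random

rng = MersenneTwister(1453)
f = GP(Matern32Kernel())
x = rand(rng, 100)
fx = f(x, 0.1)
y = rand(rng, fx)

fz = f(x[1:10])
q = MvNormal(zeros(10), Matrix{Float64}(I, 10, 10))
sva_posterior = posterior(SparseVariationalApproximation(fz, q))

x_test = range(0, 1; length=5)
mean(sva_posterior(x_test)), var(sva_posterior(x_test))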

+ diff --git a/v0.3.4/api/index.html b/v0.3.4/api/index.html index 0b58f3a1..1170a000 100644 --- a/v0.3.4/api/index.html +++ b/v0.3.4/api/index.html @@ -1,2 +1,458 @@ -ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ diff --git a/v0.3.4/api/laplace/index.html b/v0.3.4/api/laplace/index.html index ef3f367f..210e1bf2 100644 --- a/v0.3.4/api/laplace/index.html +++ b/v0.3.4/api/laplace/index.html @@ -1,2 +1,458 @@ -Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
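A hedged sketch of these keyword arguments in use; the one-parameter model and labels are assumptions, and the hypothetical callback merely counts Newton steps:

using ApproximateGPs

build_latent_gp(θ) = LatentGP(GP(only(θ) * Matern32Kernel()), BernoulliLikelihood(), 1e-8)

xs = range(0, 5; length=20)
ys = rand(Bool, 20)               # hypothetical binary labels

newton_steps = Ref(0)
count_step(fnew, cache) = (newton_steps[] += 1)

objective = build_laplace_objective(build_latent_gp, xs, ys;
                                    newton_warmstart=true,
                                    newton_callback=count_step,
                                    newton_maxiter=50)
objective([1.0])  # one evaluation; newton_steps[] now holds the step count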
+ diff --git a/v0.3.4/api/sparsevariational/index.html b/v0.3.4/api/sparsevariational/index.html index ec72aa70..5301e2ea 100644 --- a/v0.3.4/api/sparsevariational/index.html +++ b/v0.3.4/api/sparsevariational/index.html @@ -1,4 +1,460 @@ -Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
+Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
     ::Parametrization, fz::FiniteGP, q::AbstractMvNormal
 ) where {Parametrization}

Produce a SparseVariationalApproximation{Parametrization}, which packages the prior over the pseudo-points, fz, and the approximate posterior at the pseudo-points, q, together into a single object.

The Parametrization determines the precise manner in which q and fz are interpreted. Existing parametrizations include Centered and NonCentered.

source
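A hedged construction sketch; the inputs, kernel, and q are toy assumptions:

using ApproximateGPs, Distributions, LinearAlgebra

f = GP(SEKernel())
fz = f([0.0, 0.5, 1.0], 1e-6)
q = MvNormal(zeros(3), Matrix{Float64}(I, 3, 3))

sva = SparseVariationalApproximation(Centered(), fz, q)
post = posterior(sva)             # an AbstractGPs-compatible posterior
mean(post([0.25, 0.75]))          # query it like any other GP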
+ diff --git a/v0.3.4/examples/a-regression/index.html b/v0.3.4/examples/a-regression/index.html index fc38f3ac..bfcb037f 100644 --- a/v0.3.4/examples/a-regression/index.html +++ b/v0.3.4/examples/a-regression/index.html @@ -1,5 +1,460 @@ -Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using Distributions
 using LinearAlgebra
 
@@ -131,3 +586,4 @@ 
Package and system information
JULIA_DEBUG = Documenter

This page was generated using Literate.jl.

+ diff --git a/v0.3.4/examples/b-classification/index.html b/v0.3.4/examples/b-classification/index.html index 6507d058..3bf9649d 100644 --- a/v0.3.4/examples/b-classification/index.html +++ b/v0.3.4/examples/b-classification/index.html @@ -1,5 +1,460 @@ -Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using ParameterHandling
 using Zygote
 using Distributions
@@ -1447,3 +1902,4 @@ 
Package and system information
JULIA_DEBUG = Documenter

This page was generated using Literate.jl.

+ diff --git a/v0.3.4/examples/c-comparisons/index.html b/v0.3.4/examples/c-comparisons/index.html index 531bad9b..8664c47d 100644 --- a/v0.3.4/examples/c-comparisons/index.html +++ b/v0.3.4/examples/c-comparisons/index.html @@ -1,5 +1,460 @@ -Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using LinearAlgebra
 using Distributions
 using LogExpFunctions: logistic, softplus, invsoftplus
@@ -7962,3 +8417,4 @@ 
Package and system information
JULIA_DEBUG = Documenter

This page was generated using Literate.jl.

+ diff --git a/v0.3.4/index.html b/v0.3.4/index.html index b433cce2..5af3eb0b 100644 --- a/v0.3.4/index.html +++ b/v0.3.4/index.html @@ -1,2 +1,458 @@ -Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+ diff --git a/v0.3.4/search/index.html b/v0.3.4/search/index.html index d50395ac..d7eb3a4d 100644 --- a/v0.3.4/search/index.html +++ b/v0.3.4/search/index.html @@ -1,2 +1,458 @@ -Search · ApproximateGPs.jl +Search · ApproximateGPs.jl diff --git a/v0.3.4/userguide/index.html b/v0.3.4/userguide/index.html index 4fbefea9..bf7666b0 100644 --- a/v0.3.4/userguide/index.html +++ b/v0.3.4/userguide/index.html @@ -1,5 +1,460 @@ -User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
+User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
 rng = MersenneTwister(1453)  # set a random seed

First, we construct a prior Gaussian process with a Matern-3/2 kernel and zero mean function, and sample some data. More exotic kernels can be constructed using KernelFunctions.jl.

f = GP(Matern32Kernel())
 
 x = rand(rng, 100)
@@ -11,3 +466,4 @@
 
 sva_posterior = posterior(approx)  # Create the approximate posterior

The Evidence Lower Bound (ELBO)

The approximate posterior constructed above will be a very poor approximation, since q was simply chosen to have zero mean and covariance I. A measure of the quality of the approximation is given by the ELBO. Optimising this term with respect to the parameters of q and the inducing input locations z will improve the approximation.

elbo(SparseVariationalApproximation(fz, q), fx, y)

A detailed example of how to carry out such optimisation is given in Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl. For an example of non-conjugate inference, see Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS.

Available Parametrizations

Two parametrizations of q(u) are presently available: Centered and NonCentered. The Centered parametrization expresses q(u) directly in terms of its mean and covariance. The NonCentered parametrization instead parametrizes the mean and covariance of ε := cholesky(cov(u)).U' \ (u - mean(u)). These parametrizations are also known respectively as "Unwhitened" and "Whitened".

The choice of parametrization can have a substantial impact on the time it takes for ELBO optimization to converge, and which parametrization is better in a particular situation is not generally obvious. That being said, the NonCentered parametrization often converges in fewer iterations, so it is the default – it is what is used in all of the examples above.

If you require a particular parametrization, simply use the 3-argument version of the approximation constructor:

SparseVariationalApproximation(Centered(), fz, q)
 SparseVariationalApproximation(NonCentered(), fz, q)

For a general discussion around these two parametrizations, see e.g. [Gorinova]. For a GP-specific discussion, see e.g. section 3.4 of [Paciorek].
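As a hedged numerical check of the correspondence just described: a Centered q(u) and the NonCentered q(ε) related by ε = L \ (u - m) describe the same approximate posterior, so their ELBOs should agree up to round-off; the toy data are assumptions:

using ApproximateGPs, Distributions, LinearAlgebra, Random

rng = MersenneTwister(1453)
f = GP(Matern32Kernel())
x = rand(rng, 20)
fx = f(x, 0.1)
y = rand(rng, fx)

fz = f(x[1:5], 1e-6)
m = mean(fz)
L = cholesky(Symmetric(cov(fz))).L

q_u = MvNormal(m .+ 0.1, Matrix{Float64}(I, 5, 5))                 # Centered q(u)
q_ε = MvNormal(L \ (mean(q_u) - m), Symmetric(L \ cov(q_u) / L'))  # NonCentered q(ε)

elbo(SparseVariationalApproximation(Centered(), fz, q_u), fx, y) ≈
    elbo(SparseVariationalApproximation(NonCentered(), fz, q_ε), fx, y)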

+ diff --git a/v0.4.3/api/index.html b/v0.4.3/api/index.html index f908b90f..1eb0904e 100644 --- a/v0.4.3/api/index.html +++ b/v0.4.3/api/index.html @@ -1,2 +1,458 @@ -ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
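A hedged usage sketch, with LaplaceApproximation standing in for <Approximation>; the model and toy labels are assumptions:

using ApproximateGPs

lf = LatentGP(GP(Matern32Kernel()), BernoulliLikelihood(), 1e-8)
xs = range(0, 5; length=20)
ys = rand(Bool, 20)               # hypothetical binary labels
approx_lml(LaplaceApproximation(), lf(xs), ys)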
+ diff --git a/v0.4.3/api/laplace/index.html b/v0.4.3/api/laplace/index.html index ea2500f1..ace1b184 100644 --- a/v0.4.3/api/laplace/index.html +++ b/v0.4.3/api/laplace/index.html @@ -1,2 +1,458 @@ -Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+ diff --git a/v0.4.3/api/sparsevariational/index.html b/v0.4.3/api/sparsevariational/index.html index 41aaa344..27ec192e 100644 --- a/v0.4.3/api/sparsevariational/index.html +++ b/v0.4.3/api/sparsevariational/index.html @@ -1,4 +1,460 @@ -Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
+Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
     ::Parametrization, fz::FiniteGP, q::AbstractMvNormal
 ) where {Parametrization}

Produce a SparseVariationalApproximation{Parametrization}, which packages the prior over the pseudo-points, fz, and the approximate posterior at the pseudo-points, q, together into a single object.

The Parametrization determines the precise manner in which q and fz are interpreted. Existing parametrizations include Centered and NonCentered.

source
+ diff --git a/v0.4.3/examples/a-regression/index.html b/v0.4.3/examples/a-regression/index.html index ef9be7c0..9b12373a 100644 --- a/v0.4.3/examples/a-regression/index.html +++ b/v0.4.3/examples/a-regression/index.html @@ -1,5 +1,460 @@ -Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using Distributions
 using LinearAlgebra
 
@@ -132,3 +587,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/fsJ6N/src

This page was generated using Literate.jl.

+ diff --git a/v0.4.3/examples/b-classification/index.html b/v0.4.3/examples/b-classification/index.html index 5b7113b8..ab714ec4 100644 --- a/v0.4.3/examples/b-classification/index.html +++ b/v0.4.3/examples/b-classification/index.html @@ -1,5 +1,460 @@ -Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using ParameterHandling
 using Zygote
 using Distributions
@@ -1448,3 +1903,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/fsJ6N/src

This page was generated using Literate.jl.

+ diff --git a/v0.4.3/examples/c-comparisons/index.html b/v0.4.3/examples/c-comparisons/index.html index 8c35600d..64e16d6a 100644 --- a/v0.4.3/examples/c-comparisons/index.html +++ b/v0.4.3/examples/c-comparisons/index.html @@ -1,5 +1,460 @@ -Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using LinearAlgebra
 using Distributions
 using LogExpFunctions: logistic, softplus, invsoftplus
@@ -7963,3 +8418,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/fsJ6N/src

This page was generated using Literate.jl.

+ diff --git a/v0.4.3/index.html b/v0.4.3/index.html index 3b4d0f81..75ab3e24 100644 --- a/v0.4.3/index.html +++ b/v0.4.3/index.html @@ -1,2 +1,458 @@ -Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+ diff --git a/v0.4.3/search/index.html b/v0.4.3/search/index.html index cfdea227..d547d37b 100644 --- a/v0.4.3/search/index.html +++ b/v0.4.3/search/index.html @@ -1,2 +1,458 @@ -Search · ApproximateGPs.jl +Search · ApproximateGPs.jl diff --git a/v0.4.3/userguide/index.html b/v0.4.3/userguide/index.html index 7fbf355e..6684cf0e 100644 --- a/v0.4.3/userguide/index.html +++ b/v0.4.3/userguide/index.html @@ -1,5 +1,460 @@ -User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
+User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
 rng = MersenneTwister(1453)  # set a random seed

First, we construct a prior Gaussian process with a Matern-3/2 kernel and zero mean function, and sample some data. More exotic kernels can be constructed using KernelFunctions.jl.

f = GP(Matern32Kernel())
 
 x = rand(rng, 100)
@@ -11,3 +466,4 @@
 
 sva_posterior = posterior(approx)  # Create the approximate posterior

The Evidence Lower Bound (ELBO)

The approximate posterior constructed above will be a very poor approximation, since q was simply chosen to have zero mean and covariance I. A measure of the quality of the approximation is given by the ELBO. Optimising this term with respect to the parameters of q and the inducing input locations z will improve the approximation.

elbo(SparseVariationalApproximation(fz, q), fx, y)

A detailed example of how to carry out such optimisation is given in Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl. For an example of non-conjugate inference, see Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS.

Available Parametrizations

Two parametrizations of q(u) are presently available: Centered and NonCentered. The Centered parametrization expresses q(u) directly in terms of its mean and covariance. The NonCentered parametrization instead parametrizes the mean and covariance of ε := cholesky(cov(u)).U' \ (u - mean(u)). These parametrizations are also known respectively as "Unwhitened" and "Whitened".

The choice of parametrization can have a substantial impact on the time it takes for ELBO optimization to converge, and which parametrization is better in a particular situation is not generally obvious. That being said, the NonCentered parametrization often converges in fewer iterations, so it is the default – it is what is used in all of the examples above.

If you require a particular parametrization, simply use the 3-argument version of the approximation constructor:

SparseVariationalApproximation(Centered(), fz, q)
 SparseVariationalApproximation(NonCentered(), fz, q)

For a general discussion around these two parametrizations, see e.g. [Gorinova]. For a GP-specific discussion, see e.g. section 3.4 of [Paciorek].

+ diff --git a/v0.4.4/api/index.html b/v0.4.4/api/index.html index d05ba75d..7d555a25 100644 --- a/v0.4.4/api/index.html +++ b/v0.4.4/api/index.html @@ -1,2 +1,458 @@ -ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ diff --git a/v0.4.4/api/laplace/index.html b/v0.4.4/api/laplace/index.html index 5dfae6ff..249b087a 100644 --- a/v0.4.4/api/laplace/index.html +++ b/v0.4.4/api/laplace/index.html @@ -1,2 +1,458 @@ -Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+ diff --git a/v0.4.4/api/sparsevariational/index.html b/v0.4.4/api/sparsevariational/index.html index cf830650..eb86bf46 100644 --- a/v0.4.4/api/sparsevariational/index.html +++ b/v0.4.4/api/sparsevariational/index.html @@ -1,4 +1,460 @@ -Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
+Sparse Variational Approximation · ApproximateGPs.jl

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximationMethod
SparseVariationalApproximation(
     ::Parametrization, fz::FiniteGP, q::AbstractMvNormal
 ) where {Parametrization}

Produce a SparseVariationalApproximation{Parametrization}, which packages the prior over the pseudo-points, fz, and the approximate posterior at the pseudo-points, q, together into a single object.

The Parametrization determines the precise manner in which q and fz are interpreted. Existing parametrizations include Centered and NonCentered.

source
+ diff --git a/v0.4.4/examples/a-regression/index.html b/v0.4.4/examples/a-regression/index.html index 56fbfe33..8fe13b2f 100644 --- a/v0.4.4/examples/a-regression/index.html +++ b/v0.4.4/examples/a-regression/index.html @@ -1,5 +1,460 @@ -Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl · ApproximateGPs.jl

A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference in large scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using Distributions
 using LinearAlgebra
 
@@ -134,3 +589,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/fsJ6N/src

This page was generated using Literate.jl.

+ diff --git a/v0.4.4/examples/b-classification/index.html b/v0.4.4/examples/b-classification/index.html index 58393599..c5a376b6 100644 --- a/v0.4.4/examples/b-classification/index.html +++ b/v0.4.4/examples/b-classification/index.html @@ -1,5 +1,460 @@ -Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS · ApproximateGPs.jl

Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using ParameterHandling
 using Zygote
 using Distributions
@@ -416,3 +871,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/fsJ6N/src

This page was generated using Literate.jl.

+ diff --git a/v0.4.4/examples/c-comparisons/index.html b/v0.4.4/examples/c-comparisons/index.html index 017b04e5..4bc71501 100644 --- a/v0.4.4/examples/c-comparisons/index.html +++ b/v0.4.4/examples/c-comparisons/index.html @@ -1,5 +1,460 @@ -Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
+Binary Classification with Laplace approximation · ApproximateGPs.jl

Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
 using LinearAlgebra
 using Distributions
 using LogExpFunctions: logistic, softplus, invsoftplus
@@ -622,3 +1077,4 @@ 
Package and system information
JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/fsJ6N/src

This page was generated using Literate.jl.

+ diff --git a/v0.4.4/index.html b/v0.4.4/index.html index 1e47515e..bf95b91c 100644 --- a/v0.4.4/index.html +++ b/v0.4.4/index.html @@ -1,2 +1,458 @@ -Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+Home · ApproximateGPs.jl

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.

+ diff --git a/v0.4.4/search/index.html b/v0.4.4/search/index.html index c6ca9387..40a8f06d 100644 --- a/v0.4.4/search/index.html +++ b/v0.4.4/search/index.html @@ -1,2 +1,458 @@ -Search · ApproximateGPs.jl +Search · ApproximateGPs.jl diff --git a/v0.4.4/userguide/index.html b/v0.4.4/userguide/index.html index e1618634..a6cca5e6 100644 --- a/v0.4.4/userguide/index.html +++ b/v0.4.4/userguide/index.html @@ -1,5 +1,460 @@ -User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
+User Guide · ApproximateGPs.jl

User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of its features are reexported automatically by ApproximateGPs.

using ApproximateGPs, Random
 rng = MersenneTwister(1453)  # set a random seed

First, we construct a prior Gaussian process with a Matern-3/2 kernel and zero mean function, and sample some data. More exotic kernels can be constructed using KernelFunctions.jl.

f = GP(Matern32Kernel())
 
 x = rand(rng, 100)
@@ -11,3 +466,4 @@
 
 sva_posterior = posterior(approx)  # Create the approximate posterior

The Evidence Lower Bound (ELBO)

The approximate posterior constructed above will be a very poor approximation, since q was simply chosen to have zero mean and covariance I. A measure of the quality of the approximation is given by the ELBO. Optimising this term with respect to the parameters of q and the inducing input locations z will improve the approximation.

elbo(SparseVariationalApproximation(fz, q), fx, y)

A detailed example of how to carry out such optimisation is given in Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl. For an example of non-conjugate inference, see Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS.

Available Parametrizations

Two parametrizations of q(u) are presently available: Centered and NonCentered. The Centered parametrization expresses q(u) directly in terms of its mean and covariance. The NonCentered parametrization instead parametrizes the mean and covariance of ε := cholesky(cov(u)).U' \ (u - mean(u)). These parametrizations are also known respectively as "Unwhitened" and "Whitened".

The choice of parametrization can have a substantial impact on the time it takes for ELBO optimization to converge, and which parametrization is better in a particular situation is not generally obvious. That being said, the NonCentered parametrization often converges in fewer iterations, so it is the default – it is what is used in all of the examples above.

If you require a particular parametrization, simply use the 3-argument version of the approximation constructor:

SparseVariationalApproximation(Centered(), fz, q)
 SparseVariationalApproximation(NonCentered(), fz, q)

For a general discussion around these two parametrizations, see e.g. [Gorinova]. For a GP-specific discussion, see e.g. section 3.4 of [Paciorek].

+ diff --git a/v0.4.5/api/index.html b/v0.4.5/api/index.html index 6ea02682..8b68b909 100644 --- a/v0.4.5/api/index.html +++ b/v0.4.5/api/index.html @@ -1,2 +1,458 @@ -ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ApproximateGPs API · ApproximateGPs.jl

ApproximateGPs API

ApproximateGPs.API.approx_lmlFunction
approx_lml(approx::<Approximation>, lfx::LatentFiniteGP, ys)

Compute an approximation to the log of the marginal likelihood (also known as "evidence") under the given approx to the posterior. This approximation can be used to optimise the hyperparameters of lfx.

This should become part of the AbstractGPs API (see JuliaGaussianProcesses/AbstractGPs.jl#221).

source
+ diff --git a/v0.4.5/api/laplace/index.html b/v0.4.5/api/laplace/index.html index 33c72214..8581407e 100644 --- a/v0.4.5/api/laplace/index.html +++ b/v0.4.5/api/laplace/index.html @@ -1,2 +1,458 @@ -Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
+Laplace Approximation · ApproximateGPs.jl

Laplace Approximation

ApproximateGPs.LaplaceApproximationModule.build_laplace_objectiveMethod
build_laplace_objective(build_latent_gp, xs, ys; kwargs...)

Construct a closure that computes the minimisation objective for optimising hyperparameters of the latent GP in the Laplace approximation. The returned closure passes its arguments to build_latent_gp, which must return the LatentGP prior.

Keyword arguments

  • newton_warmstart=true: (default) begin Newton optimisation at the mode of the previous call to the objective
  • newton_callback: called as newton_callback(fnew, cache) after each Newton step
  • newton_maxiter=100: maximum number of Newton steps.
source
diff --git a/v0.4.5/api/sparsevariational/index.html b/v0.4.5/api/sparsevariational/index.html

Sparse Variational Approximation

ApproximateGPs.SparseVariationalApproximationModule.SparseVariationalApproximation (Method)
SparseVariationalApproximation(
    ::Parametrization, fz::FiniteGP, q::AbstractMvNormal
) where {Parametrization}

Produce a SparseVariationalApproximation{Parametrization}, which packages the prior over the pseudo-points, fz, and the approximate posterior at the pseudo-points, q, together into a single object.

The Parametrization determines the precise manner in which q and fz are interpreted. Existing parametrizations include Centered and NonCentered.

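For example (a minimal sketch; the pseudo-inputs and variational distribution below are invented for illustration):

using ApproximateGPs, Distributions, LinearAlgebra

z = collect(range(0.0, 1.0; length=5))            # pseudo-inputs
fz = GP(Matern32Kernel())(z, 1e-6)                # prior at the pseudo-points
q = MvNormal(zeros(5), Matrix{Float64}(I, 5, 5))  # approximate posterior over u = f(z)

sva = SparseVariationalApproximation(NonCentered(), fz, q)
post = posterior(sva)  # the corresponding approximate posterior GP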
diff --git a/v0.4.5/examples/a-regression/index.html b/v0.4.5/examples/a-regression/index.html


A recreation of https://gpflow.readthedocs.io/en/master/notebooks/advanced/gps_for_big_data.html

Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


In this example, we show how to construct and train the stochastic variational Gaussian process (SVGP) model for efficient inference on large-scale datasets. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
using Distributions
using LinearAlgebra
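The body of the example is elided in this diff. As a rough, hedged sketch of the core construction (the data, inducing-point count, and noise level are invented; the full example additionally uses Flux.jl to optimise the kernel and variational parameters over minibatches):

N = 1000  # a synthetic stand-in for a large dataset
x = rand(N) .* 10
y = sin.(x) .+ 0.1 .* randn(N)

M = 20    # number of inducing points
z = x[1:M]
f = GP(SEKernel())
fz = f(z, 1e-6)
q = MvNormal(zeros(M), Matrix{Float64}(I, M, M))

# minibatch objective: num_data rescales the likelihood term to the full dataset size
loss(xb, yb) = -elbo(SparseVariationalApproximation(fz, q), f(xb, 0.1), yb; num_data=N)

Minimising loss over random minibatches (xb, yb) with a stochastic optimiser reproduces the shape of the SVGP training loop used in the full example.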
 
This page was generated using Literate.jl.

diff --git a/v0.4.5/examples/b-classification/index.html b/v0.4.5/examples/b-classification/index.html


Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the stochastic variational Gaussian process (SVGP) model. For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
using ParameterHandling
using Zygote
using Distributions
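The example body is elided in this diff. A heavily simplified, hedged sketch of the idea (synthetic data; only two kernel hyperparameters are optimised here, whereas the full example also optimises the variational parameters via ParameterHandling):

using Optim, LinearAlgebra

x = collect(range(0.0, 10.0; length=50))
y = rand(Bool, 50)  # stand-in binary labels

function loss(theta)  # negative ELBO as a function of log-scale hyperparameters
    kernel = exp(theta[1]) * with_lengthscale(SEKernel(), exp(theta[2]))
    lf = LatentGP(GP(kernel), BernoulliLikelihood(), 1e-6)
    fz = GP(kernel)(x[1:10], 1e-6)  # ten pseudo-points, fixed at the first inputs
    q = MvNormal(zeros(10), Matrix{Float64}(I, 10, 10))
    return -elbo(SparseVariationalApproximation(fz, q), lf(x), y)
end

result = optimize(loss, t -> only(Zygote.gradient(loss, t)), zeros(2), LBFGS(); inplace=false)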
This page was generated using Literate.jl.

diff --git a/v0.4.5/examples/c-comparisons/index.html b/v0.4.5/examples/c-comparisons/index.html


Binary Classification with Laplace approximation

You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.


This example demonstrates how to carry out non-conjugate Gaussian process inference using the Laplace approximation.

For a basic introduction to the functionality of this library, please refer to the User Guide.

Setup

using ApproximateGPs
using LinearAlgebra
using Distributions
using LogExpFunctions: logistic, softplus, invsoftplus
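The example body is elided in this diff, so here is a condensed, hedged sketch of the overall workflow (synthetic data and an invented hyperparameter parametrization; the full example develops each step in detail):

using Optim

x = collect(range(0.0, 6.0; length=40))
y = rand(Bool, 40)  # stand-in binary labels

function build_latent_gp(theta)
    kernel = softplus(theta[1]) * with_lengthscale(SEKernel(), softplus(theta[2]))
    return LatentGP(GP(kernel), BernoulliLikelihood(logistic), 1e-8)
end

objective = build_laplace_objective(build_latent_gp, x, y)
result = optimize(objective, [1.0, 1.0], LBFGS())  # finite-difference gradients by default

lf = build_latent_gp(Optim.minimizer(result))
f_post = posterior(LaplaceApproximation(), lf(x), y)  # Laplace-approximate posterior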
This page was generated using Literate.jl.

diff --git a/v0.4.5/index.html b/v0.4.5/index.html

ApproximateGPs.jl

ApproximateGPs.jl is a package that provides some approximate inference algorithms for Gaussian process models, built on top of AbstractGPs.jl.


diff --git a/v0.4.5/search/index.html b/v0.4.5/search/index.html

diff --git a/v0.4.5/userguide/index.html b/v0.4.5/userguide/index.html


User Guide

Setup

ApproximateGPs builds on top of AbstractGPs.jl, so all of AbstractGPs' features are automatically reexported by ApproximateGPs.

using ApproximateGPs, Random
rng = MersenneTwister(1453)  # set a random seed

First, we construct a prior Gaussian process with a Matern-3/2 kernel and zero mean function, and sample some data. More exotic kernels can be constructed using KernelFunctions.jl.

f = GP(Matern32Kernel())

x = rand(rng, 100)
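The diff drops the intermediate lines of this walkthrough. A plausible reconstruction, inferred from the ELBO call below (which uses fx, y, fz, and q) and from the note that q has zero mean and identity covariance:

using Distributions, LinearAlgebra  # for MvNormal and I (an assumption; the elided text may already load these)

fx = f(x, 0.1)     # prior at the observed inputs, with observation noise
y = rand(rng, fx)  # sample synthetic observations

z = x[1:10]        # a subset of the inputs as pseudo-points
fz = f(z)
q = MvNormal(zeros(length(z)), I)

approx = SparseVariationalApproximation(fz, q)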
sva_posterior = posterior(approx)  # Create the approximate posterior

The Evidence Lower Bound (ELBO)

The approximate posterior constructed above will be a very poor approximation, since q was simply chosen to have zero mean and covariance I. A measure of the quality of the approximation is given by the ELBO. Optimising this term with respect to the parameters of q and the inducing input locations z will improve the approximation.

elbo(SparseVariationalApproximation(fz, q), fx, y)

A detailed example of how to carry out such optimisation is given in Regression: Sparse Variational Gaussian Process for Stochastic Optimisation with Flux.jl. For an example of non-conjugate inference, see Classification: Sparse Variational Approximation for Non-Conjugate Likelihoods with Optim's L-BFGS.

Available Parametrizations

Two parametrizations of q(u) are presently available: Centered and NonCentered. The Centered parametrization expresses q(u) directly in terms of its mean and covariance. The NonCentered parametrization instead parametrizes the mean and covariance of ε := cholesky(cov(u)).U' \ (u - mean(u)). These parametrizations are also known respectively as "Unwhitened" and "Whitened".

The choice of parametrization can have a substantial impact on how quickly ELBO optimisation converges, and it is not generally obvious which parametrization is better in a particular situation. That said, the NonCentered parametrization often converges in fewer iterations, so it is the default and is the one used in all of the examples above.

If you require a particular parametrization, simply use the 3-argument version of the approximation constructor:

SparseVariationalApproximation(Centered(), fz, q)
SparseVariationalApproximation(NonCentered(), fz, q)

For a general discussion of these two parametrizations, see e.g. [Gorinova]. For a GP-specific discussion, see e.g. Section 3.4 of [Paciorek].
