Finding p(θ|D) in BoTorch #2890
Hello, I am very new to Bayesian optimization. I am trying to sample the posterior probability distribution of a set of hyperparameters for some given data, p(θ|D), where θ denotes the model hyperparameters and D is the training data. From my understanding, p(θ|D) ∝ p(D|θ)p(θ), where p(D|θ) is the likelihood (whose log is given by `ExactMarginalLogLikelihood` on a single-task GP model) and p(θ) is some prior (I am using a uniform distribution). My sampler is exhibiting some strange behavior, so I wanted to check whether there are issues with my BoTorch code snippet. Does the following code set up the p(θ|D) distribution correctly? If not, what should I change?
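For illustration, here is a minimal sketch of one way such an unnormalized log posterior can be assembled in BoTorch. The toy data, the choice of θ = (lengthscale, noise), and the uniform bounds are assumptions made purely for this sketch:

```python
import torch
from botorch.models import SingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy data D = (train_X, train_Y); purely illustrative.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = torch.sin(train_X.sum(dim=-1, keepdim=True))

model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
model.train()  # the marginal log likelihood is evaluated with the model in train mode


def log_unnormalized_posterior(lengthscale: float, noise: float) -> torch.Tensor:
    """log p(D|theta) + log p(theta) for theta = (lengthscale, noise)."""
    # Uniform prior: log p(theta) is a constant on the support and -inf outside it,
    # so inside the support the unnormalized log posterior is just the log likelihood.
    if not (0.01 < lengthscale < 10.0 and 1e-4 < noise < 1.0):
        return torch.tensor(float("-inf"), dtype=torch.double)
    # Write theta into the model. Recent BoTorch versions use a plain RBF kernel,
    # so the lengthscale sits on covar_module directly; on older versions it lives
    # at model.covar_module.base_kernel.lengthscale.
    model.covar_module.lengthscale = lengthscale
    model.likelihood.noise = noise
    output = model(*model.train_inputs)
    # Note: ExactMarginalLogLikelihood divides by the number of data points and also
    # adds the log densities of any hyperparameter priors registered on the model
    # (SingleTaskGP registers defaults), so it is not exactly log p(D|theta).
    return mll(output, model.train_targets) * train_X.shape[0]
```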
Replies: 1 comment
Sorry about the late response here. What is the strange behavior that your sampler exhibits?

One thing to note is that `SingleTaskGP` by default applies a `Standardize` outcome transform (https://github.com/pytorch/botorch/blob/e7fef3bfa235b468af7ad43a3bd38aa690a161f9/botorch/models/gp_regression.py#L147-L153), so `model(train_x)` returns the posterior distribution in the transformed rather than the original space. You can either use `model.posterior(train_x)` to compute the posterior in the untransformed space, or you can construct the model without a `Standardize` transform by setting `outcome_transform=None` when instantiating the `SingleTaskGP` object.
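For concreteness, a small sketch of the difference (the data below is made up; with `Standardize` in place, `model(train_X)` lives in the standardized space while `model.posterior(train_X)` is mapped back to the scale of `train_Y`):

```python
import torch
from botorch.models import SingleTaskGP

# Made-up data whose outputs are far from zero mean / unit variance.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = 100.0 + 50.0 * torch.sin(train_X.sum(dim=-1, keepdim=True))

model = SingleTaskGP(train_X, train_Y)  # Standardize applied by default
model.eval()

with torch.no_grad():
    # Posterior of the latent GP in the *standardized* outcome space.
    mvn_standardized = model(train_X)
    # Posterior mapped back to the original scale of train_Y.
    posterior = model.posterior(train_X)

print(mvn_standardized.mean[:3])       # values near 0
print(posterior.mean[:3].squeeze(-1))  # values near the scale of train_Y

# Alternative: build the model without the outcome transform.
model_raw = SingleTaskGP(train_X, train_Y, outcome_transform=None)
```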