FAQ
What, if any, is the semantic difference between, say,

```julia
m1 = @model begin
    x ~ For(n) do i
        Normal()
    end
end
```

and

```julia
m2 = @model n begin
    x ~ For(n) do i
        Normal()
    end
end
```
Models are like functions. In this case, `m1` is a closure, so the `n` in the body comes from the scope in which the model was defined, just as `For` and `Normal` are. In `m2`, `n` is an argument, just like the argument of a function, so it's easy to evaluate the model on different values of `n`.
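For example, `m2` can be bound to different values of `n` at call time. This is a minimal sketch, assuming the Soss-style API in which calling a model with keyword arguments yields a joint distribution that `rand` can sample (and that the distribution names used in the FAQ are in scope):

```julia
using Soss

m2 = @model n begin
    x ~ For(n) do i
        Normal()
    end
end

# Bind n at call time; each call gives a joint distribution over x.
rand(m2(n = 3))   # x is a length-3 vector of independent Normal() draws
rand(m2(n = 10))  # x is a length-10 vector of independent Normal() draws
```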
Consider the models

```julia
m1 = @model begin
    p = 1
    x ~ Gamma(p)
end
```

and

```julia
m2 = @model begin
    x ~ Gamma(p)
end
```
What is the semantic difference between `m1` and `m2(p = 1)`? More generally, what's the difference between a model and a joint distribution? Is it right to think of a model without unbound parameters as a joint distribution?
Yes, that's the current approach.
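As a concrete illustration (again a minimal sketch under the same assumed API, where `m1()` binds the empty argument list and yields a joint distribution):

```julia
using Soss

m1 = @model begin
    p = 1
    x ~ Gamma(p)
end

# m1 has no unbound parameters, so it can be treated as a joint
# distribution and sampled directly once its (empty) argument list is bound.
rand(m1())  # a named tuple containing a draw of x ~ Gamma(1)
```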
Shouldn't, in general, `model(kwargs...)` produce another model, which might or might not be a joint distribution?
Is this a question about the possibility of partial application? We could definitely be doing better with that. I don't think it's hard, just a matter of thinking through the different methods.
For `m1` as in the previous question, it does not seem to make sense to write `m1(x = 1)` (samples of such a joint distribution will not necessarily have `x == 1`). Why?
The current approach (which may change) is that you can pass any arguments you like to a model, but it will ignore anything that's not an argument. This can be handy once you get the hang of it, because it puts the focus on getting the model right, rather than having to worry so much about manipulating the arguments (usually a named tuple). I'm open to discussing this if you think another approach is better.
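For instance, a minimal sketch assuming the ignore-unknown-arguments behavior described above:

```julia
using Soss

m1 = @model begin
    p = 1
    x ~ Gamma(p)
end

# x is not an argument of m1, so passing x = 1 is simply ignored:
# the draw of x below still comes from Gamma(1) and need not equal 1.
rand(m1(x = 1))
```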
Relatedly: consider the models

```julia
m1 = @model begin
    p = 1
end
```

and

```julia
m2 = @model begin
    p ~ Uniform()
    x ~ Gamma(p)
end
```
Shouldn't `merge(m2, m1)` and `m2(p = 1)` be semantically identical?
This goes back to a previous question: `p` is not an argument of `m2`, so `m2(p = 1)` currently ignores it.
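To make the difference concrete, a minimal sketch assuming the ignore-unknown-arguments behavior above; that `merge` combines the models' statements with `m1`'s `p = 1` taking precedence is an assumption here, taken from the question's premise:

```julia
using Soss

m1 = @model begin
    p = 1
end

m2 = @model begin
    p ~ Uniform()
    x ~ Gamma(p)
end

# p is not an argument of m2, so the keyword is ignored:
# p is still drawn from Uniform() in this sample.
rand(m2(p = 1))

# By contrast, if merge replaces m2's statement for p with m1's,
# the merged model fixes p = 1 before drawing x ~ Gamma(p).
merged = merge(m2, m1)
rand(merged())
```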