Added support for PyJAGS in ArviZ #1219
Conversation
- reformatted code
With respect to the Pylint failure: the StackOverflow answer at https://stackoverflow.com/a/54045715/4417567 and existing ArviZ code suggest passing the name parameter to the fixture, but it seems to be rather more complicated.
Hi, thanks for the PR. I think the converter should follow the pattern used elsewhere in ArviZ, e.g. https://github.com/arviz-devs/arviz/blob/master/arviz/data/io_pystan.py: create a converter class, handle each group with a class method that returns an xarray dataset, and finally add a method that transforms the class into InferenceData. To extract library-specific data you can use "external" functions, as is done for PyStan, which return dictionaries containing the data.
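A minimal sketch of the converter-class pattern described above, loosely modeled on io_pystan.py. The class name `PyJAGSConverter`, the prior handling, and the assumption that PyJAGS returns a `{name: array}` dict with axes ordered `(shape..., draws, chains)` are illustrative, not the final API of this PR.

```python
# Sketch only: names and axis conventions are assumptions, not the PR's final code.
import numpy as np
import arviz as az


class PyJAGSConverter:
    def __init__(self, *, posterior=None, prior=None):
        self.posterior = posterior  # dict mapping variable names to sample arrays
        self.prior = prior

    def posterior_to_xarray(self):
        # one class method per group, each returning an xarray dataset;
        # reorder axes to the (chains, draws, shape...) layout ArviZ expects
        data = {
            name: np.moveaxis(values, [-1, -2], [0, 1])
            for name, values in self.posterior.items()
        }
        return az.convert_to_dataset(data)

    def to_inference_data(self):
        # finally, assemble the per-group datasets into a single InferenceData
        return az.InferenceData(posterior=self.posterior_to_xarray())
```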
Also, notice that PyStan also returns the warmup iterations, but other libraries may return only posterior samples, so the structure might be a bit different.
- removed dependence on az.convert_to_inference_data
- added support for prior distribution
Thank you for the input, Ari! As you suspected, the solution looks slightly different from the PyStan implementation. All that PyJAGS' sample method returns is a dictionary mapping variable names to chains; there is no fit object as in PyStan. Warm-up iterations can be handled by drawing samples without saving them and then drawing the actual samples that are kept. With ArviZ it is now possible to draw both warm-up and actual samples in one step with PyJAGS and let the from_pyjags function know how many of those were warm-up samples.
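A sketch of the workflow described above: draw warm-up and kept iterations in a single PyJAGS call and tell from_pyjags how many of them were warm-up. The parameter names `warmup_iterations` and `save_warmup` are taken as described in this PR, and the toy model is an illustrative assumption.

```python
# Sketch under stated assumptions; the model and parameter names are illustrative.
import pyjags
import arviz as az

model_code = """
model {
    mu ~ dnorm(0, 0.0001)
    for (i in 1:N) {
        y[i] ~ dnorm(mu, 1)
    }
}
"""

model = pyjags.Model(code=model_code, data={"y": [1.2, 0.7, -0.3], "N": 3}, chains=4)

# draw warm-up and kept iterations in one call ...
samples = model.sample(2000, vars=["mu"])

# ... and tell from_pyjags how many of those draws were warm-up
idata = az.from_pyjags(
    posterior=samples,
    warmup_iterations=1000,
    save_warmup=True,
)
```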
- changed order of imports
Looks good.
Added some comments
…ples and get_draws - removed unused code
We are rerunning all notebooks in #1217, so this should be no problem.
Given that there is a try/except block when importing pyjags, do we want to test with pyjags installed on the latest external build and without pyjags in the special build? I can help with Azure to set this up, and also with adding another test to check that the library attrs are stored correctly.
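A sketch of the optional-import guard mentioned above and of the library attrs such a test could check. The attribute names `inference_library` and `inference_library_version` are assumptions about what ArviZ records, not a verbatim copy of the PR's tests.

```python
# Sketch only: attribute names are assumptions, not the PR's exact test code.
try:
    import pyjags  # optional dependency: conversion itself only needs the samples dict
except ImportError:  # e.g. a CI build intentionally without pyjags
    pyjags = None


def library_attrs():
    # provenance information a test could assert on the converted InferenceData
    if pyjags is None:
        return {}
    return {
        "inference_library": pyjags.__name__,
        "inference_library_version": getattr(pyjags, "__version__", "unknown"),
    }
```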
- removed assert to type check for tuple
Co-authored-by: Oriol Abril <oriol.abril.pla@gmail.com>
Co-authored-by: Oriol Abril <oriol.abril.pla@gmail.com>
Sorry, big fingers on mobile
LGTM, I don't want to block merging.
I am still curious about the existence of warmup_prior. Does this mean that JAGS does not use forward sampling to get prior samples? Also, why are the posterior and prior warmups the same?
minor nits on documentation
LGTM
At some point I think we should use pyjags to create a prior and posterior for tests.
But that can be done later.
Co-authored-by: Oriol Abril <oriol.abril.pla@gmail.com>
Co-authored-by: Oriol Abril <oriol.abril.pla@gmail.com>
Good question. As far as I can see, JAGS itself has no functionality to explicitly sample from the prior. However, one can simply create a model that leaves out the likelihood (and does not use any data). I have included such a separate prior model in the eight schools example in the inference data cookbook. Theoretically, one could sample directly and hierarchically from the graph of (hyper)priors, without resorting to Metropolis-Hastings (or even Gibbs) sampling, but I do not know whether JAGS recognizes this and samples directly. Allowing for warm-up iterations therefore seems like the more robust choice. More modern MCMC packages such as PyMC3 have features to sample from all kinds of distributions associated with a Bayesian analysis, including predictive distributions, so it makes sense for ArviZ to support those. JAGS is basically a Gibbs sampler that draws from a target distribution described by a graphical model; it is not a Turing-complete language like Stan that lets you generate quantities and solve ODEs while sampling.
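A sketch of the "prior model without a likelihood" idea described above, loosely following the eight schools parameterization. The model code and variable names are illustrative assumptions, not a verbatim copy of the notebook added in this PR.

```python
# Sketch only: the prior-only model and variable names are illustrative.
import pyjags
import arviz as az

prior_code = """
model {
    mu ~ dnorm(0, 0.0001)        # vague prior on the population mean
    tau ~ dunif(0, 100)          # population standard deviation
    for (j in 1:J) {
        theta[j] ~ dnorm(mu, pow(tau, -2))   # JAGS parameterizes normals by precision
    }
}
"""

# no observed data: sampling this model draws from the (hyper)priors themselves
prior_model = pyjags.Model(code=prior_code, data={"J": 8}, chains=4)
prior_samples = prior_model.sample(1000, vars=["mu", "tau", "theta"])

# the prior draws can then be attached to an InferenceData object
idata = az.from_pyjags(prior=prior_samples)
```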
This pull request adds functionality to translate posterior samples generated by PyJAGS to an ArviZ InferenceData object via a new public function from_pyjags. A set of unit tests checks that a round-trip translation from PyJAGS to ArviZ and back retains the same information. A new section has been added to the documentation notebook illustrating the use of ArviZ in conjunction with PyJAGS.