
Reduce computation and non-deterministic behaviour in evidence and sampling tests #1475

Open
dilpath opened this issue Sep 30, 2024 · 0 comments

Continuing from #1461 / #1473
It looks like there is unnecessary computation in the evidence tests: there are optimize.minimize and sample.sample calls in them. These could be run once locally and the resulting samples stored, which would speed up the tests and remove most sources of non-deterministic behavior. The stored samples could also be reused in test_samples_ci, test_ground_truth, and test_autocorrelation_pipeline, since sample generation is already tested in test_pipeline (see the sketch below).
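
A minimal sketch of this caching approach follows. The helper names, file path, and exact call signatures here are assumptions for illustration, not the actual test setup:

import numpy as np
import pypesto.optimize as optimize
import pypesto.sample as sample

def generate_stored_samples(problem, path="evidence_test_samples.npz"):
    # One-off local run; not executed as part of the test suite.
    result = optimize.minimize(problem, n_starts=10)
    result = sample.sample(problem, n_samples=1000, result=result)
    # Store the sampled parameter trace for reuse across tests.
    np.savez(path, trace_x=result.sample_result.trace_x)

def load_stored_samples(path="evidence_test_samples.npz"):
    # test_samples_ci, test_ground_truth, etc. would load the same
    # stored samples, making them fast and deterministic.
    return np.load(path)["trace_x"]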

The remaining source of non-determinism would be the bridge sampling itself, e.g.

proposal_samples = np.random.normal(
    loc=posterior_mean,
    scale=np.sqrt(posterior_cov),
    size=n_proposal_samples,
)

This could be resolved with a user-supplied random seed, ideally like

def bridge_sampling_log_evidence(..., rng: np.random.Generator):
    ...
    proposal_samples = rng.normal(
    ...

so that users can specify a random seed via NumPy's newer, recommended Generator API.
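
A more complete sketch of that signature (the parameter names, shapes, and defaults here are illustrative assumptions, not the actual function):

import numpy as np

def bridge_sampling_log_evidence(
    posterior_mean: np.ndarray,   # assumed shape (n_par,)
    posterior_cov: np.ndarray,    # assumed diagonal variances, shape (n_par,)
    n_proposal_samples: int,
    rng: np.random.Generator | None = None,
):
    # Fall back to an unseeded generator so that seeding stays optional.
    if rng is None:
        rng = np.random.default_rng()
    # Same draw as above, but through the injected generator; the 2-D
    # size makes the broadcast against a vector-valued mean explicit.
    proposal_samples = rng.normal(
        loc=posterior_mean,
        scale=np.sqrt(posterior_cov),
        size=(n_proposal_samples, posterior_mean.shape[0]),
    )
    ...

Callers could then pass, e.g., rng=np.random.default_rng(seed=0) to get reproducible evidence estimates.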

There are some other things that aren't clear to me:

  • the same problem isn't used in each evidence computation test -- why not?
  • why is the "real" evidence used in test_harmonic_mean_log_evidence but not in test_bridge_sampling?