Introduction to Bayesian A/B Testing notebook produces SamplingError in version 5.16.2 #708

mikeWShef opened this issue on Sep 22, 2024

Notebook title: Introduction to Bayesian A/B Testing
Notebook URL: https://github.com/pymc-devs/pymc-examples/blob/main/examples/causal_inference/bayesian_ab_testing_introduction.ipynb

Issue description

Running the notebook produces a lot of warnings like:

pytensor\tensor\elemwise.py:763: RuntimeWarning: invalid value encountered in log
variables = ufunc(*ufunc_args, **ufunc_kwargs)

Then sampling fails with:

Cell In[14], line 1
----> 1 trace_weak, trace_strong = run_scenario_twovariant(
      2     variants=["A", "B"],
      3     true_rates=[0.23, 0.23],
      4     samples_per_variant=100000,
      5     weak_prior=BetaPrior(alpha=100, beta=100),
      6     strong_prior=BetaPrior(alpha=10000, beta=10000),
      7 )

Cell In[13], line 11, in run_scenario_twovariant(variants, true_rates, samples_per_variant, weak_prior, strong_prior)
      9 data = [BinomialData(**generated[v].to_dict()) for v in variants]
     10 with ConversionModelTwoVariant(priors=weak_prior).create_model(data):
---> 11     trace_weak = pm.sample(draws=5000)
     12 with ConversionModelTwoVariant(priors=strong_prior).create_model(data):
     13     trace_strong = pm.sample(draws=5000)

File ~\anaconda3\envs\pymc_env\Lib\site-packages\pymc\sampling\mcmc.py:776, in sample(draws, tune, chains, cores, random_seed, progressbar, progressbar_theme, step, var_names, nuts_sampler, initvals, init, jitter_max_retries, n_init, trace, discard_tuned_samples, compute_convergence_checks, keep_warning_stat, return_inferencedata, idata_kwargs, nuts_sampler_kwargs, callback, mp_ctx, blas_cores, model, **kwargs)
    774 ip: dict[str, np.ndarray]
    775 for ip in initial_points:
--> 776     model.check_start_vals(ip)
    777     _check_start_shape(model, ip)
    779 if var_names is not None:

File ~\anaconda3\envs\pymc_env\Lib\site-packages\pymc\model\core.py:1793, in Model.check_start_vals(self, start)
   1790 initial_eval = self.point_logps(point=elem)
   1792 if not all(np.isfinite(v) for v in initial_eval.values()):
-> 1793     raise SamplingError(
   1794         "Initial evaluation of model at starting point failed!\n"
   1795         f"Starting values:\n{elem}\n\n"
   1796         f"Logp initial evaluation results:\n{initial_eval}\n"
   1797         "You can call `model.debug()` for more details."
   1798     )

SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'p_logodds__': array([nan, nan])}

Logp initial evaluation results:
{'p': nan, 'y': -inf}
You can call `model.debug()` for more details.
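
For triage, here is a rough, hypothetical stand-in for the failing configuration. It is not the notebook's code: it replaces ConversionModelTwoVariant / BetaPrior with a plain Beta-Binomial model, so it may or may not reproduce the same SamplingError, but it shows the `model.debug()` call that the error message recommends.

```python
# Hypothetical minimal sketch (pymc 5.16.2), approximating the notebook's
# two-variant conversion model with the strong Beta(10000, 10000) prior.
# This is a plain Beta-Binomial stand-in, not the notebook's wrapper classes.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
samples_per_variant = 100_000
true_rates = [0.23, 0.23]
# One success count per variant A/B
successes = rng.binomial(n=samples_per_variant, p=true_rates)

with pm.Model() as model:
    p = pm.Beta("p", alpha=10_000, beta=10_000, shape=2)
    pm.Binomial("y", n=samples_per_variant, p=p, observed=successes)
    model.debug()  # as the error message suggests, inspect the logp before sampling
    trace_strong = pm.sample(draws=5000)
```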

ALSO:
The weak_prior/strong_prior parameter names in run_scenario_twovariant shadow variables of a different type in the outer scope. It's probably worth fixing this for readability.

Proposed solution

I don't know how to solve the main problem.
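
One possible workaround (untested, and it does not explain why the automatically derived starting point comes out as nan) might be to pass explicit starting values for p to pm.sample in cell 13, reusing the notebook's ConversionModelTwoVariant, strong_prior and data objects from that cell:

```python
# Untested workaround sketch: supply explicit starting values for "p" (the
# variable name from the traceback) so sampling does not depend on the
# automatically computed initial point that evaluates to nan above.
import numpy as np
import pymc as pm

with ConversionModelTwoVariant(priors=strong_prior).create_model(data):
    trace_strong = pm.sample(
        draws=5000,
        initvals={"p": np.array([0.5, 0.5])},  # prior mean of Beta(10000, 10000)
    )
```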

I suggest renaming weak_prior/strong_prior to weak_prior_model/strong_prior_model in the outer scope.
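
A hypothetical sketch of what I mean (I am not quoting the notebook's outer-scope cell exactly; only the names matter):

```python
# Hypothetical rename sketch: add a "_model" suffix to the outer-scope objects so
# the BetaPrior parameters of run_scenario_twovariant no longer shadow them.
# The right-hand sides stand in for whatever the notebook actually assigns there.
weak_prior_model = ...    # was: weak_prior = <object of a different type than BetaPrior>
strong_prior_model = ...  # was: strong_prior = <object of a different type than BetaPrior>

# The function call and its BetaPrior arguments stay unchanged:
trace_weak, trace_strong = run_scenario_twovariant(
    variants=["A", "B"],
    true_rates=[0.23, 0.23],
    samples_per_variant=100000,
    weak_prior=BetaPrior(alpha=100, beta=100),
    strong_prior=BetaPrior(alpha=10000, beta=10000),
)
```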
