Option to change the NP values for toys #1426

Closed
annmwang opened this issue Apr 30, 2021 · 3 comments · Fixed by #1610
Labels
user request Request coming from a pyhf user

Comments

@annmwang

annmwang commented Apr 30, 2021

Description

Hi all! I'm trying to run some toys (thank you so much for implementing it, by the way!) and I was wondering if it would be possible to let the user choose what values the nuisance parameters are fixed to for the sampling part of the toy generation. I'm referring to

signal_pars = self.pdf.config.suggested_init()

This is motivated by the fact that I was looking through the ATLAS frequentist recommendations here and I wanted to test if there was any difference using the conditional MLE values instead of the initial expected NP values. Would it be possible to implement this as an option for the user?
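For concreteness, a minimal sketch (not the pyhf internals) of the two choices of nuisance parameter values being contrasted; the simple model and numbers are made up for illustration:

import pyhf

model = pyhf.simplemodels.uncorrelated_background(
    signal=[10.0], bkg=[50.0], bkg_uncertainty=[7.0]
)
data = [55.0] + model.config.auxdata
test_poi = 1.0

# Current behaviour: nuisance parameters taken from the suggested initial values
init_pars = model.config.suggested_init()

# Requested option: nuisance parameters from a conditional MLE fit (POI fixed)
conditional_mle_pars = pyhf.infer.mle.fixed_poi_fit(test_poi, data, model)

# Either parameter set could then seed the toy sampling, e.g.
# pdf = model.make_pdf(pyhf.tensorlib.astensor(conditional_mle_pars))
# toys = pdf.sample((1000,))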

Thanks to Giordon for some discussions and clarification on this issue.

@matthewfeickert added the user request label Apr 30, 2021
@matthewfeickert
Member

This is motivated by the fact that I was looking through the ATLAS frequentist recommendations here and I wanted to test if there was any difference using the conditional MLE values instead of the initial expected NP values. Would it be possible to implement this as an option for the user?

The decision to perform the test statistic calculations using model parameters determined from the expectation or a fit of the model to data is ultimately that — a decision / choice. At the moment this is abstracted away to an implementation decision, but I think it seems reasonable to expose this choice through a kwarg. @lukasheinrich @kratsg you'd agree?
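Purely as a hypothetical illustration of exposing that choice (sample_pars is an invented name, not an existing pyhf argument):

import pyhf
from pyhf.infer.calculators import ToyCalculator

model = pyhf.simplemodels.uncorrelated_background(
    signal=[10.0], bkg=[50.0], bkg_uncertainty=[7.0]
)
data = [55.0] + model.config.auxdata

# Today: the parameter values used for sampling are an internal implementation detail
calc = ToyCalculator(data, model, ntoys=500)
# Hypothetical kwarg letting the user supply the sampling parameters:
# calc = ToyCalculator(data, model, ntoys=500, sample_pars=conditional_mle_pars)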

@masonproffitt
Contributor

masonproffitt commented Sep 24, 2021

I'm going to argue that this isn't a choice and that the current implementation is just wrong. In other words, that this is a bug.

Model.config.suggested_init() is not an expectation in any sense; if it happens to align with expected values, that is a coincidence of the particular model. The initial parameter values have no intrinsic special significance and can be effectively nonsense in the case of many data-driven backgrounds. The only conditions on the initial parameters are that they be within the allowed range and sufficient for a minimum-NLL fit to converge. Even in the standard RooStats implementation for toys, FrequentistCalculator, it's clearly stated that "the nuisance parameters are fixed to their MLEs." If you want pre-fit expected values, you should be passing in Model.expected_data(pars) with the set of "expected" parameter values. Otherwise, by already having data to pass to ToyCalculator, you are implicitly constraining the nuisance parameters, so that constraint should be taken into account.

Perhaps most importantly, without setting the NPs to their MLEs, the results from ToyCalculator can be completely different from the results of AsymptoticCalculator even in the large statistics limit, which seems contradictory to the idea of an asymptotic approximation... By analogy to the asymptotic case, this would be like not using the Asimov data set to get the expected test statistic and instead generating Model.expected_data() with the initial parameters.
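As a sketch of that cross-check (illustrative model and numbers, not taken from this issue), the expectation described above is that the two calculators agree in the large-statistics limit only if the toys are generated at the conditional MLEs:

import pyhf

model = pyhf.simplemodels.uncorrelated_background(
    signal=[10.0], bkg=[50.0], bkg_uncertainty=[7.0]
)
data = [55.0] + model.config.auxdata
test_poi = 1.0

# Asymptotic result
cls_asymptotic = pyhf.infer.hypotest(
    test_poi, data, model, test_stat="qtilde", calctype="asymptotics"
)

# Toy-based result, which should approach the asymptotic one for large statistics
cls_toys = pyhf.infer.hypotest(
    test_poi, data, model, test_stat="qtilde", calctype="toybased", ntoys=2000
)

# For pre-fit "expected" results one would instead pass
# model.expected_data(model.config.suggested_init()) as the data.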

@matthewfeickert
Member

@annmwang now that @masonproffitt's PR has been merged, can you please start testing things out in advance by installing the development release, until we can get a new release out that includes it?

$ python -m pip install --upgrade pip setuptools wheel
$ python -m pip install --upgrade pyhf  # Get stable versions of pyhf dependencies
$ python -m pip install --upgrade --extra-index-url https://test.pypi.org/simple/ --pre pyhf  # Get the dev release from TestPyPI
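Once the dev release is installed, a quick way to exercise the toy-based calculator could look like this (illustrative model and numbers only):

import pyhf

model = pyhf.simplemodels.uncorrelated_background(
    signal=[10.0], bkg=[50.0], bkg_uncertainty=[7.0]
)
data = [52.0] + model.config.auxdata

# Toy-based CLs; with the fix the sampling parameters should come from fits
# rather than from suggested_init()
cls_obs = pyhf.infer.hypotest(1.0, data, model, calctype="toybased", ntoys=1000)
print(f"Observed CLs from toys: {cls_obs}")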
