Fix #180 #184
Conversation
When/if this has been merged, we might wanna look at issue #95 again.
Good addition overall, and it appears to work as intended (I have also tested it on the Branin-Hoo system at varying degrees of noise). I'd mainly like to see the changes related to the pass-by-reference comment I made in the test code; everything else is just formatting nice-to-haves.
res_x, [res_y, res_std_no_white] = expected_minimum(res, return_std=True)
# Add modelled experimental noise
opt.add_modelled_noise()
res_noise = opt.get_result()
Remember that Python passes object references: with the way you have implemented add_modelled_noise, your original res (line 374) is changed when you call it. In this test, res and res_noise point to the same underlying object, so you will get the same result in line 387 by calling expected_minimum(res, return_std=True).
I think it would be better if the test made this clear by removing line 378 and explicitly passing the same optimize result in both calls of expected_minimum.
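A minimal sketch of the aliasing issue described above, using toy stand-in objects rather than the real OptimizeResult: assignment in Python binds a name to the same object, and a getter that returns the stored object hands back a reference, not a copy.

```python
# Toy stand-in for the optimizer's result object (hypothetical, not the
# real OptimizeResult): a plain dict holding the fitted models.
res = {"models": []}

def get_result():
    # Returns the same object, not a copy -- like opt.get_result() here.
    return res

res_noise = get_result()
res_noise["models"].append("gp_with_noise")  # mutate via one name

print(res is res_noise)   # True: one object, two names
print(res["models"])      # the "original" result sees the change too
```

Because of this, comparing `res` and `res_noise` after the mutation tests nothing: both names refer to the same, already-modified object.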
Would it make sense to make a copy of the entire optimizer in this case?
In detail: train the optimizer on the data. Make a copy of the optimizer; call it opt_noisy. Add the noise to opt_noisy. Then pull the std from the expected minimum?
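The copy suggestion above can be sketched as follows, with a toy dict standing in for the fitted optimizer (the real object would be deep-copied the same way): a deep copy means adding noise to opt_noisy cannot mutate the original.

```python
import copy

# Toy stand-in for a fitted optimizer's state (hypothetical fields).
opt = {"kernel": "Matern + WhiteKernel", "noise_level": 0.0}

# Deep-copy so the noisy variant shares nothing with the original.
opt_noisy = copy.deepcopy(opt)
opt_noisy["noise_level"] = 1e-2   # add modelled noise only to the copy

print(opt["noise_level"])         # 0.0: original optimizer unchanged
```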
And now that I write this... shouldn't we check that the minima are identical in x and y (?)
> And now that I write this... shouldn't we check that the minima are identical in x and y (?)
Well, the res_x location in your test is probably quite sensitive to the specific simulated data points (as there isn't an optimum) so I wouldn't want to add it for that reason alone.
More importantly, with the way our model is defined, the WhiteNoise is simply a constant increase of the uncertainty with the same value across the whole parameter space. Because of this, by definition it cannot change the location of the expected minimum in mean values. If the WhiteNoise is very large it may change the location in practice as a consequence of using sampling to find the expected minimum, but it doesn't add any information and so should be left out.
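A toy numerical illustration of the point above (assumed shapes and values, not the project's actual model): a WhiteNoise term adds the same constant variance everywhere, so the predicted mean, and hence the location of its minimum, is unchanged; only the uncertainty grows, uniformly in quadrature.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
mean = (x - 0.3) ** 2                  # toy predicted mean, minimum at x = 0.3
std = 0.02 + 0.05 * np.abs(x - 0.5)    # toy x-dependent model std
std_white = np.sqrt(std**2 + 0.1**2)   # constant white-noise variance added

print(x[np.argmin(mean)])              # the minimum location ignores std entirely
print(np.all(std_white > std))         # uncertainty only grows, everywhere
```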
I absolutely agree. The noise level bounds refer to normalized y-values, and on this scale the present lower limit of 1E-5 is an absurdly unrealistic lower limit for noise in real data. Still, whether to change it is not so black and white: if your model ends up at that value for the noise, it is at least very easy for the code to flag that it is an unrealistic scenario...
This is an attempt to allow users to add or remove the modelled experimental noise. I'm balancing between not wanting to change the "inner mechanics" of our fitting while also wanting to expose the knowledge of the experimental noise to the user. Currently, experimental noise is set to zero between each iteration with reference to chapter 2 in Rasmussen and Williams. Link.
With this "fix" the workflow could now be:
tell()
ask()
At this point the next suggested point (a.k.a. 'next_x') has already been stored and is ready to be served to the user. So in total: the user will need to call opt.add_modelled_noise() to go from regression to prediction, and would need to call opt.remove_modelled_noise() to return to the old functionality. Alternatively, the user will just need to add one additional datapoint to return to the old functionality/behaviour.
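The toggle can be sketched like this, with hypothetical helper names mirroring the proposed API (the real implementation works on the optimizer's GP model, not on raw arrays): going from regression to prediction only adds the fitted white-noise level to the predictive std in quadrature, and removing it is the exact inverse, so no refitting is needed.

```python
import numpy as np

def add_modelled_noise(std_model, noise_level):
    """Regression -> prediction: include experimental noise in the std."""
    return np.sqrt(std_model**2 + noise_level**2)

def remove_modelled_noise(std_total, noise_level):
    """Prediction -> regression: strip the white-noise contribution."""
    return np.sqrt(np.maximum(std_total**2 - noise_level**2, 0.0))

std = np.array([0.01, 0.05, 0.2])      # toy regression-mode std values
noisy = add_modelled_noise(std, 0.1)   # prediction mode
back = remove_modelled_noise(noisy, 0.1)

print(np.allclose(back, std))          # the toggle is reversible
```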