Problem with special casing std_dev = 0 #283

Open

jagerber48 opened this issue Dec 31, 2024 · 12 comments

@jagerber48
Contributor

This is a subtle problem that has been an Achilles heel of #262. On master branch we have

from uncertainties import ufloat

zero = ufloat(0, 0)
p = ufloat(0.3, 0.01)
assert (zero ** p).n == 0
assert (zero ** p).s == 0

Let f = x**y. In this case, with x == 0 and non-integer y, we would expect df/dx to be nan. Indeed, this is what is returned by pow_deriv_0 in ops.py (the function responsible for this math): https://github.com/lmfit/uncertainties/blob/master/uncertainties/ops.py#L137.
We would expect df/dy to be 0 in this case, and this is what pow_deriv_1 returns. So I would expect the uncertainty to be

sqrt((df/dy * 0.01)**2 + (df/dx * 0)**2)

But since df/dx is nan, the whole expression evaluates to nan. So I would expect (zero**p).s to be nan. However, we can see that it is not. The reason is this check in AffineScalarFunc.error_components at line 509:

for variable, derivative in self.derivatives.items():
    # print "TYPE", type(variable), type(derivative)
    # Individual standard error due to variable:
    # 0 is returned even for a NaN derivative (in this case no
    # multiplication by the derivative is performed): an exact
    # variable obviously leads to no uncertainty in the
    # functions that depend on it.
    if variable._std_dev == 0:
        # !!! Shouldn't the errors always be floats, as a
        # convention of this module?
        error_components[variable] = 0
    else:
        error_components[variable] = abs(derivative * variable._std_dev)
return error_components

Basically if any variable has zero std dev then it is excluded from the uncertainty calculation. My read of this is that uncertainties has attempted to treat UFloat with zero std dev as a float.
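
For concreteness, here is a standalone sketch (plain Python, not library code) of the propagation formula above, showing why the nan derivative is not masked by the zero standard deviation:

from math import isnan, nan, sqrt

df_dx = nan   # pow_deriv_0(0, 0.3): d(x**y)/dx is nan at x == 0, non-integer y
df_dy = 0.0   # pow_deriv_1(0, 0.3): d(x**y)/dy is 0 at x == 0
std_x, std_y = 0.0, 0.01

uncertainty = sqrt((df_dx * std_x) ** 2 + (df_dy * std_y) ** 2)
assert isnan(uncertainty)  # nan * 0.0 is nan, so the whole sum (and its sqrt) is nan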

This has been... tricky... to implement in #262. I was able to get it at one point, but it required a lot of special casing in various places. However, thinking about it so much has led me to discover what I would call a bug.


from math import isnan

from uncertainties import ufloat

zero = ufloat(0, 0)
p = ufloat(0.3, 0.01)
assert (zero ** p).n == 0
assert (zero ** p).s == 0

x = ufloat(1, 0.1)
x_zero = x-x
assert x_zero == zero
assert (x_zero ** p).n == 0
assert isnan((x_zero ** p).s)

In the latter case we see that x_zero ** p has a nan std_dev despite x_zero == zero. The problem is that the check for variables with zero std_dev only works on Variable. But x_zero is not a Variable; it is an AffineScalarFunc. We also can't compute the std_dev of an AffineScalarFunc mid-computation, because that is the costly step that would spoil the O(N) lazy linear combination expansion algorithm.
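
A quick illustration of the type difference (assuming the class names Variable and AffineScalarFunc exported by uncertainties.core on master):

from uncertainties import ufloat
from uncertainties.core import AffineScalarFunc, Variable

x = ufloat(1, 0.1)
x_zero = x - x

print(isinstance(ufloat(0, 0), Variable))    # True: the zero-std_dev check applies
print(isinstance(x_zero, Variable))          # False: x_zero is a derived value
print(isinstance(x_zero, AffineScalarFunc))  # True: its std_dev is only known after expansion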


If I eliminate that check then all tests still pass except for one test looking at pow(zero, p) and another looking at umath.pow(zero, p).


I do not like treating UFloat with zero std_dev as a float. It requires some awkward special-casing in the code and in the tests. I do not think it will be possible to get (x_zero**p).s to equal 0, so I don't think (zero**p).s should be zero either. I think it is OK if it is nan. I'm not sure if the docs hint that UFloat with zero std_dev should behave like a float, but if they do we could remove those hints. If users somehow depend on the behavior in this edge case, or are really surprised by it, then I would say they can open an issue. But there are other surprising behaviors, for example #92.

@wshanks
Collaborator

wshanks commented Dec 31, 2024

Sorry, I am not able to keep up with all the code changes and discussion recently (I do read them quickly but I don't feel like I have enough time to give good feedback).

It seems like the issue here is to find a balance between simplicity and correctness. Not special-casing 0 standard deviation for Variable would make its behavior more consistent with UFloat. That seems like the right thing to do if we do not want to add more complexity to the code. On the other hand, 0's and nans are a bit of an edge case, so I could see trying to handle them specially. It does get messy though to check for 0 everywhere before dividing by an uncertainty.

When I first read this issue, I thought it was going to relate more to #278 and the behavior of UFloats with std_dev = 0 in relation to floats. For both cases, we should reconsider how interchangeable a UFloat is with a float. Is it valuable for a UFloat to compare equal to a float when its uncertainty is zero? If not, does it compare unequal or raise an exception? Likewise for greater than/less than. These behaviors relate back to the issue here because the question is also whether a UFloat with std_dev = 0 should compare equal to a float after both go through a pow() operation.

@jagerber48
Contributor Author

Yes, your comparison with #278 is apt. It hadn't specifically occurred to me but I think these are two of the main cases where attempts to make UFloat behave like float are just very awkward and don't actually improve the API.

It seems like the issue here is to find a balance between simplicity and correctness.

Are you suggesting that it would be more correct for pow(0+/-0, 0.1+/-0.03) to be 0+/-0 than it is for it to be 0+/-nan? For example pow(0+/-0.1, 0.1+/-0.03) gives 0+/-nan without fuss.

I think it is nice for UFloat to behave like float in that you can perform many of the same mathematical operations on both (arithmetic, trig functions, special functions, etc.) and also string formatting. But I think it is going too far to have any UFloat with zero std_dev behave exactly like that corresponding float. Especially given the fact that we can't quickly determine whether a UFloat has std_dev==0!

Yes, as part of my vendetta against treating UFloat too much like float: regarding comparing UFloat and float with ==, >, and <, I would fall on the side that these comparisons should always raise TypeError.

@newville
Member

newville commented Jan 2, 2025

@jagerber48 I think it is inevitable that a UFloat with std_dev=0 will be prone to causing trouble or confusion. The whole point of having a UFloat is that std_dev is not 0.

Handling this is an implementation choice, and backward compatibility would be nice but is not necessary.

I think it is OK that std_dev=0 (or std_dev that is not positive) might sometimes be treated as a plain float.
I also think that it is OK if calculations with such UFloats cause NaNs.

Without looking at the details of the new code in #262, a few possibilities come to mind:

  1. calling ufloat() with std_dev=0 returns a plain float: short-circuit the problem.
  2. calling ufloat() with std_dev=0 raises a TypeError: disallow the problem.
  3. a UAtom with std_dev=0 acts like -- essentially is -- a float. Not sure how easy that would be, perhaps option 1 is easier and clearer.
  4. calculations that generate NaNs are the user's problem.

I think I would favor 2 and 4, but this is not a strong opinion.

I also find std_dev=None to be odd. Presumably, this is intended to signal "will set later". I don't see a valid use case. I'd be OK with dropping this option.

FWIW, I would even suggest

def ufloat(nominal_value, std_dev, tag=None):
    if std_dev < 0.0 or abs(std_dev/nominal_value) < 1.e-15:
        raise TypeError('ufloat needs a finite, positive value for `std_dev`')
    ....

This would require that std_dev is:
a) comparable to 0 (not None, nor some other object: I sometimes ask myself "what if the user sends an open socket here?"... and I often answer, "Well, they are in trouble, aren't they" ;) ).
b) positive.
c) numerically finite compared to nominal_value. A std_dev 30 orders of magnitude smaller than the value is going to cause Infs and NaNs too.
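
A minimal runnable sketch of that proposed check (the function name here is hypothetical, and this is not the library's current behavior):

def proposed_ufloat_check(nominal_value, std_dev):
    # Mirrors the suggestion above; assumes nominal_value != 0 for the ratio test.
    if std_dev < 0.0 or abs(std_dev / nominal_value) < 1.e-15:
        raise TypeError('ufloat needs a finite, positive value for `std_dev`')

proposed_ufloat_check(7.0, 0.1)      # passes: positive and not negligibly small
# proposed_ufloat_check(7.0, 0.0)    # TypeError -- zero std_dev (conditions b/c)
# proposed_ufloat_check(7.0, -0.1)   # TypeError -- negative std_dev (condition b)
# proposed_ufloat_check(7.0, 1e-20)  # TypeError -- negligible vs. value (condition c)
# proposed_ufloat_check(7.0, None)   # TypeError from the `<` comparison (condition a)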

@jagerber48
Contributor Author

@newville thanks for your response, I think it's really helpful. First off, if I try ufloat(1, None) I get a TypeError a few layers down the stack on a < comparison when trying to set Variable.std_dev. So I think, at least lately, that isn't supported. However, it should fail earlier.

Regarding the issue here and, more generally, UFloat behaving like float:

  1. calling ufloat() with std_dev=0 returns a plain float: short-circuit the problem.
  2. calling ufloat() with std_dev=0 raises a TypeError: disallow the problem.
  3. a UAtom with std_dev=0 acts like -- essentially is -- a float. Not sure how easy that would be, perhaps option 1 is easier and clearer.
  4. calculations that generate NaNs are the user's problem.

Going through each in detail:

  1. If we want UFloat to behave like float when std_dev=0 then, yes, the easiest approach would be to just return a float. In many cases I think this would work fine, but it would be annoying in cases where you could end up with mixed UFloat/float combinations. For example a numpy array that has some of each. You would run into problems when you try to extract std_dev for the whole array. Likewise, you might use UFloat format specification strings which would fail on a float. This is just too dangerous.
  2. This is tempting. However, I personally have use cases for UFloat with std_dev=0. I have a database of experimentally measured calibration values. Historically, just the measured values and not the uncertainties were recorded. But, going forward, I'm starting to also log the uncertainties. The legacy values without uncertainties may be used in the codebase. In these cases I want to treat the values as having zero uncertainty. This may cause me to underestimate uncertainties in downstream results, at least until those values are replaced with values with proper uncertainties. But, in the meantime, while I'm maintaining legacy values without uncertainty, it is useful to be able to treat them as UFloat with std_dev=0. Also, it's not so crazy for something like x-x to happen "in the wild", which results in a UFloat with std_dev=0. This just feels a little overly restrictive.
  3. So this is what master branch tries to do now and what I am raising issue with. My two "quibbles" with this approach are (1) it makes the code more complicated since special cases are necessary (2) it makes the documentation and user headspace more cluttered having to remember that std_dev=0 results in different behavior. My major issue with this approach is that we do not, and cannot, rapidly check the std_dev of UFloat/AffineScalarFunc mid-calculation so it's impossible to get consistent results under this strategy. Specifically zero=ufloat(0, 0) may behave different from x_zero=x-x if x=ufloat(1, 0.1).
  4. This is what I would advocate for (see the sketch after this list). I would phrase it as: UFloat with std_dev=0 gets treated the same as any other UFloat. The downside is that you will get nan in some places where you would not get nan if you replaced UFloat(some_float, 0) with some_float. So yeah, users might get these nans and have to figure out why they are happening and what they should do about it. Another obvious downside of this approach is that it is not backwards compatible. Likely someone somewhere would get bit by this change. But with breaking changes it's always a cost-benefit analysis. Yes, it might hurt someone, but if you never allow breaking changes you might be stuck with imperfect/flawed design choices forever. Also... I think x_zero=x-x behaving differently from zero=ufloat(0, 0) repaints the story from "breaking feature change" to "removing a feature that was bug-prone".
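
A sketch of what option 4 would mean in practice (the commented assertion reflects hypothetical future behavior, not what master currently does):

from math import isnan
from uncertainties import ufloat

zero = ufloat(0, 0)
x = ufloat(1, 0.1)
p = ufloat(0.3, 0.01)

assert isnan(((x - x) ** p).s)   # already the case on master
# Under option 4 the special case is removed, so this would also hold:
# assert isnan((zero ** p).s)    # currently (zero ** p).s == 0 on master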

@newville
Member

newville commented Jan 2, 2025

@jagerber48

That all sounds reasonable. Any of those is OK with me.
I totally get that 3 could be a lot of ugliness -- it would be my least favorite option.

To be clear, though, for options 1 and 2, I meant only that creating a UFloat with the function ufloat()

x = ufloat(val, std_dev=0)

could return a float, or raise a TypeError. I don't think that would cause downstream trouble or interfere with downstream uses. I would be OK with either option, as

x = ufloat(7, 0)

just makes no sense. Either "cast to the closest sensible object" or reply with "That ain't no ufloat!".

If an ndarray of UFloat values has some elements with std_dev=0, then NaNs and Infs are certainly bound to happen.
I'm OK with calling arrays of UFloats "advanced usage", so let that case be the user's problem.

For your use case, you say "In these cases I want to treat the values as having zero uncertainty.". I have a hard time understanding what that could mean, other than
a) a float.
b) an object designed to wreak havoc as soon as possible. ;)

But, hey, go for it!

@jagerber48
Contributor Author

jagerber48 commented Jan 2, 2025

For your use case, you say "In these cases I want to treat the values as having zero uncertainty.". I have a hard time understanding what that could mean, other than
a) a float.
b) an object designed to wreak havoc as soon as possible. ;)

But, hey, go for it!

It's true that it's easy for my use case and others to "wreak havoc". If uncertainties gave me a TypeError when I tried to call ufloat with std_dev=0, I could do a little bit of redesign to use a float in this case instead. I'm not sure how extensive the changes would have to be on my end. It's probably the more proper thing to do, so I guess that's an argument for 1 or 2.

If we agree that

a. the behavior I've described in 3 pertaining to ufloat(0, 0) vs. x_zero = x-x with x=ufloat(1, 0.1) IS a bug AND
b. that bug can't be resolved without spoiling the lazy linear combination expansion algorithm performance

then I would say that 4 is the least invasive resolution to the bug. 4 changes behavior on a few already strange edge cases whereas 1 and 2 are both pretty major changes to the API, e.g. changing return type or raising an exception on previously valid input.

edit: If there is a resolution to this bug that allows us to preserve option 3 (i.e. the status quo) while ALSO maintaining the lazy linear combination expansion algorithm performance, I would want to know about that. I can think more about whether it would be possible, but I'm a bit doubtful based on my current understanding of the algorithm. Basically, as far as my understanding goes, it is critical that std_dev is NOT evaluated for intermediate results, or performance will drop from O(N) to O(N^2).
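
As a conceptual toy only (simplified, and not the library's actual data structures), the idea is roughly the following: a derived result stores (operand, partial-derivative) pairs, and flattening them down to independent atoms is deferred until std_dev is actually requested; doing that flattening at every one of N intermediate steps would repeat O(N) work N times.

from math import sqrt

class Atom:
    """An independent input with its own standard deviation."""
    def __init__(self, std_dev):
        self.std_dev = std_dev

class Derived:
    """A lazily stored linear combination of Atoms and other Derived results."""
    def __init__(self, terms):
        self.terms = terms  # list of (Atom or Derived, partial derivative) pairs

    def expand(self):
        # Flatten nested terms into {Atom: accumulated derivative}; this walk
        # is the work that must not be repeated at every intermediate step.
        totals = {}
        stack = [(self, 1.0)]
        while stack:
            node, factor = stack.pop()
            for operand, deriv in node.terms:
                if isinstance(operand, Atom):
                    totals[operand] = totals.get(operand, 0.0) + factor * deriv
                else:
                    stack.append((operand, factor * deriv))
        return totals

    @property
    def std_dev(self):
        # The deferred, costly step: linear error propagation over the atoms.
        return sqrt(sum((d * a.std_dev) ** 2 for a, d in self.expand().items()))

x = Atom(0.1)
x_zero = Derived([(x, 1.0), (x, -1.0)])  # plays the role of x - x
print(x_zero.std_dev)                    # 0.0, known only once the expansion runs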

edit2: And I'll repeat: currently only a few (2 or 4 total) tests relating to pow or umath.pow break if I remove the special handling for std_dev=0. I'm sure I could contrive more tests that we're currently missing that would also fail, but, at present, only a few tests would need to change.

@newville
Member

newville commented Jan 2, 2025

@jagerber48 @wshanks I'm OK with option 4.

I might say that 1 and 2 are sort of API changes, but also that std_dev=0 is sort of invalid. It would be OK to call these options "clarifying the API". We don't need to promise that std_dev=0 is handled gracefully or with infinite backward compatibility.

And option 4 alone is OK too.

@jagerber48
Contributor Author

@newville thanks for all the feedback. I think I still lean towards 4. One option is to do 4 but give a warning on std_dev=0. That said 1 and 2 are tempting because they do clarify the API and make it smaller so I'm not personally ready to make the choice to reject them yet. I have nothing else to add at this time. Unless others post I'll think on these options more for a bit before I say anything else or move forward.

@jagerber48
Contributor Author

OK, I've thought more about it and have an important result: options 1 and 2 fall prey to the same problem option 3 does.

If we go with 1 (std_dev = 0 returns a float), then for x=ufloat(1, 0.1) and y=x-x we should have that y is a float, because y.std_dev==0. But we can't calculate std_dev mid-calculation to know this result without spoiling the lazy expansion algorithm. Likewise, if std_dev=0 raises a TypeError, we can't do that on y=x-x because we can't evaluate std_dev. Actually, early on I had a check for std_dev in the uncertainty propagation code to help pass exactly these tests, but it killed the performance. That was one of the major motivations for #275.

Note y=x-x could be special cased. But what about y=x-x+x-x+x-x+x-x?
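
For illustration (using the current public API), a derived value with zero uncertainty can come from an arbitrarily long expression, so catching it would require evaluating std_dev on every intermediate result:

from uncertainties import ufloat

x = ufloat(1, 0.1)
y = x - x + x - x + x - x + x - x    # a derived value, not a Variable
print(y.nominal_value, y.std_dev)    # 0.0 0.0, but only known after the lazy expansion runs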

So for now I'm going to proceed with option 4 because that is the only one I see that can actually solve the issue being raised.

@newville
Member

newville commented Jan 3, 2025

@jagerber48 Sorry, I cannot tell if you are being serious or not. If calling the function ufloat(8.0, 0) returns the FLOAT 8.0, then none of what you say would apply.

Again, I mean precisely and only that calling the function ufloat(val, std_dev=0) returns a float. Nothing about the internals of uncertainties is affected, because that value is not a UFloat; it is a float. I've said this twice and am sort of surprised at your repeated misunderstanding.

If we go with 1 (std_dev = 0 returns a float) then if we do x=ufloat(1, 0.1) then y=x-x we should have that y is a float because y.std_dev==0.

Nope. Not at all. Completely WRONG. That is not what option 1 is. Option 1 and Option 2 are about (and only about, and exactly about, and precisely about) the function "ufloat()". Period, end of story, no more to it.

Your y is a UFloat with std_dev=0. That is not disallowed by Option 1 or 2. Havoc may ensue, but that falls under Option 4.

I am done with this discussion. I would like to be able to work on this project, but you have a very strong tendency to misunderstand what I say and put words in my mouth. Again, I am taking a break from this project.

@wshanks
Collaborator

wshanks commented Jan 3, 2025

The discussion has moved on but to reply to the question earlier:

Are you suggesting that it would be more correct for pow(0+/-0, 0.1+/-0.03) to be 0+/-0 than it is for it to be 0+/-nan? For example pow(0+/-0.1, 0.1+/-0.03) gives 0+/-nan without fuss.

Yes, 0+/-0 seems more correct here to me because within a standard deviation 0 is being raised to a positive power so the result should still be 0. However, I don't feel too attached to preserving this behavior. As has been mentioned in the discussion of option 4, we could just decide that the behavior of UFloats with 0 standard deviation is done with some compromises for performance and may result in NaN in some cases.

Based on the recent comments, it seems like options 1 and 2 imply option 4 in a way. By this I mean that they try to steer the user away from using 0 standard deviation because havoc may ensue (option 4), but the user is still free to generate a UFloat with 0 standard deviation by other means (x-x or using the class constructor directly). I think there is one decision on whether to implement careful handling of 0 standard deviation internally (option 3 vs option 4), and then, if the implementation does not handle all edge cases (option 4), there is a choice about options 1 and 2 for pushing the user to avoid running into those edge cases by not allowing the cause of them in the ufloat helper function.

@andrewgsavage
Contributor

  2. This is tempting. However, I personally have use cases for UFloat with std_dev=0. I have a database of experimentally measured calibration values. Historically, just the measured values and not the uncertainties were recorded. But, going forward, I'm starting to also log the uncertainties. The legacy values without uncertainties may be used in the codebase. In these cases I want to treat the values as having zero uncertainty.

How well would using nan as the uncertainty instead of 0 work for your purposes?
