Fix an accuracy regression in f32::to_degrees #48622
Conversation
rust-lang#47919 regressed some test cases for `f32::to_degrees`, while improving others. This change satisfies all previous test cases and should be more accurate than both previous implementations. The `f32` to `f64` cast is lossless, `f64::to_degrees` should provide more accuracy than a native `f32::to_degrees`, and the conversion back to `f32` will be as accurate a conversion as possible. It can be hard to reason about floating-point accuracy, but given that this passes all the tests, I feel confident it's an improvement.

Fixes #48617.

r? @rkruppe
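As a rough sketch of the approach the description outlines (my illustration, not the literal patch; `to_degrees_via_f64` is a name invented here):

```rust
// Widen to f64, convert, then narrow back. The f32 -> f64 cast is
// exact (every f32 is representable as an f64), the conversion runs
// in double precision, and the final cast rounds to the nearest f32.
fn to_degrees_via_f64(x: f32) -> f32 {
    ((x as f64).to_degrees()) as f32
}

fn main() {
    // pi/6 radians should come out very close to 30 degrees.
    let x = std::f32::consts::FRAC_PI_6;
    println!("{}", to_degrees_via_f64(x));
}
```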
@bors delegate=rkruppe

✌️ @rkruppe can now approve this pull request
It's good to fix the regression for the pi/6 case, but I am worried about this overall approach. It seems we are blindly stumbling through the landscape of possible implementations based on a few simple test cases. Accuracy across the wide range of not-nice inputs (i.e., not a multiple or fraction of the couple of common constants) is at least as important as accuracy on the corner cases that people like to test. I'm coming to the conclusion that a principled approach is needed; otherwise we're apparently just as likely to regress precision for some inputs not yet in our test suite. Thus I would prefer us to either bite the bullet and implement a completely accurate algorithm, as @huonw mentioned in the original issue, or leave the implementation as simple and predictable as possible. Here are my reasons for it:

First, there is the fact that we got blindsided by the regression for pi/6. This automatically makes me distrust any judgement coming from the same people at the same level of rigor.

Second, I'm not a fan of this implementation strategy. Using higher precision internally for a single-precision calculation can be useful, but it usually just papers over inaccuracies in the algorithm and thus doesn't help.

Third, I have doubts about the claim that this is always at least as accurate as an `f32` multiply. The cast back to `f32` at the end is correctly rounded in itself, yes, but it's still a rounding step. If the result of the `f64` computation lands close to the midpoint between two adjacent `f32` values, that final rounding can still move the result away from the correctly rounded answer. I don't have an example at hand, but the test cases here are very minimal as well, so we're starved for empirical evidence either way. Note that the test cases that result in a nice round number are exactly the corner cases that tell us little about typical inputs.

Honestly, at this point convincing me would take trying out all ~4 billion `f32` inputs and measuring the accuracy of each candidate implementation [1].

[1] Note that this probably requires computing the correctly rounded result, which makes this potentially challenging.
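An exhaustive comparison along these lines could look like the following sketch (my illustration, not code from the thread). It only counts inputs where two candidate implementations disagree; measuring true accuracy would additionally require the correctly rounded reference that footnote [1] mentions:

```rust
// Candidate 1: multiply by a constant computed in f32.
fn deg_f32(x: f32) -> f32 {
    x * (180.0f32 / std::f32::consts::PI)
}

// Candidate 2: compute in f64, then round back to f32.
fn deg_via_f64(x: f32) -> f32 {
    ((x as f64) * (180.0f64 / std::f64::consts::PI)) as f32
}

fn main() {
    let mut disagreements: u64 = 0;
    // Iterate over every f32 bit pattern; slow, but feasible.
    for bits in 0..=u32::MAX {
        let x = f32::from_bits(bits);
        if x.is_finite() && deg_f32(x) != deg_via_f64(x) {
            disagreements += 1;
        }
    }
    println!("candidates disagree on {} inputs", disagreements);
}
```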
I agree with everything you said; an exhaustive test does seem like the most convincing demonstration that we have a better implementation.
Yeah, I didn't give this as much weight as I perhaps ought to have. If we can measure the accuracy, and using double precision turns out to be more accurate, then whether to use it is a separate decision to be made (though it's not even clear that's the case yet). I don't have time to write an exhaustive test right now, but I'll try to get around to it at some point if no-one else wants to give it a shot. I'm happy to either leave the pull request open, or close it for now.
Since the regression for pi/6 is already in beta and this PR is now delayed indefinitely, perhaps it's best to revert the previous PR now and backport to beta?
@rkruppe: Okay, I'm retracting my earlier comment. I don't think the current behaviour is incorrect. I think you're right when you suggested that the result is being correctly rounded. Taking the example from #48617 and writing the values out in (true) decimal, the result returned by the new implementation is the closest `f32` to the exact answer.
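For illustration, this kind of check can be sketched in Rust as follows (my code, not the exact computation from the comment above): print the decimal expansions of the returned `f32` and its two neighbours, and see which is closest to the exact value.

```rust
fn main() {
    let x = std::f32::consts::FRAC_PI_6;
    let deg = x.to_degrees();
    // The adjacent representable f32 values on either side.
    let prev = f32::from_bits(deg.to_bits() - 1);
    let next = f32::from_bits(deg.to_bits() + 1);
    // "{:.20}" prints each value's decimal expansion to 20 places,
    // making the rounding boundaries visible.
    println!("prev = {:.20}", prev);
    println!("got  = {:.20}", deg);
    println!("next = {:.20}", next);
}
```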
FWIW, I'm OK with closing #48617 as expected behavior, as long as we're sure that's expected and good, even though it seemed more accurate before. (for a few inputs, perhaps not in general)
This doesn't prove accuracy -- it just proves that the compiler can do the math you gave it. You're giving the compiler a particular pre-rounded constant. There are a few other ways we could re-frame this calculation of the conversion factor, and they won't necessarily all round to the same result.
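To illustrate that point, here are a few possible framings of the same conversion factor (my sketch, not cuviper's original list); they need not all produce the same `f32`:

```rust
fn main() {
    // (i) divide in f32
    let a = 180.0f32 / std::f32::consts::PI;
    // (ii) a pre-rounded decimal literal
    let b = 57.295779513082320876798154814105170_f32;
    // (iii) divide in f64, then narrow to f32
    let c = (180.0f64 / std::f64::consts::PI) as f32;
    // Compare the bit patterns to see whether the framings agree exactly.
    println!("a = {} ({:08x})", a, a.to_bits());
    println!("b = {} ({:08x})", b, b.to_bits());
    println!("c = {} ({:08x})", c, c.to_bits());
}
```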
Yeah, you're right, I'm not thinking straight at all at the moment.
Intuitively (though I don't want to put much weight behind intuition, as it doesn't have a good track record here), using a more accurate constant should produce more accurate results overall: the previous constant (implemented using the division) was demonstrably less accurate than the new constant. This makes me feel that, considering every input, the new method should still be more accurate. However, given the quirks of floating-point, it might have happened, by chance, that the old version was actually a better constant to use in that context. I want to rectify my mistakes and actually resolve this. I'll try to set aside some time to test this; if I don't manage to, then we can revert the change if no new evidence emerges. It's probably better to remain consistent if we can't prove we're not regressing. Sorry for all my confusion!
Rounding errors can (and often do, with round-to-nearest) cancel out.
Here's my analysis: for a range of inputs `x`, I computed the correctly rounded result (m) using Mathematica [1], along with the results of the following three implementations:

```rust
// (a)
x * (180.0f32 / f32::consts::PI)

// (b)
const PIS_IN_180: f32 = 57.2957795130823208767981548141051703_f32;
x * PIS_IN_180

// (c)
const PIS_IN_180: f64 = 57.2957795130823208767981548141051703_f64;
((x as f64) * PIS_IN_180) as f32
```

I then compared (a), (b) and (c) to (m) to see for how many inputs their outputs differed for the given range. Here are my results (each cell contains the number of discrepancies):
I could go on, but Mathematica's speed is really a limiting factor here. The discrepancies for (a) and (b) do differ — hence the loss of accuracy on some inputs, as @cuviper showed. However, (b) clearly seems to be superior to (a) (even if it unfortunately fails for some "well-known" identities). Interestingly, the upcasting to `f64` in (c) doesn't appear to make a significant difference compared to (b).

[1] This does rely on the correctness of the precision-handling functionality of Mathematica, but I couldn't find any literature on 0.5 ULP-correct radians-to-degrees conversion implementations, so I think this is the most confident we're going to get without an (even greater) inordinate amount of effort for such a small feature.
@varkor Thanks for collecting data. Some questions: were any of the implementations ever more than 1 ULP off, and what exactly is the "range of inputs" you tested over?
@rkruppe: I also checked to make sure none of the implementations were more than 1 ULP off; fortunately none of them exceeded 1 ULP for the inputs I tested. Oh, I should have clarified: I meant the bit patterns: the range was taken over consecutive `f32` bit patterns.
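A ULP-distance check of that kind could be sketched as follows (my code, not from the thread; `ulp_distance` is a hypothetical helper and finite inputs are assumed):

```rust
// Distance between two f32 values in units of least precision (ULPs).
fn ulp_distance(a: f32, b: f32) -> u32 {
    // Map bit patterns onto a monotonic integer line so that adjacent
    // floats differ by exactly 1.
    fn key(x: f32) -> i64 {
        let bits = x.to_bits() as i64;
        if x.is_sign_negative() {
            -(bits & 0x7fff_ffff)
        } else {
            bits
        }
    }
    (key(a) - key(b)).unsigned_abs() as u32
}

fn main() {
    assert_eq!(ulp_distance(1.0, 1.0), 0);
    let next = f32::from_bits(1.0f32.to_bits() + 1);
    assert_eq!(ulp_distance(1.0, next), 1);
    println!("ok");
}
```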
I'm going to close this PR. If we decide that a different method is desirable (e.g. doing a division instead of a multiplication), then we can open another PR.