Consider tightening use of floating-point numbers using custom math types #193
I've started looking into this. So far, I'm just experimenting, and not yet committed to seeing it through.
Another more or less minor thing with floating-point numbers is denormalized floats. These are commonly very, very small floats that are represented differently, and they can cause a slowdown in processing. This is of course more an issue for the DSP code, as the performance margins there are much, much more constrained by the real-time aspect of computing sound samples.
Interesting, thanks for the insight, @WeirdConstructor! I don't think I've come across this before (or I have, and forgot all about it, which is also quite possible). I wonder if this will become relevant for Fornjot too, and what could be done about it.
Having read up about it a bit, those kinds of numbers could probably be detected when constructing the hypothetical `Scalar` type.
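For illustration, here is a minimal sketch of such a check, using only the standard library. The function `check_scalar` and its panic-on-anything-unusual policy are assumptions made up for this example, not existing Fornjot code:

```rust
use std::num::FpCategory;

/// Roughly what a panicky scalar constructor could check. Subnormal
/// (denormalized) values are rejected here along with NaN and infinity;
/// whether that is the right policy is exactly the open question above.
fn check_scalar(value: f64) -> f64 {
    match value.classify() {
        FpCategory::Nan | FpCategory::Infinite | FpCategory::Subnormal => {
            panic!("invalid scalar value: {value}")
        }
        FpCategory::Zero | FpCategory::Normal => value,
    }
}

fn main() {
    // A subnormal value: positive, but smaller than the smallest normal f64.
    let tiny = f64::MIN_POSITIVE / 2.0;
    assert!(tiny.is_subnormal());

    assert_eq!(check_scalar(1.0), 1.0);
    // `check_scalar(tiny)` or `check_scalar(f64::NAN)` would panic.
}
```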
I've been working on this for the last few days. I think I just got done with all the preparation, and am now ready to realize the actual benefits promised here. I also ran out of time for today though, so that will have to come later.

I don't feel comfortable with all the rote code that I had to add to get this far, but I guess that's just what's required. At least the ugliness is mostly confined to the `math` module.

Long-term, it's probably better to try and improve the upstream crates, namely nalgebra and Decorum, so we can get rid of this stuff again. I don't think I'll have time to do this in the foreseeable future. If anyone reading this is interested, you should look into that! It's a change that the whole ecosystem could benefit from.
Floating-point numbers are a pain. Not only do they have accuracy issues (which are not within the scope of this issue), they also can have weird values, like NaN (Not a Number) or positive/negative infinity.
NaN specifically causes two kinds of problems:

1. …
2. Floating-point numbers don't implement `Eq` or `Hash`, making it impossible to put them into a `HashSet`, or as keys into a `HashMap`. The same goes for similar sets/maps.

Especially that second point is turning into a problem. It can be worked around, but it makes things more difficult than they should be. I've used Decorum for such workarounds before, but that's not ideal. It works, for example, to convert a `Point<3>` into a `[R64; 3]`, to put into a `HashSet`. But the necessity of such workarounds always puts up hurdles in front of the correct and obvious solution. It's never a big deal, but in aggregate, it's starting to weigh us down.
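For context, such a workaround looks roughly like this. It is a sketch under a few assumptions: `Point<3>` is taken to be nalgebra's `Point3<f64>`, and Decorum's `R64` is assumed to convert from `f64` via `From` (panicking on values that violate its constraints):

```rust
use std::collections::HashSet;

use decorum::R64;
use nalgebra::Point3;

// Convert a point into something that implements `Eq`/`Hash`, so it can be
// used as a set element or map key.
fn key(point: &Point3<f64>) -> [R64; 3] {
    [R64::from(point.x), R64::from(point.y), R64::from(point.z)]
}

fn main() {
    let mut set: HashSet<[R64; 3]> = HashSet::new();

    set.insert(key(&nalgebra::point![1.0, 2.0, 3.0]));
    set.insert(key(&nalgebra::point![1.0, 2.0, 3.0]));

    // The duplicate point collapses into one entry, which is the whole reason
    // for going through `[R64; 3]` instead of `[f64; 3]`.
    assert_eq!(set.len(), 1);
}
```

Decorum's exact constructor API may differ between versions; the point is just that `R64` provides the `Eq` and `Hash` that `f64` does not.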
I think it's time to address this problem once and for all, by switching to math types that implement `Eq`/`Hash`, and using those everywhere. Here's my rough plan:

- A new type would be added to the `math` module to represent scalar values (could be called `Scalar`). It would wrap an `f64` and provide easy conversion from/to `f64`.
- When constructed from an `f64`, `Scalar` would check whether the `f64` value is all proper and regular, and if not, panic right away. This would expose any bugs right where they happen.
- `math::Point` and `math::Vector` would become structs that wrap an array of `Scalar` and provide easy conversion from/to their nalgebra equivalents. Like `Scalar`, the "from" conversions would be panicky, to expose bugs right away (see the sketch after this list).
- Functions would accept point arguments as `impl Into<math::Point<3>>`. That would still allow us to do a quick `point![1., 2., 3.]` in test code.
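To make this more concrete, here is a minimal sketch of what the `Scalar` and `Point` parts could look like. Everything here is hypothetical, not existing Fornjot code; the exact validity check (`is_finite`, possibly plus a subnormal check, as discussed above), the conversion details, and a nalgebra version with const-generic points are all assumptions:

```rust
use std::hash::{Hash, Hasher};

/// Wraps an `f64` that has been checked to be a proper, regular value.
#[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
pub struct Scalar(f64);

impl From<f64> for Scalar {
    fn from(value: f64) -> Self {
        // Panicky on purpose, to expose bugs right where they happen. A check
        // for subnormal values could be added here too.
        if !value.is_finite() {
            panic!("invalid scalar value: {value}");
        }
        Self(value)
    }
}

impl From<Scalar> for f64 {
    fn from(scalar: Scalar) -> Self {
        scalar.0
    }
}

// With NaN ruled out at construction time, equality is total, so `Eq` and
// `Hash` can be implemented.
impl Eq for Scalar {}

impl Hash for Scalar {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Normalize `-0.0` to `0.0`, so values that compare equal hash equally.
        let value = if self.0 == 0.0 { 0.0 } else { self.0 };
        value.to_bits().hash(state);
    }
}

/// Wraps an array of `Scalar`, with a panicky conversion from nalgebra.
#[derive(Clone, Copy, Debug, Eq, PartialEq, Hash)]
pub struct Point<const D: usize>(pub [Scalar; D]);

impl<const D: usize> From<nalgebra::Point<f64, D>> for Point<D> {
    fn from(point: nalgebra::Point<f64, D>) -> Self {
        let mut coords = [Scalar(0.0); D];
        for (coord, value) in coords.iter_mut().zip(point.coords.iter()) {
            *coord = Scalar::from(*value);
        }
        Self(coords)
    }
}

fn main() {
    use std::collections::HashSet;

    let point = Point::from(nalgebra::point![1.0, 2.0, 3.0]);

    // The payoff: points can go straight into a `HashSet`, no workaround.
    let mut set = HashSet::new();
    set.insert(point);
    assert_eq!(set.len(), 1);
}
```

The `-0.0` normalization in `Hash` keeps `Eq` and `Hash` consistent, since `0.0 == -0.0` even though their bit patterns differ.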
Initially, that could be all there is. The easy conversions would always provide access to any nalgebra operations, and enable a piecemeal migration. Over time, we could add more conveniences, like implementing the `std::ops` traits directly. These implementations would defer to nalgebra, of course.
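Deferring to nalgebra for operators could look something like the following. This is a separate, self-contained miniature (wrapping concrete nalgebra types rather than the const-generic sketch above), and all names are made up for the example:

```rust
use std::ops::Add;

#[derive(Clone, Copy, Debug)]
struct Point3(nalgebra::Point3<f64>);

#[derive(Clone, Copy, Debug)]
struct Vector3(nalgebra::Vector3<f64>);

impl Add<Vector3> for Point3 {
    type Output = Point3;

    fn add(self, rhs: Vector3) -> Point3 {
        // The actual math is deferred to nalgebra; the panicky validation
        // from the plan would live in the conversions between the wrapper
        // types and nalgebra's types.
        Point3(self.0 + rhs.0)
    }
}

fn main() {
    let p = Point3(nalgebra::point![1.0, 2.0, 3.0]);
    let v = Vector3(nalgebra::vector![0.5, 0.5, 0.5]);

    println!("{:?}", p + v);
}
```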
One concern here is how performance would be affected, as this plan would add panicky checks to lots of places. I suspect that we can optimize performance-sensitive code by working only with nalgebra/`f64`, and only doing the conversion at the end. But I do think the added robustness and ease of development is worth some performance cost.
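The "conversion at the end" pattern could look roughly like this; `to_checked` stands in for the panicky conversion from the plan and is not an existing API:

```rust
// Hot path stays in plain nalgebra/f64; validation happens once, at the end.
fn to_checked(value: f64) -> f64 {
    assert!(value.is_finite(), "invalid scalar value: {value}");
    value
}

fn main() {
    let points = [
        nalgebra::point![1.0, 2.0, 3.0],
        nalgebra::point![4.0, 5.0, 6.0],
        nalgebra::point![7.0, 8.0, 9.0],
    ];

    // No per-operation checks while computing.
    let mut sum = nalgebra::Vector3::zeros();
    for point in &points {
        sum += point.coords;
    }
    let centroid = sum / points.len() as f64;

    // A single check at the boundary, where the result leaves the hot path.
    let centroid = [
        to_checked(centroid.x),
        to_checked(centroid.y),
        to_checked(centroid.z),
    ];

    println!("{centroid:?}");
}
```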
Here are some alternatives to the above plan that I've dismissed:

- Keep using nalgebra's `Point`/`Vector` types directly, instead of adding custom wrapper types. This is possible, but would cost us the easy `From`/`Into` conversions we could otherwise have.
- Use Decorum's `R64`, instead of adding a custom `Scalar` type. In addition to having the same disadvantage as the previous item, I've tried this in the past, and ran into problems. `R64` doesn't implement certain nalgebra traits that are required for some operations (I don't recall details). This could be overcome, but would require upstream work.

I'm thinking of working on this soon, but I'm not sure. On the one hand, I was hoping not to get distracted from #97, but on the other hand, those workarounds I need to make are distracting too. Very tempting to just get this over with now.