
Float min/max/clamp follow IEEE754-2008 (and not IEEE754-2019) WRT negative zeros #83984

Open
thomcc opened this issue Apr 7, 2021 · 47 comments · Fixed by #136296
Labels
A-floating-point Area: Floating point numbers and arithmetic C-bug Category: This is a bug.

Comments

@thomcc (Member) commented Apr 7, 2021

In a blog post (shameless plug) I wrote on clamping between 0.0 and 1.0, I noticed that we don't treat -0.0 as less than +0.0. Instead, we return whichever is the first argument:

>> f32::min(0.0, -0.0)
0.0
>> f32::min(-0.0, 0.0)
-0.0
>> f32::max(0.0, -0.0)
0.0
>> f32::max(-0.0, 0.0)
-0.0

That behavior is kinda incoherent, probably unintentional, and it seems highly unlikely that anybody is relying on it, so I suspect we're free to fix it.

While IEEE-754 defines two families of min/max functions (differing on NaN handling), all of them are required to treat -0.0 as less than +0.0. We should fix our version to do this ¹.

This problem is present in {float}::clamp too, which ideally would produce +0.0 for f32::clamp(-0.0, +0.0, _). (However, note that clamp uses a different² version of min/max than {float}::{min, max} do, so fixing it here won't automatically fix it for clamp).

Excerpt from IEEE754 2019 section 9.6 "Minimum and maximum operations" (emphasis mine)

  • maximum(x, y) is x if x > y, y if y > x and a quiet NaN if either operand is a NaN, according to 6.2. For this operation, +0 compares greater than −0. ...

  • maximumNumber(x, y) is x if x > y, y if y > x, and the number if one operand is a number and the other is a NaN. For this operation, +0 compares greater than −0. ...

  • minimum(x, y) is x if x < y, y if y < x, and a quiet NaN if either operand is a NaN, according to 6.2. For this operation, −0 compares less than +0. ...

  • minimumNumber(x, y) is x if x < y, y if y < x, and the number if one operand is a number and the other is a NaN. For this operation, −0 compares less than +0. ...
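To make the -0.0 rule concrete, here is a minimal Rust sketch of the 2019 minimum operation quoted above (a hypothetical helper, not a std API):

```rust
// Hypothetical sketch of IEEE 754-2019 `minimum` for f64: NaN-propagating,
// and -0.0 compares less than +0.0 when the operands are otherwise equal.
fn ieee_minimum(x: f64, y: f64) -> f64 {
    if x.is_nan() || y.is_nan() {
        f64::NAN
    } else if x < y {
        x
    } else if y < x {
        y
    } else if x.is_sign_negative() {
        // x == y here, but x may be -0.0 while y is +0.0
        x
    } else {
        y
    }
}

fn main() {
    // Both argument orders return -0.0, unlike today's f64::min.
    assert!(ieee_minimum(0.0, -0.0).is_sign_negative());
    assert!(ieee_minimum(-0.0, 0.0).is_sign_negative());
    assert!(ieee_minimum(f64::NAN, 1.0).is_nan());
}
```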

CC @workingjubilee


¹ Alternatively, since these functions only "should" be provided, we could say that we just don't provide equivalents to those functions and that ours are different functions. If we do this, we should have a very good reason IMO, and should document this as well.

² Note that for reasons³ our {float}::clamp functions are documented as propagating NaNs, but {float}::{min, max} are not. This means this does have to be fixed in multiple places, although, it's coherent under IEEE754 if we assume

  • {float}::{min, max} use minimumNumber and maximumNumber respectively
  • {float}::clamp uses minimum and maximum

³ Nobody asked, but personally, I'd rather clamp be consistent with min/max, which for this has the benefit of guaranteeing that the result is always in the provided bound, which is quite desirable sometimes. I know it was probably deliberate, and I do get why, and know it can't be changed, but I'm still going to use this footnote-within-a-footnote on an obscure floating point bug report to gripe about it.

@eggyal (Contributor) commented Apr 7, 2021

@rustbot label: +A-floating-point +C-bug

@rustbot rustbot added A-floating-point Area: Floating point numbers and arithmetic C-bug Category: This is a bug. labels Apr 7, 2021
@est31 (Member) commented Apr 7, 2021

Rust just calls the llvm.minnum.f64 or llvm.minnum.f32 intrinsics. Quoting the language reference:

If either operand is a NaN, returns the other non-NaN operand. Returns NaN only if both operands are NaN. The returned NaN is always quiet. If the operands compare equal, returns a value that compares equal to both operands. This means that fmin(+/-0.0, +/-0.0) could return either -0.0 or 0.0.

There are also llvm.minimum.f64 and llvm.minimum.f32 intrinsics (I needed to squint the first time: it's minnum vs. minimum). Quoting their docs:

If either operand is a NaN, returns NaN. Otherwise returns the lesser of the two arguments. -0.0 is considered to be less than +0.0 for this intrinsic. Note that these are the semantics specified in the draft of IEEE 754-2018.

Maybe rustc should be changed to use minimum instead of minnum?

@eggyal (Contributor) commented Apr 7, 2021

Maybe rustc should be changed to use minimum instead of minnum?

The docs for f32::min and f64::min both state:

If one of the arguments is NaN, then the other argument is returned.

Which this proposed change would break.

@thomcc (Member, Author) commented Apr 7, 2021

LLVM's minnum (at the link above) also says

Follows the IEEE-754 semantics for minNum, except for handling of signaling NaNs. This matches the behavior of libm's fmin.

which does not hold for -0.0, the very thing this bug is about. So maybe this is an LLVM bug.

@est31 (Member) commented Apr 7, 2021

More reading:

The minimum intrinsic has existed since LLVM 8.0, while the minimum LLVM version that rustc supports is 10, so it should be safe to use from that point of view.

Which this proposed change would break.

Hmm that's bad. :/.

@thomcc (Member, Author) commented Apr 7, 2021

I think this is a bug in llvm.minnum, although it's unclear which of

Follows the IEEE-754 semantics for minNum, except for handling of signaling NaNs

and

This means that fmin(+/-0.0, +/-0.0) could return either -0.0 or 0.0.

is more important (since those two parts of the docs are in contradiction). If there's a performance cost to the former, I could imagine LLVM would rather keep it as-is.

That said, for them it wouldn't be breaking to fix the bug so that they're in compliance with IEEE minNum, since they don't specify the result.

@est31 (Member) commented Apr 7, 2021

For Rust, I don't know how much the documentation mentioning this behaviour prevents a change. It might just be that it only means that the documentation needs to be changed as well, nothing more.

The sentence seems to have been introduced by commit f44d287 .

@thomcc (Member, Author) commented Apr 7, 2021

I filed an LLVM bug: https://bugs.llvm.org/show_bug.cgi?id=49886

And yeah, I just busted out my old copy of IEEE 754-2008 and can confirm that it doesn't specify the result for signed zeros. I mentioned maybe a new intrinsic could be added instead.

For Rust, I don't know how much the documentation mentioning this behaviour prevents a change. It might just be that it only means that the documentation needs to be changed as well, nothing more.

IMO it would be a breaking change (even if it weren't documented, but especially given that it is). Faking Ord for floats by a.partial_cmp(b).unwrap() isn't unheard of, and so changing things to introduce way more NaNs could cause working code to break. (Also, I could easily imagine using the documentation there to justify not checking for NaN in the result of f32::min/f32::max, and would be upset if it changed out from under me)

Also, way more people will notice and care about the handling of NaNs than about -0.0.


All that said, we can fix this for f32::clamp by using llvm.minimum and llvm.maximum instead, which isn't a breaking change (it propagates NaN, as mentioned).

@nagisa (Member) commented Apr 10, 2021

Do hardware implementations support this behaviour? Some random documentation for minsd of x86_64 says:

If the values being compared are both 0.0s (of either sign), the value in the second source operand is returned.

thus not supporting the behaviour desired here. AArch's FMIN appears to handle the 0.0 the "right" way.


LLVM's x86 backend is unable to select machine code for the llvm.minimum intrinsic, so it's not usable for generic implementations either way.

@eggyal (Contributor) commented Apr 10, 2021

I looked at what cg_clif is doing:

minnumf32, (v a, v b) {
    let val = fx.bcx.ins().fmin(a, b);
    let val = CValue::by_val(val, fx.layout_of(fx.tcx.types.f32));
    ret.write_cvalue(fx, val);
};
minnumf64, (v a, v b) {
    let val = fx.bcx.ins().fmin(a, b);
    let val = CValue::by_val(val, fx.layout_of(fx.tcx.types.f64));
    ret.write_cvalue(fx, val);
};

Taken together with Cranelift's documentation for its fmin and fmin_pseudo instructions, it would appear that cg_clif already takes the IEEE754 minimum approach (albeit there's an open issue that could see it switch to using minimumNumber instead)—and I don't think there's (currently) any Cranelift instruction available by which it could adopt the existing cg_llvm semantics.

To the extent that has any bearing on this discussion, I guess it just pushes slightly toward the same semantics being adopted in cg_llvm too.

IMO it would be a breaking change (even if it weren't documented, but especially given that it is). Faking Ord for floats by a.partial_cmp(b).unwrap() isn't unheard of, and so changing things to introduce way more NaNs could cause working code to break. (Also, I could easily imagine using the documentation there to justify not checking for NaN in the result of f32::min/f32::max, and would be upset if it changed out from under me)

I agree, but I wonder whether there's any way something like a Crater run could provide an indication of the extent to which these semantics are actually being relied upon (I can't see how—though it could at very least determine how many crates actually use the existing min methods). If the semantics were to change, it may be sufficient to lint uses of min with a warning to that effect (although that would mean either forever having to suppress the lint if so desired, or at some point it no longer warning by default).

If, as I suspect, the semantics of the existing methods cannot be changed, perhaps alternative methods might be added instead (eg minimum or similar), with a view to eventually deprecating the existing min methods in a future edition?

@eggyal (Contributor) commented Apr 10, 2021

Reproducing here @sunfishcode's helpful summary in https://github.com/bjorn3/rustc_codegen_cranelift/issues/1049#issuecomment-817067277:

Rust+llvm are following the family of functions known in IEEE 754-2008 as minNum which prefer a number over a NaN and don't define what happens between -0.0 and +0.0 (and has surprising behavior on sNaN).

Cranelift's fmin follows IEEE 754-2019's new minimum and WebAssembly's min, which prefer a NaN over a number and treat -0.0 as less than +0.0 (and have unsurprising behavior on sNaN).

IEEE 754-2019 also defines minimumNumber, which prefers a number over a NaN and treats -0.0 as less than +0.0 (and has unsurprising behavior on sNaN). WebAssembly's MVP was designed before IEEE 754-2019's direction on this issue was known, so it didn't originally add operators for minimumNumber and maximumNumber. Now that IEEE 754-2019 is published, it's a reasonable assumption that WebAssembly should eventually add operators for these.
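For contrast with the summary above, here is a sketch of the 2019 minimumNumber family (again a hypothetical helper, not a std API): same zero ordering as minimum, but it prefers a number over a NaN, matching the documented NaN behavior of f64::min:

```rust
// Hypothetical sketch of IEEE 754-2019 `minimumNumber` for f64: prefers
// a number over a NaN, and still treats -0.0 as less than +0.0.
fn minimum_number(x: f64, y: f64) -> f64 {
    if x.is_nan() {
        if y.is_nan() { f64::NAN } else { y }
    } else if y.is_nan() {
        x
    } else if x < y {
        x
    } else if y < x {
        y
    } else if x.is_sign_negative() {
        x // equal operands: prefer the negative zero
    } else {
        y
    }
}

fn main() {
    // NaN loses to a number, and -0.0 wins over +0.0 in both orders.
    assert_eq!(minimum_number(f64::NAN, 1.0), 1.0);
    assert!(minimum_number(0.0, -0.0).is_sign_negative());
    assert!(minimum_number(-0.0, 0.0).is_sign_negative());
}
```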

@thomcc (Member, Author) commented Apr 10, 2021

I agree, but I wonder whether there's any way something like a Crater run could provide an indication of the extent to which these semantics are actually being relied upon

I'm certain I've implicitly relied on this in ways that wouldn't be tested on crater. A lot of the time you don't bother testing with NaN. Often, even if you do, it can be tricky to get NaN all the way into the right place in the code, but you want it handled a certain way if it does get there. Also, floating point stuff is common in GUI and graphics code, some of which isn't open source, and the stuff that is doesn't run on crater (it needs a GUI).

Again, I strongly feel that the documentation there is a promise that these are the semantics of the function, and that you can rely on them. Just because NaN is a weird value, and floats are less precise than the integers, doesn't mean the documented behavior of a function like this is allowed to change on that value. It's hard enough to handle floating point edge cases robustly.

If it weren't floats and NaN, would you consider changing the behavior? Say, of i32::signum on zero (which is an example of a case where the result isn't exactly "obvious", since the input isn't precisely part of the obvious input range) — the answer is (hopefully) of course not: the function promises that it behaves that way on that input, so it can't change.

Sorry, I might have a bit of a chip on my shoulder about this stuff, but I think it's a bit ridiculous to even consider, given that our IEEE compliance around NaN isn't even in question — we're compliant so long as this function is minimumNumber not minimum. It's the handling of negative zero where we're wrong, and we shouldn't break a behavioral promise the API makes for NaN (which people will notice) just to bring -0 (which very few will ever notice) into line.

And I'd only want to deprecate or lint if we can't bring it into line with minimumNumber, and even then, it'd be a very tough decision since these functions are very heavily used, -0 is such an edge case, and there would be a lot of churn for such a small thing. A new function that implements minimum as specified by IEEE754 2019 would probably be fine though, if a bit confusing, given that we're getting total comparison stuff soon too.

@thomcc (Member, Author) commented Apr 10, 2021

Anyway, changing the behavior on NaN could lead to UB: Something like x.max(0.0).min(255.0).to_int_unchecked::<u8>() is sound today on NaN input with the current rules, and becomes UB if max/min return NaN when given NaN input.

(Note: I don't think this example is that implausible)
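A sketch of that pattern (a hypothetical helper): under the current documented semantics, a NaN input is forced into range before the unsafe conversion ever runs:

```rust
// With today's min/max semantics, NaN.max(0.0) returns 0.0 (the non-NaN
// operand), so `clamped` is always within [0.0, 255.0] and the unchecked
// conversion is sound. If max/min propagated NaN instead, this would be UB.
fn to_byte(x: f32) -> u8 {
    let clamped = x.max(0.0).min(255.0);
    // SAFETY: clamped is in [0.0, 255.0] for every input, including NaN,
    // under the currently documented min/max semantics.
    unsafe { clamped.to_int_unchecked::<u8>() }
}

fn main() {
    assert_eq!(to_byte(f32::NAN), 0);
    assert_eq!(to_byte(300.0), 255);
    assert_eq!(to_byte(-5.0), 0);
}
```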

@nikic (Contributor) commented Apr 10, 2021

Just so we're absolutely clear on this: The current behavior regarding negative zero in both Rust and LLVM is very much intentional, and should very much not be "fixed".

@thomcc (Member, Author) commented Apr 10, 2021

The current behavior around negative zero is that it's unspecified. It follows an old version of the spec, but that function was replaced with one that clarifies the behavior for negative zero.

For LLVM, I have way weaker opinions, but if anything I think the "fix" is probably a new pair of intrinsics that match minimumNumber/maximumNumber. The current llvm.minnum/llvm.maxnum would stay as-is.

For Rust, I think it should be updated to be compliant with the new IEEE754 spec, which has all minimum and maximum functions treat -0.0 as less than 0.0, and which removed the min/max functions that treat them as equal. Do you have a reason that the current behavior around that should be kept?

@thomcc (Member, Author) commented Apr 10, 2021

The current behavior of {f32,f64}::{min,max}(-0.0, 0.0) is to just return the first argument, which seems incoherent, hard to rely on, likely platform dependent, and entirely undocumented. So I think we're free to change it.

If it turns out to be a big performance hit to handle it as specified, though, that could change my opinion.

If we don't change it, we should document that it's based on minNum from IEEE754-2008 (and not minimumNumber from IEEE754-2019), to avoid future confusion.

I also think there's no reason not to change clamp to just use llvm.minimum and llvm.maximum as-is, since it's compatible with their NaN handling, and that would fix this bug for it.

@nikic (Contributor) commented Apr 10, 2021

For Rust, I think it should be updated to be compliant with the new IEEE754 spec, which has all minimum and maximum functions treat -0.0 as less than 0.0, and which removed the min/max functions that treat them as equal. Do you have a reason that the current behavior around that should be kept?

The behavior exists for performance reasons. x86 does not support IEEE754 min/max operations, so all special cases need to be implemented explicitly. When targeting newer x86, NaN is handled through an additional cmpunord+blendv. On older targets (and if non-NaN cannot be proven) it's significantly more expensive. Handling negative zero adds more cost on top of that.

In addition to that, fmin/fmax in C are spec'd to follow IEEE754-2008 semantics, in that handling of negative zero is explicitly specified as optional for implementation feasibility reasons. This means that for soft-float targets, if the operation gets libcall-legalized, we do not have a guarantee that the invoked libcall will have special treatment of negative zero.
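A rough illustration of that extra cost (a hypothetical scalar fallback, not what rustc actually emits): ordering the zeroes on top of a zero-agnostic min requires an extra compare-and-fixup step on every call:

```rust
// Hypothetical fixup layered on f64::min, which may return either zero
// for min(±0.0, ±0.0): when both operands are zeroes, OR the bit
// patterns together so a negative sign bit on either operand wins.
fn min_ordering_zeroes(x: f64, y: f64) -> f64 {
    let m = x.min(y);
    if x == 0.0 && y == 0.0 {
        // -0.0 == +0.0 under `==`, so this branch covers all zero pairs.
        f64::from_bits(x.to_bits() | y.to_bits())
    } else {
        m
    }
}

fn main() {
    assert!(min_ordering_zeroes(0.0, -0.0).is_sign_negative());
    assert!(min_ordering_zeroes(-0.0, 0.0).is_sign_negative());
    assert!(!min_ordering_zeroes(0.0, 0.0).is_sign_negative());
    assert_eq!(min_ordering_zeroes(3.0, 1.0), 1.0);
}
```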

@nikic nikic changed the title Float min/max/clamp don't follow IEEE754 WRT negative zeros Float min/max/clamp follow IEEE754-2008 WRT negative zeros Apr 10, 2021
@thomcc thomcc changed the title Float min/max/clamp follow IEEE754-2008 WRT negative zeros Float min/max/clamp follow IEEE754-2008 (and not IEEE754-2019) WRT negative zeros Apr 10, 2021
@thomcc (Member, Author) commented Apr 10, 2021

Hmm, it's hard for me to say. Given that IEEE 754 2019 defines several min/max functions, but they all have the same behavior for -0.0 (that it's less than +0.0), that feels like a strong indication that that behavior is to be preferred.

I also feel like these functions are typically outside the hot path, and that in practice the branch that's needed to check for -0.0 will be correctly predicted essentially every time (assuming it's a branch — and I guess it may not be). That said, the decision here would presumably be mirrored by core_simd, and I do feel that it's very plausible that a simd version of min/max could be in the hot path — I've written code where this is the case, and while the scalar fallback in that code didn't use f32::min/max (nor could it have, really, it needed to use an explicit comparison)...

And I guess there are probably cases I'm not thinking of where you need the minimum value in an array or something, and then f32::min would be in the hot path.

That said, I was surprised that we didn't follow IEEE754 2019, and someone else may be too in the future. I'd feel a bit better if there's an explicit decision here, since it's a decision to stick with an outdated version of these APIs for performance reasons...

@workingjubilee (Member):

Regarding min/max, it sounds like NaN handling requires additional instructions, regardless of which behavior is desired on x86, and so that does not sound like a particularly strong performance concern.
It does sound like -0.0 vs. 0.0 is hypothetically a performance concern.
Rust has much bigger issues though re: performance concerns and floats.

@scottmcm (Member):

I know it was probably deliberate

It was definitely deliberate, because there's no good choice for what non-NAN value to return for a NAN input.

@kornelski (Contributor) commented Apr 12, 2021

I also feel like these functions are typically outside the hot path

In image processing these functions are very often in the hot path, and -0 never happens.

Sadly, with float functions there's always this tension between prioritizing accuracy and performance. I know for some scientific cases it's necessary to handle every weird edge case up to ULP. OTOH there are lots of cases where floats are just used for banal imprecise arithmetic. Not because floats are the right tool for the job, but only because otherwise half of the CPU would sit unused.

@workingjubilee (Member):

Most modern application processor architectures I have seen the pipeline diagrams for actually share their pipelines between floating point operations and SIMD, because they have combined floating point and SIMD registers; integral SIMD math would use the same pipelines, and may actually outstrip floating point performance. Floating point performance on its own does matter, of course, which is why I think having highly consistent semantics is not merely some academic or scientific mathematical concern: it enables the fastest evaluation of all, the one taking zero time, because it was already calculated.

So while I am sympathetic to the performance concerns, I do not believe we should ignore the possibilities for alternatives, other gains we could be getting, or places we can make up for any losses we might encounter as a result of tightening our specification.

That said, I am not in any particular rush to a conclusion, here. I expect this issue, like many floating point issues in this repository, will remain open for some time before it resolves one way or another. I am merely noting that it is possible there is a solution that may satisfy both concerns.

@bjorn3 (Member) commented Jul 29, 2021

Taken together with Cranelift's documentation for its fmin and fmin_pseudo instructions, it would appear that cg_clif already takes the IEEE754 minimum approach (albeit there's an open issue that could see it switch to using minimumNumber instead)—and I don't think there's (currently) any Cranelift instruction available by which it could adopt the existing cg_llvm semantics.

I have now adopted cg_llvm's semantics in bjorn3/rustc_codegen_cranelift@c6564f8. I would slightly prefer if either the semantics of Cranelift's fmin/fmax or fmin_pseudo/fmax_pseudo instructions were chosen, but for now this fixes some libcore and stdsimd tests. In the latter case by ensuring that the scalar min/max operations match the vector ones, which already had the same semantics as cg_llvm due to explicitly using comparisons and lane selection instead of dedicated intrinsics.

@scottmcm (Member):

minimum/maximum is #91079, I think?

@est31 (Member) commented Feb 21, 2023

@scottmcm yes, good point. Despite having commented in #91008, I hadn't made the connection to this issue. It's interesting that LLVM errors out for the intrinsics. Maybe this should be closed in favour of #91079?

@RalfJung (Member):

We do have minimum/maximum next to min/max now, so changing min/max NaN behavior is anyway completely off the table I assume. These are different operations in IEEE and we expose them as different operations.

But the discussion above talked a lot about NaNs when the issue really is about -0. The new operations are explicitly documented as treating -0 as less than +0 and I assume the implementation matches that. What remains is the old operations, where I don't think anything has changed?

@7086cmd commented Jan 30, 2025

Interestingly, it may behave differently on arm and x64, which may be due to this.

The following code snippets:

fn main() {
    let val = vec![-0f64, 0f64];
    let min = val.iter().copied().fold(f64::INFINITY, f64::min);
    let max = val.iter().copied().fold(f64::NEG_INFINITY, f64::max);
    println!("{val:?} with {min} and {max}");
}

Its result on x64 (Ubuntu) is:

[-0.0, 0.0] with -0 and -0

However, on arm (macOS), it is:

[-0.0, 0.0] with -0 and 0

@bjorn3 (Member) commented Jan 30, 2025

Are those both on x86_64 or is macOS on arm64? If the latter, this is more likely a cpu architecture difference than an OS difference.

@7086cmd commented Jan 30, 2025

Are those both on x86_64 or is macOS on arm64? If the latter, this is more likely a cpu architecture difference than an OS difference.

It seems to be an architecture difference, since I'm using an Arm Mac.

@vinc17fr:

Note that for x86, https://www.felixcloutier.com/x86/maxpd says for MAXPD: "If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned." So, this is not commutative. How does fold() behave in such a case?

@7086cmd commented Jan 30, 2025

Note that for x86, https://www.felixcloutier.com/x86/maxpd says for MAXPD: "If the values being compared are both 0.0s (of either sign), the value in the second operand (source operand) is returned." So, this is not commutative. How does fold() behave in such a case?

I was guessing that it's not because of fold. Instead, when I manually implement the min and max functions, the problem can be overcome.

@RalfJung (Member):

LLVM documents the underlying operation we use here as

If the operands compare equal, returns a value that compares equal to both operands. This means that fmin(+/-0.0, +/-0.0) could return either -0.0 or 0.0.

This means both the x86 and the ARM behavior would be okay -- in fact, the result has to be considered non-deterministic. We should probably explicitly document that? Or is this an LLVM bug? What does the IEEE754-2008 spec say on this topic? Currently, we just say "This follows the IEEE 754-2008 semantics for minNum".

@RalfJung (Member):

The standard says

minNum(x, y) is the canonicalized number x if x < y, y if y < x, the canonicalized number if one
operand is a number and the other a quiet NaN. Otherwise it is either x or y, canonicalized (this
means results might differ among implementations).

So indeed, returning different results on different architectures is permitted. I think this is then primarily a documentation issue.

@RalfJung (Member):

#136296 is my proposal for the docs update that should then also close this issue.

@tgross35 (Contributor):

I agree with Ralf that the best thing we can really do here is document the existing behavior. minimumNumber and maximumNumber semantics should be preferred if they don't come with a performance hit, but falling back to soft ops in order to maintain -0 < +0 wouldn't be ideal (IOW, our min and max should try to be minimumNumber and maximumNumber, but are allowed minNum/maxNum semantics). I assume this is what LLVM is doing for us already.

The new API (#91079) will provide 754-2019 minimum and maximum so we have a version that does order zeroes. Different NaN handling does leave us without a guaranteed minimumNumber and maximumNumber, for the time being at least. Maybe we should revisit this after minimum and maximum make it to stable?

Note that C23 tries to encourage fmin/fmax (our min/max) to order zeroes:

Ideally, fmax would be sensitive to the sign of zero, for example fmax(−0.0, +0.0) would return +0; however, implementation in software might be impractical

As well as mention that they shouldn't be used anymore (in favor of the version that we do not have):

The functions fmin and fmax have been superseded by fminimum_num and fmaximum_num. The fmin and fmax functions provide the minNum and maxNum operations specified in (the superseded) ISO/IEC/IEEE 60559:2011.

@tgross35 (Contributor):

For the above arch difference, arm emits a single fminnm; the docs only say it follows IEEE 754-2008 semantics, but it does seem to maintain ordering of zeroes. Baseline x86 uses minsd and AVX2 uses vminsd, which both say

If the values being compared are both 0.0s (of either sign), the value in the second source operand is returned.

https://rust.godbolt.org/z/dKj3hGsE7

@bors bors closed this as completed in 08dc8c9 Jan 31, 2025
rust-timer added a commit to rust-lang-ci/rust that referenced this issue Jan 31, 2025
Rollup merge of rust-lang#136296 - RalfJung:float-min-max, r=tgross35

float::min/max: mention the non-determinism around signed 0

Turns out this can actually produce different results on different machines [in practice](rust-lang#83984 (comment)); that seems worth documenting. I assume LLVM will happily const-fold these operations, so there could be different results for the same input even on the same machine, depending on whether things get const-folded or not.

`@nikic` I remember there was an LLVM soundness fix regarding scalar evolution for loops that had to recognize certain operations as non-deterministic... it seems to me that pass would also have to avoid predicting the result of `llvm.{min,max}num`, for the same reason?

r? `@tgross35`
Cc `@rust-lang/libs-api`

If this lands we should also make Miri non-deterministic here.

Fixes rust-lang#83984
@RalfJung (Member):

@tgross35 do you think we should say anything about clamp? The way it is implemented now, it is actually deterministic (it doesn't use min/max, but directly uses </>). Not sure if we want to guarantee that?

@tgross35 (Contributor):

We can probably clarify the current behavior, but maybe it is worth asking libs-api if we should guarantee it? Docs do currently say:

Returns max if self is greater than max, and min if self is less than min

Which, if you read "greater than" and "less than" as compareSignalingGreater and compareSignalingLess, does mean that the sign of zero should be ignored. I doubt anyone is reading that much into it though.

maximum/minimum semantics do seem preferable (agreeing with Thom's top post) but likely falls into the same boat as F::{min,max} regarding performance implications. Our minimum and maximum don't use the LLVM intrinsics yet so this isn't exactly a fair comparison https://rust.godbolt.org/z/Yevo5Y85v.

I'll reopen since clamp isn't resolved yet.

@tgross35 tgross35 reopened this Jan 31, 2025
@RalfJung (Member) commented Jan 31, 2025

So specifically what are the interesting cases and what is the current behavior? Things like clamp(0, -0, 0), I assume? That would currently return -0.

Most of the discussion here is about min and max so it's probably better to open a new issue focusing on clamp, and nominating that for t-libs-api.
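For reference, the comparison-based clamp behavior can be observed directly (this reflects the current implementation, not a documented guarantee):

```rust
// clamp uses plain < and > comparisons, under which -0.0 is neither
// less than the +0.0 lower bound nor greater than the upper bound,
// so a -0.0 input passes through unchanged.
fn main() {
    let r = (-0.0f64).clamp(0.0, 1.0);
    assert!(r.is_sign_negative()); // currently returns -0.0
    assert_eq!((-0.5f64).clamp(0.0, 1.0), 0.0); // below the bound: clamped up
}
```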

github-actions bot pushed a commit to rust-lang/miri that referenced this issue Feb 1, 2025
float::min/max: mention the non-determinism around signed 0

Turns out this can actually produce different results on different machines [in practice](rust-lang/rust#83984 (comment)); that seems worth documenting. I assume LLVM will happily const-fold these operations so so there could be different results for the same input even on the same machine, depending on whether things get const-folded or not.

`@nikic` I remember there was an LLVM soundness fix regarding scalar evolution for loops that had to recognize certain operations as non-deterministic... it seems to me that pass would also have to avoid predicting the result of `llvm.{min,max}num`, for the same reason?

r? `@tgross35`
Cc `@rust-lang/libs-api`

If this lands we should also make Miri non-deterministic here.

Fixes rust-lang/rust#83984
@RalfJung RalfJung reopened this Feb 2, 2025
@RalfJung (Member) commented Feb 2, 2025

Accidental close caused by subtree sync.
