Watch out for NaNs

Several threads here have discussed how f64 doesn't implement Eq or Ord or Hash because of NaN values. This is indeed unpleasant.

I think of NaNs as a floating point arithmetic design mistake, similar to having null values by default in reference types, which Tony Hoare has called a "billion dollar mistake".

But here is another problem with NaNs. Consider this innocent-looking code:

println!("{}", f64::sqrt(1001.3 - 1001.0 - 0.3));

You would think it would print something in the vicinity of the correct answer, 0.0, maybe a bit off due to rounding errors.

What does it actually print? NaN.

That's because due to rounding errors it ends up taking a square root of a tiny negative number, and sqrt is defined to return NaN for negative numbers.

So really, instead of f64::sqrt(x), you should always write f64::sqrt(x.max(0.0)) if you want to be robust to rounding errors.

I think it would be a lot better if sqrt were simply defined to be 0 for negative numbers. Unfortunately, the IEEE-754 standard chose NaN instead.

Similarly, f64::acos(1.00001) is NaN. You should really do f64::acos(x.clamp(-1.0, 1.0)) if you want to be robust.

Same thing for f64::asin, f64::atan, f64::log, etc. You really have to clamp the input to the correct range if you want them to work robustly in the presence of rounding errors.
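
For example, here is a minimal sketch of the clamped versions applied to the snippets above (nothing assumed beyond wrapping the arguments):

fn main() {
    let x = 1001.3 - 1001.0 - 0.3;           // about -4.5e-14 due to rounding
    println!("{}", f64::sqrt(x));            // NaN
    println!("{}", f64::sqrt(x.max(0.0)));   // 0
    println!("{}", f64::acos(1.00001));      // NaN
    println!("{}", f64::acos(1.00001_f64.clamp(-1.0, 1.0))); // 0
}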

I can think of only a few scenarios where there is no clear better alternative to NaN: 0 / 0, 0 * ∞, ∞ - ∞. It's not clear what they should return.

But, I think, it doesn't really matter very much what they return. If your code is robust to rounding errors, it has to be prepared for any answer to 0 / 0. If it's 0 / 0 precisely, you get NaN. But if it's 1e-300 / 0, you get infinity. If it's 1e-300 / 1e-300, you get 1.0. Etc. This is just a very unstable calculation.

So I think NaN might make sense for a few special cases like this, but not for other cases like sqrt(negative) or log(negative).

Since these special cases are only a few cases, and your code usually has to be prepared to get any answer from them anyway, it wouldn't do much harm to have a floating point arithmetic that just returns an arbitrary non-NaN answer in those cases as well (such as 0 or infinity), and get rid of NaNs altogether.

Also note that 1.0 / 0.0 = infinity is already quite arbitrary (it could be -infinity). You could argue that the denominator is +0.0 and not -0.0, and that 1.0 / (-0.0) = -infinity. But if we go down that rabbit hole, then 0.0 - 0.0 = +0.0 is itself an arbitrary sign choice; why not -0.0? So this business of dividing by zero is already quite arbitrary, and we might as well make +0.0 / +0.0 = infinity and -0.0 / +0.0 = -infinity, which would be pretty consistent with all the other arbitrary choices about +0.0 and -0.0.
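
To make those choices concrete, here is a tiny demonstration (standard IEEE behaviour, nothing assumed beyond that):

fn main() {
    println!("{}", 1.0_f64 / 0.0);                        // inf
    println!("{}", 1.0_f64 / -0.0);                       // -inf
    println!("{}", 0.0_f64 - 0.0);                        // 0, i.e. +0.0 by convention
    println!("{}", (0.0_f64 - 0.0).is_sign_positive());   // true
}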

10 Likes

For atan? I don't think so.

3 Likes

Ah, right, my mistake. Forget atan.

atanh is a better example: atanh(1.00001) is NaN.

There's a wide spectrum of use-cases for floats. IEEE floats are very quirky, but they are very precisely specified, and some developers take that seriously, and do tweak their calculations up to the ULP.

There are, OTOH, uses in graphics, game development, and machine learning where none of the fancy precision features matter, and sometimes people would even use bf16 if it were an option.

There have been multiple proposals for "fast floats", but even that falls apart when you look closely at exactly which guarantees should be relaxed: some people just don't want NaN, some want finite math, some want faster optimizations but not where they break code that adds things in a loop, and so on.

So for a start, I don't think anything about f32/f64 can be changed to deviate far from IEEE. It would be breaking backwards compatibility, and breaking standards-compliance.

I would like to have a "fast sloppy float" type in Rust, but that needs to be a new type (not a newtype, since newtypes don't have literals, and user-defined literals are a whole other rabbit hole :))

16 Likes

Strong disagree from me.

I do a lot of simulations mostly of electrical systems. IEEE 754 floating-point numbers are almost ideal for this purpose: they retain high precision over a very wide range of magnitudes, and rounding errors are usually below the accuracy of whatever your input data is, even for f32.

I've debugged my share of NaNs. The nice thing about NaN is that it's infectious: there are very few things you can do to a NaN to turn it back into a number without being very deliberate about it. So if I have a bug in my simulation code that turns something negative that logically should be positive, I'm likely to get an output full of NaNs. This is useless, but it's obviously useless, and I can easily debug the code working backwards to find where the erroneous calculation is, at which point I can usually quickly see things like "oh, I wrote y - x here, that should be x - y."
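
A minimal illustration of that propagation (using Rust's documented f64 behaviour; the sign bug is contrived):

fn main() {
    let nan = f64::sqrt(-1.0);      // a sign bug somewhere produced a negative input
    println!("{}", nan + 1.0);      // NaN -- propagates through addition
    println!("{}", nan * 0.0);      // NaN -- and through multiplication
    println!("{}", nan.max(0.0));   // 0 -- f64::max is documented to ignore NaN, one of the few escapes
}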

Zeroes have very little of that useful infectious quality, which means they're terrible when you're debugging. Moreover, zeroes have a tendency to make things silently wrong, like a matrix that looks quite ordinary but turns out to be singular when you try to invert it, so the origin of the problem is not obvious unless you painstakingly examine every step of the process and see where the spurious 0.0 sneaks in. Repeat 10x when the error is in an early pass and doesn't manifest as a problem until late in the process.

NaNs are not the worst case scenario for computational programs. The worst case scenario is bogus data that looks right. I wonder what kind of programs you write? Many people's initial reaction when they first encounter IEEE 754 is "that's weird". But there's no non-weird way of flattening the richness of real numbers into a small number of bits with useful performance characteristics for hardware operations. The best we can do is approximate the properties we care about.

If you find NaNs to be useless, by all means, use some other type. But don't tell me they're "a problem". There's nothing wrong with IEEE 754 except that most people don't really understand the nuanced representational problem it addresses, and that it's by far the most common non-integer form of arithmetic available on real hardware, so people use it incorrectly all the time.

Every tool is weird and abstract when you don't know what it's for, and if you try to use a pipe wrench for turning bolts or whatever you're going to be frustrated. But don't go into the plumbing store and lecture on the problem with pipe wrenches and how they could be fixed to be better at turning bolts.

Re: your example,

This is not the kind of "rounding error" I am chiefly concerned with. The only "error" here is in the decimal representation of these numbers, which are actually represented with much greater precision internally than you have used in this code. As I alluded to earlier, in simulations this is usually fine because the inputs you provide to the simulator are usually of much less precision (being derived from real-world measurements, parametric sweeps, or even just educated guesses) than f64 or even f32 is capable of. If something like this happens internally, the software needs to be written to handle it - not just default to 0. But if it happens because the user deliberately crafted the input values to almost cancel each other out, well, they asked for garbage, so you may as well give them the best garbage you've got.

39 Likes

Yeah I agree you can't change f64, and that IEEE-754 and NaNs go together.

So this is mostly a rant, rationalized by a bit of actionable advice about clamping your input values.

But yeah, there could be another non-IEEE type that is Eq and Ord and implements operations the way I said. I wouldn't call it "sloppy float", I'd call it "robust float" :stuck_out_tongue:

2 Likes

Interesting...

Python bails with nan:

python3
Python 3.9.1 (default, Feb  3 2021, 07:38:02) 
[Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import math
>>> 1001.3 - 1001.0 - 0.3
-4.546363285840016e-14
>>> math.sqrt(1001.3 - 1001.0 - 0.3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: math domain error
>>> 

C fails silently and produces nan:

➜  cat nan.c
#include <stdio.h>
#include <math.h>

int main () {
    printf("%f\n", 1001.3 - 1001.0 - 0.3);
    printf("%f\n", sqrt(1001.3 - 1001.0 - 0.3));

}
➜  gcc nan.c
➜  ./a.out    
-0.000000
nan

Javascript silently produces nan:

➜  node
Welcome to Node.js v15.14.0.
Type ".help" for more information.
> 1001.3 - 1001.0 - 0.3
-4.546363285840016e-14
> Math.sqrt(1001.3 - 1001.0 - 0.3)
NaN
> 

It's not clear to me why 1001.3 - 1001.0 - 0.3 is -4e-14 for Python and node.js but -0.000000 for C.

In all cases, though, we are trying to take the square root of a negative number thanks to the rounding errors inherent in representing decimal numbers in binary. So NaN is a good result.

Why should Rust be different?

I think the argument that NaNs are useful for debugging when you run into them could instead be addressed by some sort of hardware counter, or a software/debug-only counter, telling you how many times you had to perform a suspicious operation, such as sqrt(negative).
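
As a rough sketch of what that software counter could look like (robust_sqrt and NEGATIVE_SQRT_COUNT are hypothetical names, not an existing API):

use std::sync::atomic::{AtomicU64, Ordering};

static NEGATIVE_SQRT_COUNT: AtomicU64 = AtomicU64::new(0);

// A clamping sqrt that still records how often it had to clamp,
// so the "suspicious operation" is countable instead of poisoning results.
fn robust_sqrt(x: f64) -> f64 {
    if x < 0.0 {
        NEGATIVE_SQRT_COUNT.fetch_add(1, Ordering::Relaxed);
    }
    f64::sqrt(x.max(0.0))
}

fn main() {
    println!("{}", robust_sqrt(1001.3 - 1001.0 - 0.3));   // 0
    println!("clamped calls: {}", NEGATIVE_SQRT_COUNT.load(Ordering::Relaxed));
}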

I have written a good chunk of guidance and navigation software for SpaceX Crew Dragon.

That doesn't seem right. The square root of negative numbers is not real, let alone 0. If we regard floats as an approximation of reals, then the square root of negative numbers simply doesn't exist in the domain of floats. Implicitly making these operations return 0 smells like a footgun, a time bomb that will eventually explode, and I'm not sure it's any better than NaN.

I think it's actually worse, because NaN at least signifies that something went wrong, but saturating to 0 conflates a successful 0 return value and an error, therefore it's much harder to debug.

However, I agree that NaN is a pain. If I were to design Rust today, I would make floats not-NaN by default, and I would equate Option<f32> and Option<f64> with fully-fledged IEEE-754 floats, where the None variant fills the NaN niche.

Then any potentially fallible functions could return Option<f64>, and they would additionally be defined on Option<f64> as well, for convenience. (There could also be panicking versions of these functions, I don't know.) This would then allow me to write guaranteed NaN-free code if I wanted it, while also permitting fast-and-loose coding when that's admissible.
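
A rough sketch of what such an API could look like (checked_sqrt is a made-up name; today's Rust doesn't provide it, and the real proposal would use the NaN niche rather than a plain Option):

// Hypothetical fallible sqrt that returns None instead of NaN.
fn checked_sqrt(x: f64) -> Option<f64> {
    if x >= 0.0 { Some(f64::sqrt(x)) } else { None }
}

fn main() {
    // Strict caller: propagate the failure.
    assert_eq!(checked_sqrt(-1.0), None);
    // Fast-and-loose caller: explicitly coerce the failure to 0.
    println!("{}", checked_sqrt(1001.3 - 1001.0 - 0.3).unwrap_or(0.0));   // 0
}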

21 Likes

This paragraph makes no sense to me. The example has typical rounding errors (in decimal to binary conversion, and in subtraction). This is normal, not some specially crafted exception.

When you say "software needs to be written to handle it -- not just default to 0" -- returning 0 from (a robust version of) sqrt is the proper way to handle this rounding error, because that creates a continuous extension of sqrt, which is exactly what you want to do if you want to make the calculation robust to handle rounding errors.

It's not real, but close to 0, and if you assume your inputs have a rounding error and are actually positive in infinite precision, then it's the closest approximation you can have to the mathematically precise real answer. That's why it's much more useful in practice to have such a safe-sqrt function that returns 0 than the one that discontinuously jumps to NaN.

The one that jumps to NaN can't be used near 0 in a numerically stable way (unless of course you explicitly check for negatives and correct them to 0 yourself).

But in the example nothing went wrong (well, if you assume a saner sqrt definition). You just encountered a slight rounding error, which is totally expected, and 0 is a fine, well-rounded answer.

No, they are purely imaginary with magnitude sqrt(|x|), so for larger absolute values of x, they are definitely not close to 0.

if you assume your inputs have a rounding error and are actually positive in infinite precision

But that's the point: the library can't make that assumption on behalf of the caller. If I make a calculator app, the user enters sqrt(-100000) and it comes back as 0, they will be upset. NaNs don't always (or even primarily) arise out of rounding errors. It's irresponsible to disregard the myriad of other ways one can get NaN, wing it, and say "they just wanted 0 anyway".

Then the caller should write sqrt(x).unwrap_or(0.0). This is the same point as above – the library won't, can't, know upfront that it will be used in this manner, and only in this manner.

10 Likes

Fair point about larger values, which are more likely to simply indicate a bug than to be a result of rounding errors.

Still, for small values you will always have to do the unwrap_or(0.0); that's essentially the only way to make the calculation continuous, i.e. to handle rounding errors.

Even in the calculator scenario, if you do something like 3 + sqrt(1000.3 - 1000.0 - 0.3), users want to see 3 (or something close to 3) rather than "error". The calculator may want to make a judgement call like "if x < -0.001 then say error", but otherwise it will still want to use 0 for negative values. In a sense, by choosing 0.0 as the threshold, the library is making that judgement call for the user app, in the wrong place.

No, by choosing 0 as the threshold, the library is making the only technically correct, non-magical decision, giving the user the possibility to coerce NaN or None to 0 if that's required. But it can't know what the acceptable threshold should be, and if it tried to guess, it would lose information. There would be no way to go from 0 to NaN/None, but it is trivial to go from NaN/None to 0.

5 Likes

That's a strong-sounding term. If the API says my_library::sqrt(-1.0) = 0, then that's what is technically correct.

I agree that's not IEEE-correct, so I'm definitely departing from IEEE.

There is a way: if you want to error out for even slightly negative values, you can just do:

if x < 0.0 {
    panic!()
} else {
    sqrt(x)
}

My point is, 99% of the time that's not what you really want because you want to be robust to rounding errors.

Let me put it this way: If you are not keeping track of the accuracy of your calculations, you cannot be justified in the belief that NaNs are the result of "rounding error" and not the result of erroneous input or your own flawed reasoning.

1001.3 presumably isn't an infinitely precise quantity. Very few numbers ever are (and the ones that are tend to not cause these kinds of problems). It's a measurement, or a boundary condition, or the outcome of some other process that provides a number, but whatever it is, it has error bars on it, probably on the order of ± 0.05 (since you decided to write it to 1 decimal place). Same goes for 1001.0 and 0.3.

If you're tracking those error bars with every calculation, you find that 1001.3 - 1001.0 - 0.3 is in fact not exactly 0.0 but 0.0 ± 0.15. And if you do the same analysis with f64, you'll find that although the center of the range is ever so slightly below 0, the possible valid range for that computation covers both positive and negative values. In that case, where you've done the analysis to determine the magnitude of your error, it's fine to clamp the result to the range you know is reasonable; i.e., positive numbers.
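
A crude sketch of that kind of tracking (Interval is a made-up helper here; real error analysis or interval-arithmetic crates are more careful than this):

#[derive(Clone, Copy, Debug)]
struct Interval { lo: f64, hi: f64 }

impl Interval {
    // Treat a value written to one decimal place as carrying roughly ±0.05 of uncertainty.
    fn measured(x: f64) -> Self {
        Interval { lo: x - 0.05, hi: x + 0.05 }
    }
    // [a, b] - [c, d] = [a - d, b - c]: the uncertainties add up.
    fn sub(self, other: Interval) -> Self {
        Interval { lo: self.lo - other.hi, hi: self.hi - other.lo }
    }
}

fn main() {
    let r = Interval::measured(1001.3)
        .sub(Interval::measured(1001.0))
        .sub(Interval::measured(0.3));
    // Roughly [-0.15, 0.15]: the interval straddles zero, which is what justifies
    // clamping a slightly negative result before taking the square root.
    println!("{:?}", r);
}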

But it's not OK to do that to arbitrary computations. You need to know where your numbers come from, how precise they are and what they mean to be able to do that analysis. The square root of a negative number isn't always a rounding error.

21 Likes

If you seriously believe that, then there's a billion dollars for you to make by implementing a programming language with the floating point semantics you want.

Indeed. NaNs are not a problem by themselves, and most operations with them are sane. The only things which are obviously wrong are the comparisons (has anyone ever seen an algorithm where the fact that NaN is not equal to itself is useful? I've seen many where that's a nuisance).

Of course some platforms are weirder than others where NaNs are concerned. Intel, in particular, does the following:

min(1.0, NaN) = NaN
min(NaN, 1.0) = 1.0
max(1.0, NaN) = NaN
max(NaN, 1.0) = 1.0

Can anyone show me at least one algorithm where such definition of min and max is useful?

4 Likes

I agree: sometimes it's fine, sometimes it's a bug. sqrt can't know.

In cases like that, you want to assume it's not a bug. It could be just fine. The current definition of sqrt assumes it's always a bug and essentially refuses to work, even in cases where it's fine. That's not a useful assumption.

Like I said, there are other ways to provide debugging tools for the bug case.

Is that a result of Intel's behaviour or of Rust's implementation?