That's just a rustc warning, not an error.

We might have to debate what we think should be core functionality.

I have no opinion yet in this case but I'm all for the Rust idea of keeping as much junk out of std as possible. The fact that what we have in Rust now behaves as the IEEE 754 spec says it should and like many other languages is enough for me. So far...

True.

Now how do I get such a "warning" at run time? That might be nice.

You could create a wrapper type...

Grrr....

Is it really so that, after building computers for seven decades or so, we cannot get a processor to raise its hand and say "I can't sensibly do what you have asked" (a signal of some kind) and have our languages do something sensible with it?

You know, like addition results that do not fit in the registers any more, comparison of numbers with non-numbers, and so on.

We do it for integer divide by zero; what about the others?

This might be a bit controversial: Intel f'ed up when they decided on that behavior and handed it to the IEEE. It's all been backwards compatibility since then, except for some processors which inverted signalling/non-signalling NaN by mistake.

And then there was Rust. A language that emphasises correctness, that tries very hard to prevent one from accidentally getting silly results. We have array bounds checks, we have checks for out-of-range integer calculations (admittedly optional), surely we can expect Rust to have checks for NaNs, infinities, etc.?

I could understand if Rust had chosen either direction. I do have an untested suspicion that NaN checks can't be optimized away as nicely as array bounds checks can be.

I guess there is the tension. Does one want optimisation and performance or does one want correctness?

That tension has always been there. Tony Hoare tells of how customers wanted the performance of Fortran rather than the correctness of his ALGOL compiler, despite the fact that their software was full of potential bugs, as shown when their code was transpiled to ALGOL and promptly rejected by the ALGOL compiler!

The problem is that every hardware architecture has a different kind of NaN check behavior, some of which involve "let's report it to the OS!" and now you're writing a SIGFPE handler, and that crashes the program by default, so then the OS flips the switch in the hardware to "shut up, just produce a NaN and don't bother me about it" and you get about where we are today.

Equality of real numbers is undecidable. 1.999 recurring and 2 are intensionally unequal (they're different strings of digits) but extensionally equal (those strings represent the same thing). It's logically impossible to have an algorithm that correctly tells you whether two arbitrary real numbers are equal or which is larger than the other. This is true regardless of how you choose to represent real numbers. So while `a == b` may be meaningful as a logical proposition, it's not meaningful as a computable operation that returns `bool`.

All the problems with floating point numbers stem from the fact that they're trying to do something inherently nonsensical: modelling real numbers and then endowing them with operations like `==` and `<`. I think this is what @worik was getting at.

No. The problems in this thread come from NaNs. It's very possible to exclude NaN values, and then you get a total order.

```
println!("{}", 0.0 < 0.4 + 0.2 - 0.6); // prints true
```

Your total order is broken.

Real numbers don't have decidable total ordering. Pretending they do leads to problems.

This produces a number which still has a total order.

This is meaningless. The concept of a "decidable problem" or "undecidable problem" applies to decision problems. Decision problems are functions from finite strings to "yes/no" bits. There is no 1-1 encoding of real numbers into finite strings, hence this statement makes no sense.

What's "broken" here (by your definition) isn't the ordering, it's the addition and subtraction.

However, it's not really broken. Nobody claims arithmetic operations introduce no rounding error.

But we're not even talking about arithmetic operations like + and -, we're talking about comparisons, and those create a total order.

That is a rather mind blowing assertion.

I would counter that it is impossible for me to give you two arbitrary real numbers to compare even if you had such an algorithm. Therefore the question of whether such an algorithm exists or not is meaningless.

Logically or otherwise impossible? Well...

A physicist would say that if I tried to create arbitrary real numbers I would have so many bits in my space that it would all collapse into a black hole. Or that the universe is just not big enough in space or time for me to create most such numbers.

As for logically, most real numbers are transcendental. Do we even know how to generate arbitrary transcendental numbers?

I don't know.

Seems to me I have been reading these kinds of discussions about floating point math for 40 years or so now. The outcome has always been the same.

Suggest: "What Every Computer Scientist Should Know About Floating-Point Arithmetic" https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf

40 years only? This dispute started more than 300 years ago between a few mathematical schools, and it still hasn't concluded.

There are the orderable reals ℝ, the non-orderable extended reals ℝ̄ which floating point is trying to emulate (but failing, of course), also the complex numbers ℂ, and many others.

When people start talking about real numbers and then try to write actual programs to prove their point… you know they don't know what they are talking about.

You know, in programming circles.

I have heard that Pythagoras and/or the Pythagoreans had their minds blown by the discovery that the square root of 2 was not rational, around 500 BC or so.

Thanks, Hippasus, see what trouble you caused!

Note that ℝ̄ is actually orderable. It's NaN, representing out-of-domain values, that causes floating point's ordering problems.