Floating point number tricks

Reading a book about Rust, I found this code:

fn main() 
{
    assert!(0.1 + 0.2 == 0.3)
}

The author says that this code compiles (true) and runs without error, but I get:

thread 'main' panicked at 'assertion failed: 0.1 + 0.2 == 0.3', r448.rs:3:5
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

The author also says that under the hood Rust uses the epsilon constant for the comparison, so there is no error.

So which is correct?

Thx.

1 Like

Not sure.

Does it really matter though? One should never be comparing floats for equality anyway.

2 Likes

Just for academic purposes. I like to understand something about compiler internals. And I'm also a bit surprised about what the book is claiming.

If you run Clippy over the code, you get a warning:

error: strict comparison of `f32` or `f64`
 --> src/main.rs:3:13
  |
3 |     assert!(0.1 + 0.2 == 0.3)
  |             ^^^^^^^^^^^^^^^^ help: consider comparing them within some margin of error: `(0.1 + 0.2 - 0.3).abs() < error_margin`
  |
  = note: `#[deny(clippy::float_cmp)]` on by default
  = note: `f32::EPSILON` and `f64::EPSILON` are available for the `error_margin`
  = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#float_cmp
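Applied to the snippet above, Clippy's suggestion looks like this (a minimal sketch; using f64::EPSILON as the error_margin is reasonable here only because the values involved are near 1.0):

fn main() {
    // Clippy's suggested pattern: compare within a margin instead of ==.
    let error_margin = f64::EPSILON;
    assert!((0.1_f64 + 0.2 - 0.3).abs() < error_margin);
}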

For reference: you can always run Clippy easily by choosing it from the “Tools” menu on the Rust Playground.

I can't confirm for sure that the claim in the book is false, but I've never heard of that, and Clippy warns about doing such a comparison.

4 Likes

This is wrong; Rust does not do that. Float equality is just normal float equality according to IEEE 754.
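A quick way to convince yourself there is no hidden epsilon (a minimal sketch; the longer literal below is the decimal that the sum actually rounds to):

fn main() {
    // Plain IEEE 754 equality: the book's assertion really does fail...
    assert!(0.1 + 0.2 != 0.3);
    // ...because the sum is the f64 nearest to this longer decimal:
    assert!(0.1 + 0.2 == 0.30000000000000004);
}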

16 Likes

That's bad .... this is not a simple typo, then.

thx.

It would be quite problematic if Rust did do this, because it is almost never correct to use a constant value for float error margins. E.g., if you're doing a calculation with very small numbers, the fixed value of epsilon will be too generous a margin, and if you're doing a calculation with large numbers, it will be too strict. {f32,f64}::EPSILON should only be used when your calculation is performed with values that lie near 1.0. Otherwise, you'll want to calculate a relative error margin, as sketched below.
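To illustrate (a minimal sketch; approx_eq_relative is a made-up helper, not anything from std, and the values are chosen only to show the scaling problem):

// Compare using a margin scaled by the operands' magnitude, so the
// test stays meaningful for both small and large values.
fn approx_eq_relative(a: f64, b: f64, rel_tol: f64) -> bool {
    let diff = (a - b).abs();
    let largest = a.abs().max(b.abs());
    diff <= largest * rel_tol
}

fn main() {
    // Near 1.0, an absolute f64::EPSILON margin happens to work:
    assert!((0.1_f64 + 0.2 - 0.3).abs() < f64::EPSILON);

    // Scale the same computation up and the absolute margin becomes far
    // too strict: the difference is now 0.5, not ~5.6e-17.
    let a = 1e16 * (0.1 + 0.2);
    let b = 1e16 * 0.3;
    assert!((a - b).abs() >= f64::EPSILON);
    assert!(approx_eq_relative(a, b, f64::EPSILON));

    // For tiny values the absolute margin is far too generous: these two
    // differ by a factor of two, yet pass the absolute test.
    assert!((1e-20_f64 - 2e-20).abs() < f64::EPSILON);
    assert!(!approx_eq_relative(1e-20, 2e-20, f64::EPSILON));
}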

5 Likes

In summary: The author of your book is incorrect. 0.1, 0.2, and 0.3 do not have exact representations in fpN for any N (e.g., fp32 or fp64), so most computations involving them are unlikely to be exact.
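To make that concrete, printing the values with enough digits shows what the literals actually become (a minimal sketch; the commented output is what f64 produces):

fn main() {
    // Print more digits than the default to expose the stored values.
    println!("{:.20}", 0.1_f64);       // 0.10000000000000000555
    println!("{:.20}", 0.2_f64);       // 0.20000000000000001110
    println!("{:.20}", 0.1_f64 + 0.2); // 0.30000000000000004441
    println!("{:.20}", 0.3_f64);       // 0.29999999999999998890
}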

1 Like

The truth is that this surprising result is correct from the perspective of floating-point numbers:

https://0.30000000000000004.com/

6 Likes

What Every Computer Scientist Should Know About Floating-Point Arithmetic

8 Likes

https://members.accu.org/index.php/articles/1558 has an easy-to-follow description of what's happening internally with the FP representation.

1 Like

Very interesting.
Thx.

This sentiment is simply not true. If you're writing high-performance numeric code, as in many applications of scientific computing, you may often find that comparing floating-point values for equality is entirely appropriate. Obviously, when doing so, one must be fully aware of the floating-point representation's implementation details.

3 Likes

It can definitely make sense to compare floats for equality. For example, maybe you're repeatedly updating the float to larger values, and you want to know if you've made an improvement since some other time. Then the float equality is fine because it is the exact same float.
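A minimal sketch of that pattern (score and the loop bounds are made up for illustration):

// Hypothetical objective function, just for the sketch.
fn score(step: u32) -> f64 {
    (step as f64 * 0.7).sin()
}

fn main() {
    let mut best = f64::NEG_INFINITY;
    let mut snapshot = f64::NEG_INFINITY;

    for step in 0..100 {
        let candidate = score(step);
        if candidate > best {
            best = candidate;
        }
        if step == 50 {
            snapshot = best; // remember the best value at this point
        }
    }

    // Exact == is fine here: if no improvement happened after the
    // snapshot, `best` is bit-for-bit the same float we copied earlier.
    if best == snapshot {
        println!("no improvement since the snapshot");
    }
}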

1 Like

I can't see how it's possible that comparing IEEE floats for equality, with ==, can ever be relied on.

Why?

Because operations on floats are not commutative: i.e. a × b may not equal b × a.

Because operations on floats are not associative: i.e. (a + b) + c may not equal a + (b + c).

And a bunch of other reasons... In general, rounding errors make things that one would expect to be equal come out unequal. See The Floating-Point Guide - Comparison. The same value calculated via different methods may not come out the same.
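The associativity point is easy to demonstrate (a minimal sketch with values picked to force the rounding):

fn main() {
    let (a, b, c) = (1e100_f64, -1e100, 1.0);
    // (a + b) + c = 0.0 + 1.0 = 1.0, but in a + (b + c) the 1.0 is
    // swallowed: b + c rounds back to -1e100, so the sum is 0.0.
    assert_ne!((a + b) + c, a + (b + c));
}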

Presumably the test one wants in that kind of example is < or >, not ==.

About the only time I can see a test for equality being required is in validation. Does this new floating-point unit produce the exact same output bits for the same input bits as some other floating-point unit?

2 Likes

You may want to check if a value has changed to decide whether or not to repeat some expensive calculation, for example. In this case, you can == compare a cached copy of the input value with the current one.
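A minimal sketch of that caching pattern (the Cache struct and expensive are made up for illustration; note that a NaN input would never hit the cache, since NaN != NaN):

struct Cache {
    last_input: Option<f64>,
    last_output: f64,
}

impl Cache {
    fn get(&mut self, input: f64) -> f64 {
        // Exact == is correct here: we only skip the work when the
        // input is bit-for-bit the same value we saw last time.
        if self.last_input == Some(input) {
            return self.last_output;
        }
        self.last_output = expensive(input);
        self.last_input = Some(input);
        self.last_output
    }
}

// Stand-in for some expensive calculation.
fn expensive(x: f64) -> f64 {
    x.sqrt().sin()
}

fn main() {
    let mut cache = Cache { last_input: None, last_output: 0.0 };
    let first = cache.get(2.0);  // computed
    let second = cache.get(2.0); // served from the cache
    assert!(first == second);
}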

5 Likes

The approx crate is an option.

Floating point addition and multiplication are commutative (but not associative, as you say).

2 Likes

Are they? I mean, NaN + NaN and NaN * NaN both produce NaN, which is clearly not equal to NaN. :wink:
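For ordinary values the commutativity does hold bit-for-bit; it's only NaN that == can never confirm (a minimal sketch):

fn main() {
    // Addition commutes for ordinary values, and == can show it:
    assert!(0.1_f64 + 0.2 == 0.2 + 0.1);
    // NaN + NaN is NaN on both sides, but NaN compares unequal to
    // everything, including itself:
    let nan = f64::NAN;
    assert!(nan + nan != nan + nan);
}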

9 Likes

OK, that sounds like a reasonable use of floating point ==.

I'm not convinced.