The author says that this code compiles (true) and runs without error, but I get:
thread 'main' panicked at 'assertion failed: 0.1 + 0.2 == 0.3', r448.rs:3:5
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
The author also says that under the hood Rust uses the epsilon constant for the comparison, so there is no error.
If you run Clippy over the code, you get an error (the `float_cmp` lint is deny-by-default):
error: strict comparison of `f32` or `f64`
--> src/main.rs:3:13
|
3 | assert!(0.1 + 0.2 == 0.3)
| ^^^^^^^^^^^^^^^^ help: consider comparing them within some margin of error: `(0.1 + 0.2 - 0.3).abs() < error_margin`
|
= note: `#[deny(clippy::float_cmp)]` on by default
= note: `f32::EPSILON` and `f64::EPSILON` are available for the `error_margin`
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#float_cmp
For reference: you can always run Clippy easily by choosing it in the “Tools” menu here: Rust Playground
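Following Clippy's suggestion, a minimal fix looks like this (using `f64::EPSILON` as the margin, which is reasonable here because the values are near 1.0):

```rust
fn main() {
    let sum = 0.1_f64 + 0.2_f64;
    // Strict equality fails: the f64 sum is 0.30000000000000004.
    assert!(sum != 0.3);
    // Compare within a margin instead, as Clippy suggests.
    assert!((sum - 0.3).abs() < f64::EPSILON);
}
```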
I can’t confirm for sure that the claim in the book is false, but I’ve never heard of such behavior, and Clippy warns against exactly this kind of comparison.
It would be quite problematic if Rust did do this, because it is almost never correct to use a constant value for float error margins. E.g., if you're doing a calculation using very small numbers, the large value of epsilon will be too generous, and if you're doing a calculation with large numbers, it will be too strict. {f32,f64}::EPSILON should only be used when your calculation is performed with values that lie near 1.0. Otherwise, you'll want to calculate a relative error margin.
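To illustrate, here's a sketch of a relative comparison (the names `approx_eq` and `rel_tol` are illustrative, not a standard API; 1e16 is chosen because the gap between neighboring f64s there is 2.0):

```rust
// Scale the tolerance by the magnitude of the inputs instead of
// using a fixed absolute margin.
fn approx_eq(a: f64, b: f64, rel_tol: f64) -> bool {
    let diff = (a - b).abs();
    let largest = a.abs().max(b.abs());
    diff <= rel_tol * largest
}

fn main() {
    let a = 1.0e16_f64;
    let b = 1.0e16_f64 + 2.0; // the next representable f64 above 1e16
    // An absolute EPSILON margin is far too strict at this magnitude...
    assert!(!((a - b).abs() < f64::EPSILON));
    // ...but a relative margin recognizes they are within one ulp.
    assert!(approx_eq(a, b, f64::EPSILON));
}
```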
In summary: The author of your book is incorrect. 0.1, 0.2, and 0.3 do not have exact representations in fpN for any N (e.g., fp32 or fp64), so computations involving them are generally not exact.
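You can see the inexact stored values by printing extra digits:

```rust
fn main() {
    // None of these decimal literals is exactly representable in binary
    // floating point; printing extra digits exposes the stored values.
    println!("{:.20}", 0.1_f64); // 0.10000000000000000555
    println!("{:.20}", 0.2_f64);
    println!("{:.20}", 0.3_f64);
    println!("{:.20}", 0.1_f64 + 0.2_f64);
    assert_ne!(0.1_f64 + 0.2_f64, 0.3_f64);
}
```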
This sentiment is simply not true. If you're writing high-performance numeric code, as in many applications of scientific computing, you may often find that comparing floating point values for equality is entirely appropriate. Obviously when doing so, one must be fully aware of the implementation details of the floating-point representation.
It can definitely make sense to compare floats for equality. For example, maybe you're repeatedly updating the float to larger values, and you want to know if you've made an improvement since some other time. Then the float equality is fine because it is the exact same float.
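A small sketch of that pattern (the score values are arbitrary): comparing against a stored copy of the same float is exact, because no new rounding occurs.

```rust
fn main() {
    let mut best = f64::NEG_INFINITY;
    let scores = [0.3, 0.7, 0.7, 0.5];
    let mut improved_at = Vec::new();
    for (i, &s) in scores.iter().enumerate() {
        if s > best {
            best = s;
            improved_at.push(i);
        }
    }
    // The second 0.7 compares exactly equal to the stored best,
    // so it is not counted as an improvement.
    assert_eq!(improved_at, vec![0, 1]);
    assert_eq!(best, 0.7);
}
```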
I can't see how it's possible that comparing IEEE floats for equality, with ==, can ever be relied on.
Why?
Because operations on floats are not distributive: i.e. a × (b + c) may not equal a × b + a × c.
Because operations on floats are not associative: i.e. (a + b) + c may not equal a + (b + c).
And a bunch of other reasons... In general, rounding errors make things unequal that one would expect to be equal. See The Floating-Point Guide - Comparison. The same value calculated via different methods may not come out the same.
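Non-associativity is easy to demonstrate; a small sketch (1e16 is chosen because the gap between neighboring f64s there is 2.0, so adding 1.0 is absorbed):

```rust
fn main() {
    // Adding 1.0 to 1e16 twice is absorbed each time, while adding
    // (1.0 + 1.0) first produces a representable, larger result.
    let left = (1.0e16_f64 + 1.0) + 1.0;
    let right = 1.0e16_f64 + (1.0 + 1.0);
    assert_ne!(left, right);
    assert_eq!(right - left, 2.0);
}
```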
Presumably the test one wants in that kind of example is < or >, not ==.
About the only time I can see a test for equality being required is in validation. Does this new floating point unit produce the exact same output bits for the same input bits as some other floating point unit?
You may want to check if a value has changed to decide whether or not to repeat some expensive calculation, for example. In this case, you can use == to compare a cached copy of the input value with the current one.
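A minimal sketch of that caching idea (`Cache` and `expensive` are hypothetical names; `expensive` stands in for real work). The == on the cached input is reliable here because both sides are exact bit-for-bit copies of the same value:

```rust
fn expensive(x: f64) -> f64 {
    x * x // placeholder for a costly computation
}

struct Cache {
    last: Option<(f64, f64)>, // (input, output)
    calls: u32,               // counts actual recomputations
}

impl Cache {
    fn get(&mut self, x: f64) -> f64 {
        if let Some((input, output)) = self.last {
            // Exact copy of an earlier input: == is reliable here.
            if x == input {
                return output;
            }
        }
        self.calls += 1;
        let y = expensive(x);
        self.last = Some((x, y));
        y
    }
}

fn main() {
    let mut c = Cache { last: None, calls: 0 };
    c.get(0.1);
    c.get(0.1); // same bits: served from cache, no recomputation
    assert_eq!(c.calls, 1);
    c.get(0.2); // input changed: recompute
    assert_eq!(c.calls, 2);
}
```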