How are f64 literals handled by the compiler?


As far as I understand, the reason that this gist fails: Rust Playground

is that the compiler somehow doesn't use f64 to store the literals I am using in the actual assignment?

I'm really not sure why this test fails, because to me it feels like it should pass (i.e. doing an operation manually with literals, or writing the result directly as a literal, should give the same result within T::EPSILON).

Can someone explain to me why this test fails, please? I'm at a loss.
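The playground itself isn't shown here, but using the literals mentioned later in the thread (22.4, 14.4, 8.0), one arrangement that reproduces the failure is comparing a subtraction of literals against the exact literal 8.0 with an absolute EPSILON tolerance. A hedged, std-only reconstruction:

```rust
// Hypothetical reconstruction of the failing assertion; the actual
// playground code may differ, but the literals come from later posts.
fn main() {
    let computed: f64 = 22.4 - 14.4; // each literal is rounded to the nearest f64
    let expected: f64 = 8.0;         // 8.0 is exactly representable

    // The subtraction itself is exact here, but the rounded literals
    // already carry error larger than EPSILON at this magnitude:
    let abs_err = (computed - expected).abs();
    println!("abs_err = {abs_err:e}");        // about 1.78e-15
    println!("EPSILON = {:e}", f64::EPSILON); // about 2.22e-16

    // So an absolute-tolerance check against EPSILON fails:
    assert!(abs_err > f64::EPSILON);
}
```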

Why should it necessarily be within EPSILON of the literal?

Maybe it shouldn't, but in that case, is there any specification for the error I can expect from this?

I believe that literals and simple operations such as plus and minus are guaranteed to produce the floating-point value closest to the true answer.
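That guarantee (each literal and each individual operation is correctly rounded) is exactly why two different routes to "the same" number can still disagree: the rounding happens per step, and the errors need not cancel. The classic self-contained illustration:

```rust
fn main() {
    // Each literal is the f64 nearest its decimal value, and a single
    // addition is correctly rounded: the result is the f64 nearest the
    // exact sum of the (already rounded) operands.
    let a = 0.1_f64 + 0.2; // nearest f64 to (round(0.1) + round(0.2))
    let b = 0.3_f64;       // nearest f64 to 0.3

    // Those are two different rounding targets, so they need not match:
    assert_ne!(a, b);
    println!("{a:.17} vs {b:.17}");
}
```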

Thanks for the answer, I'll try to look into specs if I find them, now I'm curious

Turns out my initial Rust Playground was pretty bad methodology anyway: I should have compared the relative error to EPSILON instead of the absolute one. With that change, all is good in the assert! and in my mind.
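A sketch of that relative-error version, under the same assumed literals as before (for this pair the relative error lands at about one EPSILON, so a small multiplier is enough):

```rust
fn main() {
    let computed = 22.4_f64 - 14.4;
    let expected = 8.0_f64;

    // Scale the tolerance to the magnitude of the values being compared,
    // instead of using the fixed absolute spacing near 1.0.
    let rel_err = ((computed - expected) / expected).abs();
    println!("rel_err / EPSILON = {}", rel_err / f64::EPSILON);

    // A relative-tolerance assert with a small multiplier passes:
    assert!(rel_err <= 4.0 * f64::EPSILON);
}
```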

Generally, the problem you're running into is that the absolute precision of a floating-point value halves every time the value crosses a power of two, so values between 16 and 32 have half the precision of values between 8 and 16. Hence, the 22.4 literal is going to be off by approximately twice what the 14.4 literal is off by. (The literal 8.0 is exact.)

The value of EPSILON is the precision of numbers between 1 and 2.
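You can observe that spacing directly: for positive finite f64s, adding 1 to the bit pattern gives the next representable value, so the gap (ulp) at any magnitude is one bit-increment away. A small sketch (the `ulp` helper is mine, not std):

```rust
// Gap from x to the next representable f64 above it.
// Only valid for positive, finite, non-MAX inputs.
fn ulp(x: f64) -> f64 {
    f64::from_bits(x.to_bits() + 1) - x
}

fn main() {
    // The gap doubles at each power of two...
    assert_eq!(ulp(16.0), 2.0 * ulp(8.0));
    // ...so a value near 22.4 has twice the spacing of one near 14.4:
    assert_eq!(ulp(22.4), 2.0 * ulp(14.4));
    // EPSILON is exactly the spacing just above 1.0:
    assert_eq!(ulp(1.0), f64::EPSILON);
    println!("ulp(1.0) = {:e}, ulp(22.4) = {:e}", ulp(1.0), ulp(22.4));
}
```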


Check out the float_eq crate for more info and good fuzzy equality -- you're looking for ulps <= 1.
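A minimal hand-rolled sketch of the idea behind a ULPs comparison (the float_eq crate handles negatives, NaNs, and infinities properly; this toy version assumes finite values of the same sign):

```rust
// Distance between two f64s counted in representable steps (ULPs).
// For positive finite f64s the bit patterns are ordered like the values,
// so the step count is just a bit-pattern subtraction.
fn ulps_diff(a: f64, b: f64) -> u64 {
    a.to_bits().abs_diff(b.to_bits())
}

fn main() {
    let computed = 22.4_f64 - 14.4;
    // The absolute error looked large next to EPSILON, but the values are
    // only a couple of representable steps apart (two rather than one,
    // because 8.0 sits at a power-of-two boundary where spacing changes):
    assert!(ulps_diff(computed, 8.0) <= 2);
    println!("ulps apart: {}", ulps_diff(computed, 8.0));
}
```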
