# Why does println! print different outputs for f32 and f64?

Given this code:

```rust
let a: f64 = 0.1 + 0.2;
let b: f32 = 0.1 + 0.2;
let c: f32 = 0.30000000000000004;
let d: f32 = 0.300000012;

println!("{}", a); // 0.30000000000000004
println!("{}", b); // 0.3
println!("{}", c); // 0.3
println!("{}", d); // 0.3
```

Why does Rust only print the full floating point value (`0.30000000000000004`) for `f64`?

I know that in memory, variables `a` and `b` will not be exactly equal to 0.3. But I can't understand why Rust prints as it is in memory for `f64`, but for `f32` it uses a different approach and only prints `0.3`.

As I understand it, Rust rounds floats to as few digits as possible while still being able to round-trip to the original value. In the cases of `b`, `c`, and `d`, the in-memory representation of the float is close enough to 0.3 that it can be displayed as 0.3 without losing any precision. For `a`, however, there is a slight rounding error making it just a tiny bit larger, such that 0.3 would parse to a different float value, and so the extra decimals have to be shown.
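That round-trip behaviour can be checked directly by parsing the strings back and comparing bit patterns; a small sketch using only the standard library:

```rust
fn main() {
    // In f32, the string "0.3" parses back to the exact bits of 0.1 + 0.2,
    // so Display can print just "0.3" and still round-trip.
    let sum32 = 0.1f32 + 0.2f32;
    assert_eq!("0.3".parse::<f32>().unwrap().to_bits(), sum32.to_bits());

    // In f64, "0.3" parses to a *different* value than 0.1 + 0.2,
    // so Display has to emit more digits to round-trip.
    let sum64 = 0.1f64 + 0.2f64;
    assert_ne!("0.3".parse::<f64>().unwrap().to_bits(), sum64.to_bits());

    println!("{} {}", sum32, sum64); // 0.3 0.30000000000000004
}
```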


You might also like this visualisation of the raw float bits to see why 0.1 + 0.2 gives 0.3 as `f32`, but not as `f64`: Rust Playground


Here's my favourite site for understanding this stuff:
- `0.3_f32`: https://float.exposed/0x3e99999a
- `0.3_f64`: https://float.exposed/0x3fd3333333333333
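The hex bit patterns in those links can be reproduced in Rust with `to_bits`; a quick check:

```rust
fn main() {
    // These hex values match the float.exposed URLs above.
    println!("{:#010x}", 0.3f32.to_bits()); // 0x3e99999a
    println!("{:#018x}", 0.3f64.to_bits()); // 0x3fd3333333333333
}
```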

## In `f32`

The closest `f32`s to 0.1, 0.2, and their sum in the reals are exactly

```text
  1.00000001490116119384765625 ×10⁻¹
+ 2.0000000298023223876953125  ×10⁻¹
------------------------------------
  3.00000004470348358154296875 ×10⁻¹
```

Whereas the `f32` closest to 0.3, and the values immediately before and after it, are exactly

```text
prev: 2.999999821186065673828125 ×10⁻¹
0.3:  3.00000011920928955078125  ×10⁻¹
next: 3.000000417232513427734375 ×10⁻¹
```

Of those three possibilities, the sum is closest to the one that can reasonably be displayed as "0.3", so that's what you see.

(If you ask Rust to show that next value, perhaps via `0.3_f32.next_up()` on nightly, then you'll see that it displays as "0.30000004", which is just enough digits to distinguish it from the previous `f32`.)
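On stable Rust, that same neighbour can be constructed by bumping the bit pattern directly; a sketch (valid here because the value is positive and finite):

```rust
fn main() {
    // One ULP above 0.3_f32: increment the raw bit pattern.
    let next = f32::from_bits(0.3f32.to_bits() + 1);
    println!("{}", next); // 0.30000004
}
```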

## In `f64`

The numbers are longer, but we can do the same thing

```text
  1.000000000000000055511151231257827021181583404541015625 ×10⁻¹
+ 2.00000000000000011102230246251565404236316680908203125  ×10⁻¹
----------------------------------------------------------------
  3.000000000000000166533453693773481063544750213623046875 ×10⁻¹
```

But whereas in `f32` 0.3 also rounded up to get to a float, in `f64` it rounds down, and those three under-½ULP-differences end up being just enough to matter:

```text
prev: 2.9999999999999993338661852249060757458209991455078125  ×10⁻¹
0.3:  2.99999999999999988897769753748434595763683319091796875 ×10⁻¹
next: 3.000000000000000444089209850062616169452667236328125   ×10⁻¹
```

It turns out the sum above is exactly the midway point between `0.3_f64` and the next value, so bankers' rounding picks the one with a zero as the last bit, which turns out to be the higher one. So you see "0.30000000000000004", the decimal with the fewest significant figures that parses to that exact `f64` value.
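That claim — the sum lands exactly one ULP above `0.3_f64` — can be verified by comparing bit patterns; a small check:

```rust
fn main() {
    let sum = 0.1f64 + 0.2f64;
    // Ties-to-even rounding sends the exact sum to the neighbour one ULP above 0.3.
    assert_eq!(sum.to_bits(), 0.3f64.to_bits() + 1);
    println!("{}", sum); // 0.30000000000000004
}
```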


Or, to answer this question directly, because that's way too much precision to fit in an `f32`.

The neighbouring representable values are `0.300000011920928955078125` and `0.3000000417232513427734375`, a gap of about 3e-8, so your attempt to adjust the value of the `f32` by 4e-17 can't possibly do anything.

You get about 7 decimal sigfigs with `f32`, and about 16 decimal sigfigs with `f64`. If you try to use `0.300000000000000004_f64` you'll see that it's also trying to use too much precision, so doesn't actually differ from `0.3_f64`.
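Both "too much precision" cases can be seen with literals alone; a quick check (the digit counts are chosen to exceed each type's precision):

```rust
fn main() {
    // ~7 significant decimal digits survive in f32 ...
    assert_eq!(0.30000000000000004_f32, 0.3_f32);
    // ... and ~16 in f64, so this 18-digit literal rounds back to 0.3.
    assert_eq!(0.300000000000000004_f64, 0.3_f64);
    println!("both collapse to 0.3");
}
```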


Here is how it looks in binary.

```text
0.1 = 0.0(0011)
0.2 = 0.(0011)
0.3 = 0.01(0011)
```

(the part in parentheses recurring)
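The truncated-and-rounded patterns can be inspected by printing the raw bit layout (1 sign bit, 8 exponent bits, 23 fraction bits for `f32`); a small sketch:

```rust
fn main() {
    // The fraction field holds the repeating 1100... pattern, cut off and rounded.
    println!("{:032b}", 0.1f32.to_bits());
    println!("{:032b}", 0.2f32.to_bits());
    println!("{:032b}", 0.3f32.to_bits());
}
```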

`f32` has 24 bits of precision:

```text
0.1 = 0.000110011001100110011001101
0.2 = 0.00110011001100110011001101
      -----------------------------
      0.010011001100110011001100111
after rounding:
sum = 0.0100110011001100110011010
0.3 = 0.0100110011001100110011010
```

So in `f32`, 0.1 + 0.2 = 0.3

`f64` has 53 bits of precision:

```text
0.1 = 0.00011001100110011001100110011001100110011001100110011010
0.2 = 0.0011001100110011001100110011001100110011001100110011010
      ----------------------------------------------------------
      0.01001100110011001100110011001100110011001100110011001110
after rounding:
sum = 0.010011001100110011001100110011001100110011001100110100
0.3 = 0.010011001100110011001100110011001100110011001100110011
```

So in `f64`, 0.1 + 0.2 != 0.3
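Summing up the two walkthroughs, the net effect is a one-line difference between the types:

```rust
fn main() {
    // Exact float comparison is deliberate here: it is the point of the example.
    assert!(0.1f32 + 0.2f32 == 0.3f32); // rounds to the same f32
    assert!(0.1f64 + 0.2f64 != 0.3f64); // lands one ULP above 0.3_f64
    println!("f32: equal, f64: not equal");
}
```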


There's an error in this playground. You might wanna change the `3.0f64` to `0.3f64`.

My bad. Thanks for catching this!

