Here's my favourite site for understanding this stuff:

0.3_f32 https://float.exposed/0x3e99999a

0.3_f64 https://float.exposed/0x3fd3333333333333

## In `f32`

The closest `f32`s to 0.1, 0.2, and their sum in the reals are *exactly*

```
1.00000001490116119384765625×10⁻¹
+ 2.0000000298023223876953125 ×10⁻¹
------------------------------------
3.00000004470348358154296875×10⁻¹
```
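You can reproduce those exact expansions in Rust itself: asking for enough decimal places prints a float's stored value exactly, and since each `f32` addend has a 24-bit significand, their real-number sum fits exactly in an `f64` (a quick sanity-check sketch, not part of the original argument):

```rust
fn main() {
    // Printing with a precision that exhausts the exact decimal expansion
    // shows the stored value exactly (trailing digits would be zeros).
    println!("{:.27}", 0.1_f32); // 0.100000001490116119384765625
    println!("{:.27}", 0.2_f32); // 0.200000002980232238769531250

    // The sum of two 24-bit significands needs at most 25 bits,
    // so computing it in f64 (53-bit significand) is exact:
    let exact_sum = 0.1_f32 as f64 + 0.2_f32 as f64;
    println!("{:.27}", exact_sum); // 0.300000004470348358154296875
}
```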

Whereas the f32 closest to 0.3 and the values before and after it are exactly

```
prev: 2.999999821186065673828125×10⁻¹
0.3: 3.00000011920928955078125 ×10⁻¹
next: 3.000000417232513427734375×10⁻¹
```

Of those three possibilities, the one that can be reasonably displayed as "0.3" is closest, so that's what you see.
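That means the famous `0.1 + 0.2 != 0.3` surprise doesn't even happen in `f32` — the rounded sum lands on the same bit pattern as the literal:

```rust
fn main() {
    // In f32 the sum rounds to exactly the same value as the 0.3 literal...
    assert_eq!(0.1_f32 + 0.2_f32, 0.3_f32);

    // ...so the shortest round-tripping decimal that gets displayed is "0.3".
    println!("{}", 0.1_f32 + 0.2_f32); // prints 0.3
}
```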

(If you ask Rust to show that next value, perhaps via `0.3_f32.next_up()` in nightly, then you'll see that it displays as "0.30000004", which is just enough digits to distinguish it from the previous f32.)
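If you'd rather not depend on `next_up` being available, you can step to the adjacent float by hand — for a positive, finite `f32`, incrementing the bit pattern gives the next value up (a sketch of the same demonstration):

```rust
fn main() {
    // For a positive, finite float, the next representable value up
    // is simply the current bit pattern plus one.
    let next = f32::from_bits(0.3_f32.to_bits() + 1);
    println!("{}", next); // 0.30000004
}
```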

## In `f64`

The numbers are longer, but we can do the same thing

```
1.000000000000000055511151231257827021181583404541015625×10⁻¹
+ 2.00000000000000011102230246251565404236316680908203125 ×10⁻¹
----------------------------------------------------------------
3.000000000000000166533453693773481063544750213623046875×10⁻¹
```
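The addends here can be checked the same way as before — 55 decimal places is enough to exhaust both exact expansions (a small verification sketch, not part of the original argument):

```rust
fn main() {
    // Exact decimal expansions of the stored f64 values:
    println!("{:.55}", 0.1_f64);
    // 0.1000000000000000055511151231257827021181583404541015625
    println!("{:.55}", 0.2_f64);
    // 0.2000000000000000111022302462515654042363166809082031250
}
```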

But whereas in `f32` 0.3 *also* rounded up to get to a float, in `f64` it rounds *down*, and those three under-½-ULP differences end up being *just* enough to matter:

```
prev: 2.9999999999999993338661852249060757458209991455078125 ×10⁻¹
0.3: 2.99999999999999988897769753748434595763683319091796875×10⁻¹
next: 3.000000000000000444089209850062616169452667236328125 ×10⁻¹
```

It turns out the sum above is *exactly* the midway point between `0.3_f64` and the next value, so it does bankers' rounding to the one whose last bit is zero, which turns out to be the higher one. So you see "0.30000000000000004", the decimal with the fewest significant figures that parses to that exact f64 value.
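That one-ULP outcome is easy to confirm: the rounded sum's bit pattern sits exactly one above the `0.3` literal's (a quick check of the claim above):

```rust
fn main() {
    // In f64 the rounded sum is NOT the f64 nearest to 0.3...
    assert_ne!(0.1_f64 + 0.2_f64, 0.3_f64);

    // ...it's the neighbour exactly one ULP above it:
    assert_eq!((0.1_f64 + 0.2_f64).to_bits(), 0.3_f64.to_bits() + 1);

    println!("{}", 0.1_f64 + 0.2_f64); // prints 0.30000000000000004
}
```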