Why does this conversion work?

Converting an f64 to an f32 like this fails, as expected:

let a: f64 = 1.0;
let b: f32 = f32::from(a);

[E0277] Error: the trait bound `f32: From<f64>` is not satisfied
   ╭─[command_354:1:1]
   │
 2 │ let b: f32 = f32::from(a);
   ·              ────┬──── ┬  
   ·                  ╰──────── required by a bound introduced by this call
   ·                        │  
   ·                        ╰── the trait `From<f64>` is not implemented for `f32`
───╯

But this can work, why?

f32::from(1.0)

The literal is inferred to be f32, so the call works.


Because the literal doesn't have a type per se; it is just a floating-point literal. Type inference happens in the expression f32::from(1.0), not in the literal on its own. Inference knows that a floating-point literal can be typed as either f32 or f64. It also knows that there is an impl of From<f32> for f32 (this comes from the standard library's blanket impl `impl<T> From<T> for T`, which every type gets). Hence, it infers the type of 1.0 to be f32.
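A minimal sketch of the inference behavior described above. The working call is just the reflexive From impl; forcing the literal to f64 with a suffix reproduces the original E0277 error, and an explicit `as` cast is one way to actually narrow an f64:

```rust
fn main() {
    // The literal's type is not fixed up front; inference resolves it to
    // f32 because the blanket impl `impl<T> From<T> for T` supplies
    // `From<f32> for f32`.
    let b: f32 = f32::from(1.0);
    assert_eq!(b, 1.0_f32);

    // Pinning the literal to f64 reintroduces the original error:
    // let b: f32 = f32::from(1.0_f64); // E0277: `f32: From<f64>` not satisfied

    // To actually narrow an f64, use an explicit (potentially lossy) cast:
    let a: f64 = 1.0;
    let c = a as f32;
    assert_eq!(c, 1.0_f32);
}
```

Note that `f32: From<f64>` is deliberately unimplemented because the conversion can lose precision, which is why the explicit `as` cast (or a crate offering checked conversions) is required for f64 → f32.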
