Type inference on default type parameter

enum Foo<T, U = T> {
    Left(T),
    Right(U),
}

fn main() {
    let foo: Foo<_> = Foo::Right(2.0);
}

For the code above, I can't understand why it compiles. In my understanding, the compiler should not assume that T equals U, which is f64.

If I change the code to either version below, compilation fails as expected:

let foo = Foo::Right(2.0);
let foo: Foo<_, _> = Foo::Right(2.0);


You can think of Foo<T, U = T> as defining two types with the same name, but with a different number of arguments (“arity”):

enum Foo<T, U> { … }
type Foo<T> = Foo<T, T>;

(Pseudo code for demonstration. Of course, doing such overloading manually in Rust doesn’t actually work.)

This explains why Foo<_> makes the compiler assume both of the parameters are equal: You used the unary version of the type constructor, that by definition is always a type of the form Foo<T, T> (with both parameters being equal), whereas Foo<_, _> uses the binary version, leaving both parameters independently unspecified.
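To see the difference concretely, here is a sketch (using Left/Right variants matching the error output quoted below): the binary form leaves the parameters independent, while the unary form forces them to be equal.

```rust
enum Foo<T, U = T> {
    Left(T),
    Right(U),
}

fn right_value() -> f64 {
    // Binary version: `T` and `U` are inferred independently,
    // so this is `Foo<&str, f64>` and compiles fine.
    let a: Foo<&str, _> = Foo::Right(2.0);
    // Unary version: `Foo<&str>` means `Foo<&str, &str>`, so the
    // line below would fail with “expected `&str`, found `f64`”:
    // let b: Foo<&str> = Foo::Right(2.0);
    match a {
        Foo::Left(_) => 0.0,
        Foo::Right(x) => x,
    }
}

fn main() {
    assert_eq!(right_value(), 2.0);
}
```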


On second read… is your issue that T is “inferred” from U instead of the other direction? That’s just how type inference works. It’s never unidirectional. Two types being constrained to be equal always allows inference to go either way. On that note, U = T is not a mere “hint” of the form “if U is unknown, but T is known, then just set U to T”. Such conditions are too hard to express in a way compatible with the type inference algorithm. Interestingly enough, Rust does (somewhat) support fallback reasoning for integer and float types (to i32 and f64, respectively), but so far, there’s no support beyond this.
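That fallback behavior is easy to observe (a minimal sketch; the byte sizes via std::mem::size_of_val are just a cheap way to see which type was chosen):

```rust
fn main() {
    // No other constraints apply, so the literals fall back to
    // the default numeric types: i32 and f64.
    let a = 1;
    let b = 2.0;
    assert_eq!(std::mem::size_of_val(&a), 4); // i32: 4 bytes
    assert_eq!(std::mem::size_of_val(&b), 8); // f64: 8 bytes
}
```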

Instead, as explained above, Foo<T, U = T> means, if you write Foo<…> with a single argument, it always equates the two parameters. If either is known, the other is, too. If type inference then also figures out the parameters must be different, there’s a compilation error:

enum Foo<T, U = T> {
    Left(T),
    Right(U),
}

fn main() {
    let mut foo: Foo<_> = Foo::Right(2.0); // <- the unary version of `Foo` is annotated, so the type params must be equal
    foo = Foo::Left("hello"); // types aren’t equal
}
error[E0308]: mismatched types
 --> src/main.rs:8:21
  |
8 |     foo = Foo::Left("hello");
  |           --------- ^^^^^^^ expected floating-point number, found `&str`
  |           |
  |           arguments to this enum variant are incorrect
  |
help: the type constructed contains `&'static str` due to the type of the argument passed
 --> src/main.rs:8:11
  |
8 |     foo = Foo::Left("hello");
  |           ^^^^^^^^^^-------^
  |                     |
  |                     this argument influences the type of `Foo`
note: tuple variant defined here
 --> src/main.rs:2:5
  |
2 |     Left(T),
  |     ^^^^

Other typical examples of type inference going in both/any direction(s) are code like

fn main() {
    let x = Default::default();
    let y: u8 = x; // <- this influences the type of x
}

where the usage of x determines its type, even though the usage comes later in the code. The let y: u8 = x thus determines that the previous line called u8::default(), and not any other type's Default implementation.
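Another everyday example of the same backwards flow (a sketch, not from the thread): with collect, the annotated result type decides which FromStr implementation parse calls.

```rust
// The return type annotation is the only thing that tells both
// `collect` and `parse` which types to produce.
fn parse_all(input: &[&str]) -> Vec<u32> {
    input
        .iter()
        .map(|s| s.parse().unwrap()) // inferred as `u32`'s FromStr
        .collect()
}

fn main() {
    assert_eq!(parse_all(&["1", "2"]), vec![1, 2]);
}
```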


Just to re-emphasize this, that's what makes it type inference, in my book.

A type system where type information only flows one way (like auto in C++, or var in C#) is a fundamentally different thing from "proper" type inference.


I think discussion of what is “proper” type inference would quickly devolve into a no-true-Scotsman debate.

At least type inference in C++ is powerful enough to deduce that if you say something satisfies Foo, it does indeed satisfy Foo. In Rust, something like this may not compile:

trait Foo: Sized where i32: From<Self> {} // (Foo's definition wasn't quoted; assume it carries a `where` bound like this)

trait Bar {}
impl<T: Foo> Bar for T {} // error: the trait bound `i32: From<T>` is not satisfied

In Rust… nope, it's not guaranteed. And you can get different functions called if you change the order of requirements in generics.

What's better: type inference which can reliably go forward, or the one which may also go back, but only if the stars are aligned just right?

I didn't say type deduction is bad. It's just different. Design choices exist.

A typeof(expr) construct works much better in a language with deduction but not inference, for example.

This is simply… well, it’s how where bounds on traits work! In order to be allowed even to ask the question whether T: Foo, you must fulfill all where bounds (except for supertrait bounds; supertrait bounds are special/different). This is similar to how with a struct

struct Foo<T>(T) where i32: From<T>;

in order to be allowed to even mention the type Foo<T> you must fulfill the i32: From<T> bound. Or compare the natural in-between: a (non-Self) type parameter on a trait.

trait Foo<T>: Sized where i32: From<T> {}

trait Bar<T> {}
impl<T, S: Foo<T>> Bar<T> for S {}

same issue.
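One way to make that last example compile (a sketch under these assumptions; is_bar is a hypothetical helper, not from the thread) is to repeat the trait's where bound on the blanket impl, so that asking S: Foo<T> becomes well-formed:

```rust
trait Foo<T>: Sized where i32: From<T> {}

trait Bar<T> {}

// Repeating the trait's `where` bound on the impl makes the
// question `S: Foo<T>` well-formed, so this version compiles.
impl<T, S: Foo<T>> Bar<T> for S where i32: From<T> {}

impl Foo<u8> for () {} // fine: `i32: From<u8>` holds

// Hypothetical helper to observe that `()` got `Bar<u8>`
// through the blanket impl.
fn is_bar<T, S: Bar<T>>(_: &S) -> bool where i32: From<T> {
    true
}

fn main() {
    assert!(is_bar::<u8, ()>(&()));
}
```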

The order-dependence issue you linked on the other hand is pretty funny.

But I'm not asking. I'm asserting: T is Foo. Why is my assertion not enough?

And that's why it's marked as a bug, not a feature request.

And where is that documented?

And what happened to “true type inference” and its ability to “go back”?

That I can agree with.

I don't think so. In a language which does proper Prolog-style type inference, the use of typeof(expr) would just add one more rule for the resolver to consider, and shouldn't affect much.

In a language where the resolver doesn't start with logical predicates and then resolve everything with guaranteed detection of ambiguity, but instead uses one fixed, undocumented way of doing type inference… in such a language, typeof(expr) would indeed be problematic.
