DRYing nearly identical implementations for &T and &mut T

What if I define a dereference trait, something like:

trait Dereference {
    type ValueType;
    fn deref(self) -> Self::ValueType;
}

impl<T> Dereference for T {
    type ValueType = T;
    fn deref(self) -> Self::ValueType {
        self
    }
}

impl<'a, T> Dereference for &mut T {
    type ValueType = T;
    fn deref(self) -> Self::ValueType {
        *self
    }
}

fn test<T>(t : T, s : T) where T : Dereference, <T as Dereference>::ValueType : AddAssign<T::ValueType> {
    let mut u = t.deref();
    u += s.deref();
}

There is a trait that dereferences a reference, with an associated type for the dereferenced type, and dereferencing a non-reference variable is defined as the identity function. This looks like it should work, except it causes a compiler crash, so I am not sure whether it should work or whether there is something wrong with the code. It would also be nice if functions could be l-values like in C++, so you could write:

t.deref() += s.deref();

Could this work?

The first issue with those impls is that &mut T is itself a T, so the blanket implementation for T and the implementation for &mut T conflict. There's no way to bound T so that it excludes &T and &mut T.

The second issue is that you can't return ownership of T from an &mut T, and so you will get borrow check errors with the second impl.

The blanket implementation, impl<T> Dereference for T, conflicts with the other impl.

The &mut T impl needs to be written for &'a mut T. You can't move out of a reference though, so this can only work for T: Copy, or you could write it with T: Clone and self.clone() instead.
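
For illustration, here is a minimal sketch (my own, not code from the post above) of just that impl written that way, with the Copy bound so reading the value out of the reference is a copy rather than a move:

    trait Dereference {
        type ValueType;
        fn deref(self) -> Self::ValueType;
    }

    // The lifetime is named and T is bounded by Copy, so `*self` copies the
    // value out of the reference instead of trying to move it.
    impl<'a, T: Copy> Dereference for &'a mut T {
        type ValueType = T;
        fn deref(self) -> Self::ValueType {
            *self
        }
    }

    // With T: Clone instead, the body would be `self.clone()`.
    // The blanket impl for all T is left out here; as noted above it would
    // still conflict with this one without specialization.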

Don't know about your crash -- the playground isn't crashing, at least. But a compiler crash is almost always worth filing a bug.

Is that still true with specialisation enabled?

The '*' operator can return a value from a reference, so why can't a user-defined trait?

It can't in general, only for Copy types. Otherwise it would be a move, which would invalidate the reference's original memory.

Sorry, I keep forgetting to reply to the specific post. If "s" has type "&mut T" what is the type of "*s"?

(IMO broken reply chains are better than all the withdrawn posts.)

Dereferencing an &mut T gives a T, but the compiler won't allow it in general. You can get away with it in cases like *x += y because AddAssign::add_assign turns it back into &mut, so no actual copy/move is attempted.
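
A small self-contained example of the difference (my own sketch):

    fn main() {
        // `*r += 2` only uses the place behind the reference (it is reborrowed
        // as &mut for the assignment), so nothing is moved out of it.
        let mut a = 1i32;
        let r = &mut a;
        *r += 2;
        let v = *r; // reading the value out is fine here because i32 is Copy
        println!("{} {}", v, a);

        // For a non-Copy type, taking the value out of the reference would be
        // a move that invalidates the original, so the compiler rejects it.
        let mut s = String::from("hi");
        let rs = &mut s;
        rs.push('!');
        // let owned = *rs; // error[E0507]: cannot move out of `*rs`
        println!("{}", s);
    }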

Unfortunately, you can't specialize associated types, so the ValueType for the &mut T impl has to be &mut T.

Point about withdrawn posts noted, it was not obvious that delete would not actually delete.

What is the reasoning behind allowing '*t' as an l-value with that behaviour, but not user functions like:

s.deref() += 3

Trying something simpler, the following almost works:

    #![feature(specialization)] // nightly feature, needed for `default type`
    use std::ops::Add;

    trait Dereference {
        type ValueType;
    }

    impl<T> Dereference for T {
        default type ValueType = T;
    }

    impl<'a, T> Dereference for &'a T {
        type ValueType = T;
    }

    fn test_source<T>(t : T, s : T) -> <T as Dereference>::ValueType
    where T : Dereference + Add<Output = <T as Dereference>::ValueType> {
        t + s
    }

    fn test() {
        let u = 3;
        let v = 2;
        println!("{}", test_source(&u, &v));  // this works
        println!("{}", test_source(u, v));    // this does not work
    }

The second println! fails with "the trait core::fmt::Display is not implemented for the type <_ as test::Dereference>::ValueType", which seems odd, as it can determine Display in the first case, where references are passed.

Naively, it seems the compiler has a problem when the associated type is the same as the type the trait implementation is for, which is odd, as they should be type aliases in that case.

Anyone have any idea what is going on here?

Edit: If I comment out the implementations one at a time (and remove the 'default' keyword) it works for both cases, so I would guess this is a bug in the new specialisation feature. Reading this, it seems similar to the bug near the top: Implement RFC 1210: impl specialization by aturon · Pull Request #30652 · rust-lang/rust · GitHub, but it's not clear whether it is supposed to be fixed. It looks like this should work when the compiler is fixed.

It's a shame the star dereference operator is not defined as the identity function on a non-reference (with an associated type for the dereferenced type), as that would make it much easier to write generic functions. Failing that, returning l-values from functions would allow us to replace the standard '*' dereference operator with something else.
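
As an aside (mine, not something raised elsewhere in the thread), the standard Borrow trait already gives one way to write a single generic function over both owned values and references, which covers part of this:

    use std::borrow::Borrow;
    use std::ops::Add;

    // One implementation that accepts both i32 and &i32 arguments; the element
    // type is named explicitly at the call site since Borrow can leave it
    // ambiguous otherwise.
    fn test_source<T, B>(t: B, s: B) -> T
    where
        B: Borrow<T>,
        T: Add<Output = T> + Copy,
    {
        *t.borrow() + *s.borrow()
    }

    fn main() {
        let u = 3;
        let v = 2;
        println!("{}", test_source::<i32, _>(&u, &v)); // references
        println!("{}", test_source::<i32, _>(u, v));   // owned values
    }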

I think it may be intentional? Though I'm not really sure how such behavior makes sense with concrete types. https://github.com/rust-lang/rust/pull/30652#issuecomment-194210403

Edit: from RFC: impl specialization by aturon · Pull Request #1210 · rust-lang/rfcs · GitHub

[..] with the following unresolved questions to be firmly settled before stabilization:
[..]

  • When should projection reveal a default type? Never during typeck? Or when monomorphic?
  • Current answer: never during typeck

So why does it work with the by-reference example? I think the problem isn't to do with concrete types, but that it cannot resolve that <T as Dereference>::ValueType is the same type as T. Neither is a concrete type. Then at the level of the println! it knows T is i32.

Note it works fine if you delete either implementation and remove the 'default' keyword.

In the reference example, <&i32 as Dereference>::ValueType can be "seen through" because it's defined by a non-default impl.
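
A small sketch of that (my own, on a nightly with the specialization feature enabled): projection through the non-default impl normalizes during type checking, while projection through the default impl stays opaque.

    #![feature(specialization)]

    trait Dereference {
        type ValueType;
    }

    impl<T> Dereference for T {
        default type ValueType = T;
    }

    impl<'a, T> Dereference for &'a T {
        type ValueType = T;
    }

    // Fine: the &'a T impl defines ValueType non-default, so the projection
    // is "seen through" as i32 during type checking.
    fn through_reference(x: <&'static i32 as Dereference>::ValueType) -> i32 {
        x
    }

    // Does not compile: this projection goes through the default associated
    // type, which is never revealed during type checking.
    // fn through_default(x: <i32 as Dereference>::ValueType) -> i32 { x }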

So what is the justification for a default impl to behave differently?

In Haskell associated types are implemented using type families, which is always sound. Types that are aliases of each other are always considered identical. I think Rust tries to form a lattice of the types to check soundness of associated type projection. In any case this example is sound as the associated type is functionally dependent on the type parameters of the trait (which is the critical condition).

I think a sound type checker has to always consider type aliases identical, as they are equivalence classes of types; otherwise type equality is unsound.

Looking into this I think I understand why the associated type does not work when it is default (the type family formed from the impl type parameter and the associated type is conflicted).

It would seem this could be disambiguated with a negative trait bound, by defining a trait for References:

    trait Reference {}
    impl <'a, T> Reference for &'a T {}

    trait Dereference {
        type ValueType;
    }

    impl<T> Dereference for T where T : !Reference {
        type ValueType = T;
    }

    impl<T> Dereference for T where T : Reference {
        type ValueType = T;
    }

With the rest as before. Is there an experimental feature in nightly that I could enable to do this, or which RFC seems the best candidate for merging that I could support?

This looks interesting:

    #![feature(optin_builtin_traits)] // nightly, for `impl Trait for ..` and negative impls

    trait NotReference {}
    impl NotReference for .. {}
    impl<'a, T> !NotReference for &'a T {}

    trait Dereference {
        type ValueType;
    }

    impl<T> Dereference for T where T : NotReference {
        type ValueType = T;
    }

    impl<'a, T> Dereference for &'a T {
        type ValueType = T;
    }

But the type system does not seem to realise that the two impls of Dereference cannot overlap. Mutually exclusive traits might work:

    trait Reference : !NotReference {}
    impl<'a, T> Reference for &'a T {}

    trait NotReference : !Reference {}
    impl NotReference for .. {}
    impl<'a, T> !NotReference for &'a T {}

    trait Dereference {
        type ValueType : ?Sized;
    }

    impl<T> Dereference for T where T : NotReference {
        type ValueType = T;
    }

    impl<T> Dereference for T where T : Reference + Deref {
        type ValueType = T::Target;
    }

What do you think?

A naive negative bound proposal (which, if I'm not mistaken, is what this is based on) is unfortunately not viable. If by !Reference you mean any type that doesn't implement Reference, the problem is that this makes it a breaking change to implement Reference for any type in your library, because types will no longer meet that bound. While this makes sense for a trait like Reference (it's pretty fundamental to the notion of the type), it doesn't make sense for many traits, such as Display.

Rust may someday get negative bounds with a different meaning that Niko Matsakis aptly compared to intuitionistic logic: types are by default neither Trait nor !Trait, but can have either implementation defined for them.

Type systems are a logic, and yes, HM most closely resembles an intuitionistic logic. Rust even has backtracking in trait satisfaction. If you start with the pure subset of Prolog (without negation) you have pretty much the current Rust type system (the logic language operates on the types of variables, and logic variables correspond directly to type variables), and you should be able to write a large class of logic programs in the type system using traits, like Peano number arithmetic etc. Negation is difficult to deal with if you are trying to remain sound in the Herbrand universe, as negation as failure does not work. Opt-out traits mirror constructive negation and so I think are sound. Type disequality is also sound, as is type equality.
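
For example, here is a sketch of Peano addition run entirely by the trait solver (my own illustration of the point, not code from the thread):

    use std::marker::PhantomData;

    struct Zero;
    struct Succ<N>(PhantomData<N>);

    // "Add" as a logic predicate over types: <A as PeanoAdd<B>>::Sum is A + B.
    trait PeanoAdd<B> {
        type Sum;
    }

    // zero + B = B
    impl<B> PeanoAdd<B> for Zero {
        type Sum = B;
    }

    // succ(A) + B = succ(A + B)
    impl<A: PeanoAdd<B>, B> PeanoAdd<B> for Succ<A> {
        type Sum = Succ<<A as PeanoAdd<B>>::Sum>;
    }

    type One = Succ<Zero>;
    type Two = Succ<One>;
    type Three = Succ<Two>;

    // Type-checks only if 1 + 2 really does compute to 3.
    fn _check(x: <One as PeanoAdd<Two>>::Sum) -> Three {
        x
    }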

There is a restriction whereby a trait defined in a different module can only be implemented for a type declared in the same module, so nobody can make any base type, or type in my module, suddenly a reference, or Display for that matter. If I wanted to print "foo" for every non-reference type that would seem reasonable, and if this were implemented statically using generics, then recompilation would be necessary if the status of anything changed. However, because of the above restriction, it would require editing my module to make such a change, which would then require recompilation in any case. Further, any code that depends on a type T not being a reference would no longer be called if T became a reference, which would require recompilation to fix. So what is the problem with negative trait bounds (you could even restrict them to traits defined in the same module)?

In any case, I can define Reference and NotReference in the current nightly; the only problem is the compiler does not recognise them as mutually exclusive. The last example is based on your own RFC for mutually exclusive traits. Are you saying that the mutex-traits RFC is now withdrawn and not going to happen?

All this is simply a workaround for the real problem, which is that non-reference variables are clearly a distinct, non-overlapping type. We as programmers know this, but the type system does not. Really, 'move' variables should have their own constructor, so you would have something like:

&T
&mut T
@T

Where @T replaces a plain T. But this would cause backward compatibility problems. The smallest fix for this would probably be a special type pattern-match operator (I'm just using @ as an example, could be anything) which only matches a non-reference:

trait Test {
    type ValueType;
}

impl<T> Test for @T {
    type ValueType = T;
}

impl<'a, T> Test for &'a T {
    type ValueType = T;
}

impl<'a, T> Test for &'a mut T {
    type ValueType = T;
}

So the compiler knows the impls do not overlap, and it then does not require specialisation. This only fixes this one case, and whilst negative trait bounds are something I would like to see (even if restricted to traits defined in the same module), this specific problem is more of an immediate blocker to what I want to do.

That RFC was closed as postponed because the lang team wanted to have an implementation of specialization before looking at other coherence extensions. There has never been an implementation of mutually exclusive traits so far.

It is true that we do not have a way of defining "variables that are not an &'a T", but it's worth remembering that, using your syntax, &'a T is actually the same as @&'a T: the ampersand is just a very special type constructor.

However, I also don't really understand why you would want this abstraction. Abstracting over &'a T and &'a mut T I understand, but if you have ownership of a type, you can pass it by reference. This is a lot of complexity just for removing an ampersand sigil from some arguments.
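
To make that concrete, a sketch (mine) of the kind of thing I mean: one &T-taking function already covers both cases, at the cost of an ampersand at the call site when you own the value.

    use std::ops::Add;

    fn sum<T: Add<Output = T> + Copy>(a: &T, b: &T) -> T {
        *a + *b
    }

    fn main() {
        let u = 3;
        let v = 2;
        let r = &v;
        println!("{}", sum(&u, r)); // owned value borrowed at the call site, reference passed as-is
    }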