Understanding a limitation of clippy::needless_borrow

The documentation of this lint states one known problem:

The lint cannot tell when the implementation of a trait for &T and T do different things. Removing a borrow in such a case can change the semantics of the code.

But I am unsure why this is true, given that Deref coercion applies to all reference types.
Consider the following (playground):


struct T;

trait A {
    fn a(&self) -> usize;
}

impl A for &T {
    fn a(&self) -> usize {
        7
    }
}

impl A for T {
    fn a(&self) -> usize {
        2
    }
}

fn main() {
    let t = T {};
    assert_eq!(t.a(), (&t).a());
}

In this case, adding the &T impl block changes nothing, while removing the T impl makes the code not compile (a sketch of that case follows below). Does "changing the semantics of the code" only refer to the case where &T implements a trait but T does not? If not, what other false positives does the lint flag?
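
A sketch of the non-compiling case, keeping the same T and trait A but only the &T impl:

struct T;

trait A {
    fn a(&self) -> usize;
}

impl A for &T { fn a(&self) -> usize { 7 } }

fn main() {
    let t = T {};
    // (&t).a() still works: the method from `impl A for &T` takes self: &&T,
    // and &&T is among the candidate receiver types for the expression &t.
    assert_eq!((&t).a(), 7);
    // t.a() does not compile: no method named `a` is found for `T`,
    // because no candidate receiver type derived from T matches &&T.
    // assert_eq!(t.a(), 7);
}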

The failing assertion corresponding to your two implementations would be:

assert_eq!((&t).a(), (&&t).a());
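
A minimal, self-contained sketch of that, reusing the same T, trait, and both impls (bodies compressed onto one line each):

struct T;

trait A {
    fn a(&self) -> usize;
}

impl A for &T { fn a(&self) -> usize { 7 } }
impl A for T { fn a(&self) -> usize { 2 } }

fn main() {
    let t = T {};
    // (&t).a() resolves to `impl A for T` (receiver type &T) and returns 2.
    assert_eq!((&t).a(), 2);
    // (&&t).a() resolves to `impl A for &T` (receiver type &&T) and returns 7.
    assert_eq!((&&t).a(), 7);
    // Hence assert_eq!((&t).a(), (&&t).a()) is the assertion that fails.
}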

Change your trait to the following to make your assertion fail:

trait A {
    fn a(self) -> usize;
}
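
With the by-value receiver, (&t).a() matches the &T impl directly while t.a() still matches the T impl, so the original assertion fails. A sketch with both impls adjusted to fn a(self):

struct T;

trait A {
    fn a(self) -> usize;
}

impl A for &T { fn a(self) -> usize { 7 } }
impl A for T { fn a(self) -> usize { 2 } }

fn main() {
    let t = T {};
    // The &T receiver now matches `impl A for &T` directly (self: &T), giving 7.
    assert_eq!((&t).a(), 7);
    // The T receiver matches `impl A for T` (self: T), giving 2; this moves t.
    assert_eq!(t.a(), 2);
    // So assert_eq!(t.a(), (&t).a()) from the original example fails here.
}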

I see! So the first one forces the &&T type to deref into the &T method first (see Method-call expressions), while the second one takes ownership to prevent Deref.

Then why does &T not use the already-implemented A trait, but instead prioritize the Deref-ed T implementation, as if a were an inherent method on T?

In impl A for T, Self = T and self: &Self, i.e. self: &T. There is no dereferencing here; your &T is passed directly to the function as the receiver self.

In impl A for &T, Self = &T and thus self: &&T. This function takes an &&T receiver.
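
Spelled out with fully qualified syntax, a sketch of the two calls with the original &self trait:

struct T;

trait A {
    fn a(&self) -> usize;
}

impl A for &T { fn a(&self) -> usize { 7 } }
impl A for T { fn a(&self) -> usize { 2 } }

fn main() {
    let t = T {};
    // impl A for T: Self = T, so self: &T; the &T is the receiver itself.
    assert_eq!(<T as A>::a(&t), 2);
    // impl A for &T: Self = &T, so self: &&T; an extra level of reference is needed.
    assert_eq!(<&T as A>::a(&&t), 7);
}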

Aha, I get it now. Thanks for replying!