How does the compiler distinguish the method call when multiple candidates exist?

Consider this case:

use std::ops::Deref;

fn main() {
    let s: &String = &String::new();
    let ss = s.deref(); // #1
}

It appears to me that there are at least two candidates at #1. To simplify the discussion, I will list just two.

According to Method-call expressions, the receiver's type is &String, so the list of candidate receiver types consists of at least {String, &mut String, &String, &&String, &mut &String}.

The standard library has at least two applicable impls:

impl ops::Deref for String {
    type Target = str;

    #[inline]
    fn deref(&self) -> &str {
        unsafe { str::from_utf8_unchecked(&self.vec) }
    }
}

impl<T: ?Sized> const Deref for &T {
    type Target = T;

    #[rustc_diagnostic_item = "noop_method_deref"]
    fn deref(&self) -> &T {
        *self
    }
}

&String exactly matches the first, while &&String exactly matches the second. The candidate chosen at #1 by the compiler is the first one. How does the compiler decide that the first impl, rather than the second, is the best candidate?
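To see both impls in action, here is a small sketch; the helper names `via_method` and `via_ref_impl` are mine, not from the thread. Method-call syntax resolves to the `String` impl, while the `&T` impl can still be reached by naming it explicitly:

```rust
use std::ops::Deref;

// Method-call resolution picks `impl Deref for String`, yielding &str.
fn via_method(s: &String) -> &str {
    s.deref()
}

// The `impl Deref for &T` is still reachable, but only by naming it with
// fully qualified syntax; it yields &String instead.
fn via_ref_impl<'a>(s: &'a &String) -> &'a String {
    <&String as Deref>::deref(s)
}

fn main() {
    let owned = String::from("hi");
    let s: &String = &owned;
    assert_eq!(via_method(s), "hi");
    assert_eq!(*via_ref_impl(&s), "hi");
}
```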

Incidentally,

Obtain these by repeatedly dereferencing the receiver expression's type

This rule seems a bit circular: to dereference the receiver's type, we apparently need to look up a deref candidate for that very type. Take the example in the Rust reference:

For instance, if the receiver has type Box<[i32;2]>, ..., [i32; 2] (by dereferencing),

Dereferencing Box<[i32;2]> to get [i32; 2] itself seems to depend on which candidate is chosen for dereferencing Box<[i32;2]>. How can the compiler be sure to pick this candidate

impl<T: ?Sized, A: Allocator> const Deref for Box<T, A> {
    type Target = T;

    fn deref(&self) -> &T {
        &**self
    }
}

as the best candidate while it is still obtaining the receiver-type candidates for a method call on Box<[i32;2]>?
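One way to convince yourself that no resolution loop is needed: the dereferenced type can be named directly as an associated type, with no method call involved. A sketch, assuming nothing beyond std; the alias `BoxTarget` and helper `unboxed` are illustrative names:

```rust
use std::ops::Deref;

// `<Box<[i32; 2]> as Deref>::Target` is fixed by the impl itself; the
// compiler reads it off as an associated type, without method resolution.
type BoxTarget = <Box<[i32; 2]> as Deref>::Target; // = [i32; 2]

fn unboxed(b: Box<[i32; 2]>) -> BoxTarget {
    *b // built-in deref for Box; [i32; 2] is Copy, so moving out is fine
}

fn main() {
    let b: Box<[i32; 2]> = Box::new([1, 2]);
    // `len` is found on [i32; 2] after dereferencing the receiver type.
    assert_eq!(b.len(), 2);
    assert_eq!(unboxed(b), [1, 2]);
}
```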

Continuing the discussion from Method call resolution behaviour:

&String is the first element in the ordered list of candidate receiver types, so once it has a match, the matched one is chosen, right? But how do you interpret the second question?

The dereference of X literally means the dereference of X. There's no candidate resolution here.


For raw pointer or reference types, dereferencing may just have its built-in meaning. But what about smart pointer types? For Box<i32>, how does the compiler know that dereferencing Box<i32> yields i32 rather than something else? In other words, how does the compiler determine that

impl<T: ?Sized, A: Allocator> const Deref for Box<T, A> {
    type Target = T;

    fn deref(&self) -> &T {
        &**self
    }
}

is the best candidate for dereferencing Box<i32> while it is still deciding which candidate applies to a deref() call on a Box<i32> receiver?

Because * is Deref (see Deref in std::ops):

Used for immutable dereferencing operations, like *v .

In immutable contexts, *x (where T is neither a reference nor a raw pointer) is equivalent to *Deref::deref(&x).
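A quick sanity check of that quoted equivalence, sketched with Box; the helper name `explicit_deref` is just illustrative:

```rust
use std::ops::Deref;

// For a type that is neither a reference nor a raw pointer,
// `*b` behaves like `*Deref::deref(&b)`.
fn explicit_deref(b: &Box<i32>) -> i32 {
    *Deref::deref(b)
}

fn main() {
    let b = Box::new(41);
    assert_eq!(*b, 41);
    assert_eq!(explicit_deref(&b), 41);
}
```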

In order to make the second question clearer, consider this case

use std::ops::Deref;

let b = Box::new(0);
let c = b.deref(); // #1

When resolving the method call at #1, we should form the list of candidate receiver types. First, Box<i32> is a candidate, since it is exactly the type of b. Then, the Rust reference says

Obtain these by repeatedly dereferencing the receiver expression's type

So, dereferencing Box<i32> itself seems to require resolving which deref() candidate applies to Box<i32>; this looks like a paradox.

I don't understand the question, since it chooses the first one.

From your follow-up posts I think you get it but I wrote it up anyway.

Dereferencing, and then adding & or &mut, gives:

  • &String
    • &&String
    • &mut &String
  • String
    • &String (again)
    • &mut String
  • str
    • &str
    • &mut str

&String comes first so the impl that takes &String wins.


I suppose the language is a little loose, but in this context it really means "considering the type that results from a dereference", which is statically knowable. It's not "the type you'd get back by calling .deref(), which invokes method resolution"; it's <T as Deref>::Target. It's not circular, as method resolution isn't performed. * doesn't do method resolution either.

(Side note, for references and Box [1], it's a language level operation that allows for things like borrow splitting.)


  1. and maybe raw pointers but those are unsafe anyway ↩︎
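To illustrate that the dereferenced type is statically knowable, here is a sketch; the generic helper `peek` is a made-up name. It names the dereferenced type purely through the associated type, with no method resolution:

```rust
use std::ops::Deref;

// A generic function can name the dereferenced type without performing
// any method resolution: it is just the associated type `T::Target`.
fn peek<T: Deref>(t: &T) -> &T::Target {
    &**t
}

fn main() {
    let s = String::from("hi");
    let st: &str = peek(&s); // <String as Deref>::Target = str
    assert_eq!(st, "hi");

    let b = Box::new(5);
    let n: &i32 = peek(&b); // <Box<i32> as Deref>::Target = i32
    assert_eq!(*n, 5);
}
```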


Yeah, see my last comment.

It's not actually calling deref or doing *, but even if it did -- to make it more explicit --

   <SomeKnownType as SomeTrait>::method(arg)
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

the fully-qualified form [1] makes the method [2] in question unambiguous. (The fully qualified form never does method resolution, and in fact if there's an as SomeTrait involved or for <_>::method, it doesn't consider inherent methods at all.)


  1. with no inference variables ↩︎

  2. slash trait impl, and thus associated types too ↩︎

Oh, I see

<T as Deref>::Target

This is the key point. Dereferencing a type requires us to explicitly impl Deref or DerefMut for that type. For example

use std::ops::Deref;

struct Foo<T>(T);

impl Deref for Foo<i32> {
    type Target = i32;
    fn deref(&self) -> &i32 {
        todo!();
    }
}

If we had a value of type Foo<char> and tried to dereference it to obtain another candidate type while resolving a method call, we would get nothing, because <Foo<char> as Deref>::Target doesn't exist. Is my understanding right?

Right. Rust Playground

The associated type Target in the Deref trait tells you the type obtained by using *.

In the example, foo /* : Foo<char> */ .method() wouldn't consider methods with char or &char or &mut char receivers because Foo<char> doesn't implement Deref. And also not i32 and friends. The "recursively dereference" part of method resolution stops when it can't dereference any more.

You can check by seeing if *thing errors with "type Thing cannot be dereferenced".

I don't think DerefMut actually matters for method resolution, though the lack of an implementation may result in an error if a &mut receiver is found. (I'll check this and update this post shortly.)

Edit: Yep.
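Putting the earlier `Foo` sketch together so it compiles (I've given `Foo` a field so the type parameter is used; `through_deref` is an illustrative helper):

```rust
use std::ops::Deref;

struct Foo<T>(T);

// Deref is only implemented for Foo<i32>, not Foo<char>.
impl Deref for Foo<i32> {
    type Target = i32;
    fn deref(&self) -> &i32 {
        &self.0
    }
}

fn through_deref(f: &Foo<i32>) -> i32 {
    **f // fine: <Foo<i32> as Deref>::Target = i32
}

fn main() {
    assert_eq!(through_deref(&Foo(7)), 7);

    let _c = Foo('x');
    // `*_c` would not compile: `Foo<char>` cannot be dereferenced, so
    // method resolution on a Foo<char> receiver stops at Foo<char>.
}
```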

  • String: Deref<Target = str> tells you that the function Deref::deref(&String) -> &str exists
  • &T: Deref<Target = T> tells you that the function Deref::deref(&&T) -> &T exists, i.e.
    • &String: Deref<Target = String> with Deref::deref(&&String) -> &String

We know s: &String, and that's exactly the argument type of the deref function from String: Deref<Target = str>, so that's the one called.


For s: String

    let s = String::new();
    let ss: &str = s.deref();

You can never have Deref::deref(String) -> ..., because the signature needs &.
According to the method-call rules, the second candidate is Deref::deref(&String) -> ..., and we have it, so ss: &str too!

Method call rules tell you ...

for a type T, the candidate types are

  • T
  • &T
  • &mut T
  • *T (i.e. <T as Deref>::Target)
  • &*T
  • &mut *T
  • coercion to U
  • &U
  • &mut U

For this case

let s = String::new();
let ss: &str = s.deref();

IIUC, the original receiver expression's type is String, so adding it to the candidate list we get

String

Then we try to dereference String to get more possible types: <String as Deref>::Target yields str, so the list becomes

String
str

<str as Deref>::Target yields nothing. For simplicity, we won't mention unsized coercion. Then, for each type, we add &T and &mut T, and the list becomes

String
&String
&mut String
str
&str
&mut str

Then the visible impls are

impl<T: ?Sized> const Deref for &T {
    type Target = T;

    #[rustc_diagnostic_item = "noop_method_deref"]
    fn deref(&self) -> &T {
        *self
    }
}

// and

impl ops::Deref for String {
    type Target = str;

    #[inline]
    fn deref(&self) -> &str {
        unsafe { str::from_utf8_unchecked(&self.vec) }
    }
}

The first impl's deref receives &&T; none of String, &String, &mut String, str, &str, or &mut str has that shape, so nothing in the list matches it.

The second impl's deref receives &String, and the second entry in the list of candidate receiver types matches it. So it is the unique candidate for s.deref(). This is the whole analysis for this question.
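The analysis above can be checked by actually running it; the helper `deref_string` is an illustrative name, not from the thread:

```rust
use std::ops::Deref;

fn deref_string(s: String) -> String {
    // Resolution settles on `<String as Deref>::deref(&String) -> &str`,
    // auto-borrowing the receiver `s`.
    let ss: &str = s.deref();
    ss.to_owned()
}

fn main() {
    assert_eq!(deref_string(String::from("hello")), "hello");
}
```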

However, how is s coerced to &String in order to call <String as Deref>::deref(&String)? I didn't find the relevant documentation.

Yes, I've modified my answer 🙂

It's not coerced. It's the process of finding the receiver type.
My suggestion is to think in terms of the methods/functions, instead of the type being implemented.

It's not coerced. It's the process of finding the receiver type.

It seems to be called auto-referencing; however, I can't find the documentation covering this part.

You already found it:

Or from the book Method Syntax - The Rust Programming Language

When you call a method with object.something() , Rust automatically adds in & , &mut , or * so object matches the signature of the method.


(It's implicit in the algorithm but also explicitly noted.)

When looking up a method call, the receiver may be automatically dereferenced or borrowed in order to call a method.

Thanks. That means the types listed as candidate receiver types are also the destination types the receiver can be converted to. If a candidate method is chosen by resolution, the receiver is converted to the type the method expects, if necessary, right?
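A minimal sketch of that auto-borrowing, assuming nothing beyond std; the helper `autoref_demo` is a made-up name. The explicit and implicit borrows resolve to the same call:

```rust
use std::ops::Deref;

fn autoref_demo(s: String) -> bool {
    // `deref` takes `&String`; with method-call syntax the compiler
    // borrows the receiver automatically, as if we wrote `(&s).deref()`.
    let a: &str = s.deref();
    let b: &str = (&s).deref();
    a == b
}

fn main() {
    assert!(autoref_demo(String::from("x")));
}
```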