Consider this case:

```rust
use std::ops::Deref;

fn main() {
    let s: &String = &String::new();
    let ss = s.deref(); // #1
}
```
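For reference, which impl wins can be observed by pinning down the resulting type; a minimal sketch of that check:

```rust
use std::ops::Deref;

fn main() {
    let s: &String = &String::new();
    // This annotation compiles only because #1 resolves to
    // `<String as Deref>::deref`, which returns `&str`;
    // the `&T` impl would have returned `&String` instead.
    let ss: &str = s.deref();
    assert!(ss.is_empty());
}
```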
It appears to me that there are at least two candidates at #1. To simplify the discussion, I list just those two. According to Method-call expressions, the receiver's type is `&String`, so the list of candidate receiver types, in lookup order, at least consists of `{&String, &&String, &mut &String, String, &String, &mut String, …}`.
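As a side illustration of how far that chain reaches, a method defined only on `str` is still found from a `&String` receiver:

```rust
fn main() {
    let s: &String = &String::from("ab");
    // `chars` exists only on `str` (receiver `&str`), so resolving this call
    // has to walk the candidate list down through `String` to `str`.
    assert_eq!(s.chars().count(), 2);
}
```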
The standard library has at least two applicable impls:
```rust
impl ops::Deref for String {
    type Target = str;

    #[inline]
    fn deref(&self) -> &str {
        unsafe { str::from_utf8_unchecked(&self.vec) }
    }
}
```

and

```rust
impl<T: ?Sized> const Deref for &T {
    type Target = T;

    #[rustc_diagnostic_item = "noop_method_deref"]
    fn deref(&self) -> &T {
        *self
    }
}
```
`&String` exactly matches the first, while `&&String` exactly matches the second. However, the candidate actually chosen at #1 by the compiler is the first one. How does the compiler conclude that the first impl, rather than the second, is the best candidate?
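Both impls are genuinely callable on this receiver; the second one can still be selected explicitly with fully qualified syntax, which shows the two different result types:

```rust
use std::ops::Deref;

fn main() {
    let s: &String = &String::new();
    // Fully qualified syntax invokes `impl Deref for &T` directly;
    // it is a no-op that returns the inner `&String` unchanged.
    let ss: &String = <&String as Deref>::deref(&s);
    let _ = ss;
}
```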
Incidentally, consider the rule

> Obtain these by repeatedly dereferencing the receiver expression's type

This rule is a bit circular as a definition: obtaining the candidate types by dereferencing requires choosing a `deref` candidate for the receiver's type, which is exactly the kind of lookup being defined. Take the example in the Rust reference:

> For instance, if the receiver has type `Box<[i32;2]>`, ..., `[i32; 2]` (by dereferencing), ...
The dereference of `Box<[i32;2]>` to get `[i32; 2]` itself depends on which candidate is chosen for dereferencing `Box<[i32;2]>`. How can the compiler be sure to decide that this candidate
```rust
impl<T: ?Sized, A: Allocator> const Deref for Box<T, A> {
    type Target = T;

    fn deref(&self) -> &T {
        &**self
    }
}
```
is the best one to use when producing the candidate receiver types, given that we are still in the middle of looking up the candidate methods for `Box<[i32;2]>`?
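For concreteness, here is the kind of call whose resolution walks that chain; assuming the reference's list, `first` is only reachable after both a dereference and an unsized coercion:

```rust
fn main() {
    let b: Box<[i32; 2]> = Box::new([1, 2]);
    // `first` is defined on `[i32]` (receiver `&[i32]`), so resolving this
    // call follows the quoted chain: Box<[i32; 2]> -> [i32; 2]
    // (by dereferencing) -> [i32] (by unsized coercion).
    assert_eq!(b.first(), Some(&1));
}
```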