I just encountered a funny edge case with boxed trait objects and automatic deref coercions. Basically, a `&Box<Trait>` doesn't coerce to `&Trait`, even when you use the trick of dereferencing and immediately re-borrowing, `&*thing`.
Does anyone know why this doesn't currently work?
```rust
pub trait Trait {}

fn uses_trait(t: &Trait) {}

struct Foo;
impl Trait for Foo {}

fn main() {
    let trait_objects: Vec<Box<Trait>> = vec![Box::new(Foo)];

    uses_trait(&trait_objects[0]); // Error: the trait `Trait` is not implemented for `std::boxed::Box<Trait>`

    for t in &trait_objects {
        // the deref-reref pattern doesn't work here either:
        // `t` is `&Box<Trait>`, so `&*t` is still just `&Box<Trait>`
        uses_trait(&*t);
    }
}
```
My suspicion is that the unsized coercion from `&Box<Trait>` to `&Trait` (which would apply if `Box<Trait>: Trait` held) takes precedence here over the deref coercion from `&Box<Trait>` to `&Trait` (which does apply, because `Box<Trait>: Deref<Target = Trait>`). Perhaps because resolution of the trait bound `Box<Trait>: Trait` required by the unsized coercion is delayed, the deref coercion is never even considered.
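One way to sanity-check that theory is to satisfy the unsized coercion's bound yourself with a blanket impl of the trait for `Box<T>`, after which the original call compiles as written. A quick sketch of my own (using the modern `dyn` spelling, and with a made-up `answer` method added purely so the calls have observable output):

```rust
pub trait Trait {
    fn answer(&self) -> u32;
}

struct Foo;
impl Trait for Foo {
    fn answer(&self) -> u32 { 42 }
}

// Forward Trait through the Box so that `Box<dyn Trait>: Trait` holds.
impl<T: Trait + ?Sized> Trait for Box<T> {
    fn answer(&self) -> u32 { (**self).answer() }
}

fn uses_trait(t: &dyn Trait) -> u32 {
    t.answer()
}

fn main() {
    let trait_objects: Vec<Box<dyn Trait>> = vec![Box::new(Foo)];
    // With the blanket impl in place, the bound `Box<dyn Trait>: Trait`
    // required by the unsized coercion is satisfied, and this compiles:
    assert_eq!(uses_trait(&trait_objects[0]), 42);
}
```

That this makes the error go away is at least consistent with the unsized coercion being the one the compiler commits to.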
I had a feeling `&**t` would work, but I thought the whole point of deref coercion was to make sure we don't end up with the C problem of "adding more `*`s until it compiles".
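For completeness, the extra-star version does compile. A minimal sketch (again with the `dyn` spelling, and a made-up `name` method so the calls return something checkable):

```rust
pub trait Trait {
    fn name(&self) -> &'static str;
}

struct Foo;
impl Trait for Foo {
    fn name(&self) -> &'static str { "Foo" }
}

fn uses_trait(t: &dyn Trait) -> &'static str {
    t.name()
}

fn main() {
    let trait_objects: Vec<Box<dyn Trait>> = vec![Box::new(Foo)];

    // `trait_objects[0]` is the Box; one `*` reaches the unsized
    // `dyn Trait` inside it, and `&` re-borrows it as `&dyn Trait`.
    assert_eq!(uses_trait(&*trait_objects[0]), "Foo");

    // In the loop, `t` is `&Box<dyn Trait>`, so it takes two stars
    // to get past both the reference and the Box:
    for t in &trait_objects {
        assert_eq!(uses_trait(&**t), "Foo");
    }
}
```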
Is the coercion order specified in an RFC anywhere? It's an edge case of an edge case, but I feel like coercion order should be defined somewhere other than in the guts of the compiler.