I have, for example, this trait:
```rust
pub trait DynamicSize {
    fn size(&self) -> usize;
}
```
It has an implementation for `Vec<T>`, which simply sums up the dynamic sizes:
```rust
impl<T: DynamicSize> DynamicSize for Vec<T> {
    fn size(&self) -> usize {
        self.iter().map(DynamicSize::size).sum()
    }
}
```
Now, I create a struct implementing `From`:
```rust
struct Test<T>(Vec<T>);

impl<T: DynamicSize> From<T> for Test<T> {
    fn from(data: T) -> Self {
        Self(vec![data])
    }
}
```
But, crucially, I also have an implementation specifically for `Vec<T>`:
```rust
impl<T: DynamicSize> From<Vec<T>> for Test<T> {
    fn from(data: Vec<T>) -> Self {
        Self(data)
    }
}
```
(Disregard the fact that the trait is not actually used inside the function bodies; I simplified the code.)
Given a `vec![1, 2, 3]`, the compiler can either:

- interpret the `Vec` itself as implementing `DynamicSize` and use the first implementation (resulting in `Test([[1, 2, 3]])`), or
- match the `Vec` against the second implementation (resulting in `Test([1, 2, 3])`).
The question is: how does the compiler know which trait implementation to choose?
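For reference, here is a complete program combining the snippets above. The `DynamicSize for i32` impl and the `main` function are my additions to make it compile and run; note that in this sketch each call site pins down one of the two impls via the annotated target type:

```rust
pub trait DynamicSize {
    fn size(&self) -> usize;
}

// Added so that i32 elements satisfy the bound.
impl DynamicSize for i32 {
    fn size(&self) -> usize {
        4
    }
}

impl<T: DynamicSize> DynamicSize for Vec<T> {
    fn size(&self) -> usize {
        self.iter().map(DynamicSize::size).sum()
    }
}

struct Test<T>(Vec<T>);

impl<T: DynamicSize> From<T> for Test<T> {
    fn from(data: T) -> Self {
        Self(vec![data])
    }
}

impl<T: DynamicSize> From<Vec<T>> for Test<T> {
    fn from(data: Vec<T>) -> Self {
        Self(data)
    }
}

fn main() {
    // Target type Test<i32>: only the Vec-specific impl applies.
    let a: Test<i32> = vec![1, 2, 3].into();
    assert_eq!(a.0, vec![1, 2, 3]);

    // Target type Test<Vec<i32>>: only the generic impl applies,
    // wrapping the whole vector in another Vec.
    let b: Test<Vec<i32>> = vec![1, 2, 3].into();
    assert_eq!(b.0.len(), 1);
    assert_eq!(b.0[0], vec![1, 2, 3]);
}
```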