Hi, for context: I'm currently working on some interpolation code for images and ended up with a trait and a function defined similarly to the following (playground link):
```rust
use std::ops::{Add, Mul};

trait CanInterpolate<V>
where
    V: Add<Output = V>,
    f32: Mul<V, Output = V>,
{
    fn to_vector(self) -> V;
    fn from_vector(v: V) -> Self;
}

fn interpolate<V, T>(a: f32, b: f32, c: T, d: T, e: T) -> T
where
    V: Add<Output = V>,
    f32: Mul<V, Output = V>,
    T: CanInterpolate<V>,
{
    let c_vec = c.to_vector();
    let d_vec = d.to_vector();
    let e_vec = e.to_vector();
    // T::from_vector((a + b) * c_vec + (a - b) * d_vec + (a * b) * e_vec) // not working
    T::from_vector((a + b) * c_vec + (a - b) * d_vec + Mul::<f32>::mul(a, b) * e_vec)
}
```
One line in the code is commented out, because writing `(a*b)` confuses the compiler: it insists that `(a*b)` must be of type `V`, whereas it is a perfectly ordinary `f32` (both `a` and `b` are `f32`). I thought the parentheses would make the order of operations, and therefore the type of the product, clear. Do you know what is wrong with that line that makes the compiler fail? Is there perhaps an issue with type inference?
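For completeness, here is what I believe is a minimal reproduction, stripped of the interpolation trait (the function name `demo` is just for illustration). It shows the same behavior: with the `f32: Mul<V>` bound in scope, the bare `a * b` fails, while the fully qualified call pinning the right-hand side to `f32` compiles:

```rust
use std::ops::Mul;

// With the where-clause `f32: Mul<V, Output = V>` in scope, the compiler
// resolves `a * b` against that bound and expects `b` to be of type `V`.
fn demo<V>(a: f32, b: f32, v: V) -> V
where
    f32: Mul<V, Output = V>,
{
    // (a * b) * v              // fails: `b` is expected to be `V`, not `f32`
    Mul::<f32>::mul(a, b) * v   // works: the RHS of the product is pinned to `f32`
}
```

Uncommenting the first line reproduces the error; calling it with `V = f32` (e.g. `demo(2.0, 3.0, 4.0_f32)`) works as expected.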