Specifying Types for Generics

Hello, new Rust user here. I'm most recently coming from Julia, which allows me to dispatch methods based on types or abstract types, but I am struggling to make the connection to Rust and trait bounds for generics. Below is example code where I have used the num crate to specify the Float trait, but I get an error because the arguments to this function are {float} instead of T (because of the 0.0 literals in the arguments). What am I missing here? I have much more complicated things to do with generics later, and I'm trying to take the time now to really understand them.

fn skew<T>(x: &Vector3<T>) -> Matrix3<T>
where
    T: Float,
{
    Matrix3::new(0.0, x[2], -x[1], -x[2], 0.0, x[0], x[1], -x[0], 0.0)
}

arguments to this function are incorrect
expected type parameter `T`
             found type `{float}`
expected type parameter `T`
             found type `{float}`
expected type parameter `T`
             found type `{float}`

Isn't there something like an IntoFloat trait? A primitive float literal is not an instance of the generic Float type.
Maybe Matrix3::<Float>::new(...) works, depending on Matrix3.

I'm not finding anything on IntoFloat. Regarding Matrix3::<Float>, it looks like Float would need to be made into an enum.

fn skew<T>(x: &Vector3<T>) -> Matrix3<T>
where
    T: Float,
{
    Matrix3::<Float>::new(
        0.0, x[2], -x[1], -x[2], 0.0, x[0], x[1], -x[0], 0.0
    )
}

the trait `Float` cannot be made into an object
the following types implement the trait, consider defining an enum where each variant holds one of these types, implementing `Float` for this new enum and using it instead:
  f64
  f32

Also, I tried explicitly casting to T, but it appears "as" does not work for generics:

fn skew<T>(x: &Vector3<T>) -> Matrix3<T>
where
    T: Float,
{
    Matrix3::new(
        0.0 as T, x[2], -x[1], -x[2], 0.0 as T, x[0], x[1], -x[0], 0.0 as T,
    )
}

non-primitive cast: `f64` as `T`
an `as` expression can only be used to convert between primitive types or to coerce to a specific trait object

Because the only information about the type T is the bound Float, you can only use whatever functions the Float trait provides, nothing else (not a literal 0.0 value or an as cast). In this particular case, Float has Num as a super trait, which in turn has Zero as a super trait, so you can just use Zero::zero() to create a T with the zero value.

    // I prefer this syntax
    Matrix3::new(T::zero(), x[2], -x[1], -x[2], T::zero(), x[0], x[1], -x[0], T::zero())
    // alternative
    Matrix3::new(Zero::zero(), x[2], -x[1], -x[2], Zero::zero(), x[0], x[1], -x[0], Zero::zero())
    // full syntax
    Matrix3::new(<T as Zero>::zero(), x[2], -x[1], -x[2], <T as Zero>::zero(), x[0], x[1], -x[0], <T as Zero>::zero())
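
Putting it together, the whole function compiles once every zero entry goes through the trait. A minimal sketch (assuming nalgebra's Matrix3/Vector3 and the num crate; the extra Scalar bound is an assumption, since indexing and constructors may require it depending on the nalgebra version):

use nalgebra::{Matrix3, Scalar, Vector3};
use num::Float;

// Skew-symmetric matrix of a 3-vector, generic over the float type.
fn skew<T: Float + Scalar>(x: &Vector3<T>) -> Matrix3<T> {
    // Zero is a super trait of Float (via Num), so T::zero() is always available.
    let z = T::zero();
    Matrix3::new(
        z, x[2], -x[1],
        -x[2], z, x[0],
        x[1], -x[0], z,
    )
}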

Forgot to mention: if you want a different value than the "zero" value, you can use the NumCast super trait instead of Zero:

fn answer<T: Float>() -> T {
    <T as NumCast>::from(42).unwrap()
}

Oof, okay, thanks. That worked for me and fixed this error, but I think my next errors are somewhat related. I have to do some scalar and vector addition and multiplication with T, Vector3<T>, and Matrix3<T> where T: num::Float + num::Zero + nalgebra::Scalar, but it seems like none of those operations are defined for generic T. Do I need to impl methods for every combination of operations I do on T, even though T is always just a float?

cannot add `T` to `Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>`
cannot multiply `Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>` by `Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>`

Probably not.

First of all, I doubt the operations you want to do aren't already defined by nalgebra. See the list here:

https://docs.rs/nalgebra/latest/nalgebra/base/struct.Matrix.html#computer-graphics-utilities-for-transformations

Second, even if the operation you want doesn't come with nalgebra yet, there's a good chance it can be implemented in terms of the Float trait, so you only need to implement it once generically.
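
For instance, a one-off scalar helper written once against Float works for both f32 and f64 with no duplication (a hypothetical example, not something taken from nalgebra):

use num::Float;

// Hypothetical example: linear interpolation defined once for any Float scalar.
fn lerp<T: Float>(a: T, b: T, t: T) -> T {
    a + (b - a) * t
}

// lerp(0.0f32, 10.0f32, 0.5) == 5.0_f32
// lerp(0.0f64, 10.0f64, 0.25) == 2.5_f64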

Please provide details or examples of the operation to help us better understand your problem.

As an example from my code, if I specify the type as f64, I get no compile errors, and + and * are implemented for f64, Vector3<f64>, and Matrix3<f64>.

pub struct Inertia {
    mass: f64,
    center_of_mass: Vector3<f64>,
    inertia: Matrix3<f64>,
    value: Matrix6<f64>,
}


impl Inertia {
    fn new(mass: f64, center_of_mass: Vector3<f64>, inertia: Matrix3<f64>) -> Self {
        let c = skew(&center_of_mass);
        let ct = c.transpose();
        let b1 = inertia + (mass * (c * ct));
        let b2 = mass * ct;
        let b3 = mass * c;
        let b4 = Matrix3::from_diagonal_element(mass);
        let value = concat_block_2x2(&b1, &b2, &b3, &b4);
        Self {
            mass,
            center_of_mass,
            inertia,
            value: value,
        }
    }
}

However, when I switch to a generic type, which for now I just want to restrict to f32 or f64 as an exercise (though in the future it could be custom types for uncertainty or derivative propagation), then it no longer finds + and * as implemented operations.

pub struct Inertia<T> {
    mass: T,
    center_of_mass: Vector3<T>,
    inertia: Matrix3<T>,
    value: Matrix6<T>,
}

impl<T> Inertia<T>
where
    T: Float + Scalar + fmt::Debug,
{
    fn new(mass: T, center_of_mass: Vector3<T>, inertia: Matrix3<T>) -> Self {
        let c = skew(&center_of_mass);
        let ct = c.transpose();
        let b1 = inertia + (mass * c * ct);
        let b2 = mass * ct;
        let b3 = mass * c;
        let b4 = Matrix3::from_diagonal_element(mass);
        let value = concat_block_2x2(&b1, &b2, &b3, &b4);
        Self {
            mass,
            center_of_mass,
            inertia,
            value: value,
        }
    }
}

// on the + in inertia + (mass * c * ct);
cannot add `T` to `Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>`
spatial.rs(154, 18): Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>
spatial.rs(154, 28): T
spatial.rs(149, 13): consider extending the `where` clause, but there might be an alternative better way to express this requirement: `, Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>: Add<T>`

// on the c in inertia + (mass * c * ct);
mismatched types
expected type parameter `T`
           found struct `Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>`
spatial.rs(147, 6): expected this type parameter
spatial.rs(154, 29): expected because this is `T`
let c: Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>

// on the call to concat_block_2x2
arguments to this function are incorrect
expected reference `&Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>`
   found reference `&T`
expected reference `&Matrix<T, Const<3>, Const<3>, ArrayStorage<T, 3, 3>>`
   found reference `&T`
spatial.rs(150, 6): found this type parameter
spatial.rs(150, 6): found this type parameter
spatial.rs(161, 43): expected `&Matrix<T, Const<3>, Const<3>, ...>`, found `&T`
spatial.rs(161, 48): expected `&Matrix<T, Const<3>, Const<3>, ...>`, found `&T`
spatial.rs(97, 4): function defined here

In the end I was able to work through all the trait bounds and compile errors to achieve something that worked.

pub struct Inertia<T> {
    mass: T,
    center_of_mass: Vector3<T>,
    inertia: Matrix3<T>,
    value: Matrix6<T>,
}

impl<T> Inertia<T>
where
    T: Float + Scalar + fmt::Debug + std::ops::AddAssign + std::ops::MulAssign,
{
    pub fn new(mass: T, center_of_mass: Vector3<T>, inertia: Matrix3<T>) -> Self {
        let c = skew(&center_of_mass);
        let ct = c.transpose();
        let m = Matrix3::from_element(mass);
        let b1 = inertia + m.component_mul(&(c * ct));
        let b2 = m.component_mul(&ct);
        let b3 = m.component_mul(&c);
        let b4 = Matrix3::from_diagonal_element(mass);
        let value = concat_block_2x2(&b1, &b2, &b3, &b4);
        Self {
            mass,
            center_of_mass,
            inertia,
            value: value,
        }
    }
}

Forgive my ignorance here. I'm an aerospace engineer rather than a comp sci guy, and I've spent all of my career in Matlab or other scripting languages. Rust would be my first big-boy language, and I think I'm having a little bit of sticker shock that I can't figure out how to do simple operations, knowing the type can only be a float, without frankensteining the code. Is my final implementation the expected solution, or am I just doing something obviously wrong? I suspect it's the latter.

Well, that's pretty much all of it. num::Float is both over-constraining and under-constraining at the same time. For example, num::Float requires many math functions like sin, cos, ln, etc. to be implemented, which are not required by the linear algebra operations; on the other hand, the nalgebra implementation makes use of AddAssign and MulAssign where it makes sense to avoid potential allocation (because the implementation is generic, the matrix dimensions could potentially be huge), but num::Float doesn't have AddAssign or MulAssign as super traits.
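
For illustration, a hypothetical helper (not from your code): adding AddAssign and MulAssign next to the Float bound is what lets nalgebra's generic Add/Mul impls for Matrix3<T> apply, since internally they bound the scalar by ClosedAdd/ClosedMul (roughly Add + AddAssign and Mul + MulAssign):

use nalgebra::{Matrix3, Scalar};
use num::Float;
use std::ops::{AddAssign, MulAssign};

// Hypothetical helper: Float supplies Add/Mul/Zero/One, while the extra
// AddAssign + MulAssign bounds satisfy nalgebra's ClosedAdd/ClosedMul,
// so matrix + matrix and matrix * scalar type-check for a generic T.
fn sum_then_scale<T>(a: Matrix3<T>, b: Matrix3<T>, s: T) -> Matrix3<T>
where
    T: Float + Scalar + AddAssign + MulAssign,
{
    (a + b) * s
}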

The precise bounds for matrix arithmetic operations depend on the operation. E.g. addition (+) is defined here:

and multiplication (*) is defined here:

To some extent, I think the nalgebra crate is trying to be too generic or "flexible" (flexible in the sense that the types in the library usually have quite a few parameters and configurations). This is usually fine for most applications, as they usually only use fixed concrete types (e.g. Matrix4<f32>, Vector3<f32>, and Vector4<f32> for games, although I usually prefer simpler libraries like cgmath or glm for such use cases), but if you want to use it in a generic way, you'll have to repeat the same trait bounds as nalgebra itself, which is very tedious and complicated indeed.

There's nothing wrong with what you are doing, but in some cases there are tricks that might help reduce the annotation noise. A common trick I use a lot is to specify the trait bounds for the actual type I use, instead of the parameter T.

For example, suppose I want to write a generic function that multiplies two 4x4 matrices in reverse order. Instead of this:

fn reverse_multiply<T>(x: Matrix4<T>, y: Matrix4<T>) -> Matrix4<T>
where
	T: Scalar + ClosedAdd + ClosedMul + Zero + One + MulAssign,
{
	y * x
}

I can write this instead:

fn reverse_multiply<T>(x: Matrix4<T>, y: Matrix4<T>) -> Matrix4<T>
where
	Matrix4<T>: Mul<Matrix4<T>, Output = Matrix4<T>>,
{
	y * x
}

If the trait bound is very long and used many times, I can define a sub trait with a blanket implementation (or use the unstable #![feature(trait_alias)]) to reduce some typing. For instance, the trait bound Matrix4<T>: Mul<Matrix4<T>, Output = Matrix4<T>> in the above snippet can be shortened using a sub trait.

trait MyMul: Sized + Mul<Self, Output = Self> {}
impl<T> MyMul for T where T: Mul<T, Output = T> { }
fn reverse_multiply<T>(x: Matrix4<T>, y: Matrix4<T>) -> Matrix4<T>
where
	Matrix4<T>: MyMul
{
	y * x
}

In fact, this particular example can use nalgebra::ClosedMul for this purpose; see the nalgebra docs.
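
A sketch of what that looks like (assuming a nalgebra version that exports ClosedMul under that name):

use nalgebra::{ClosedMul, Matrix4};

// ClosedMul<Right = Self> already means Mul<Self, Output = Self> + MulAssign<Self>,
// so it can stand in for the hand-rolled MyMul above.
fn reverse_multiply<T>(x: Matrix4<T>, y: Matrix4<T>) -> Matrix4<T>
where
    Matrix4<T>: ClosedMul,
{
    y * x
}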

Forgot to mention: this only works for operations defined by traits. It obviously won't work for inherent methods of the type, in which case you must repeat all the trait bounds required by the implementation itself. You may still define sub traits as shorthands.
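
For example, the bounds from your final Inertia impl could be bundled behind one trait (hypothetical name, same bounds as your working version):

use nalgebra::Scalar;
use num::Float;
use std::fmt;
use std::ops::{AddAssign, MulAssign};

// Hypothetical shorthand trait bundling the bounds the Inertia impl keeps repeating.
trait InertiaScalar: Float + Scalar + fmt::Debug + AddAssign + MulAssign {}
impl<T: Float + Scalar + fmt::Debug + AddAssign + MulAssign> InertiaScalar for T {}

// The impl header then shrinks to:
// impl<T: InertiaScalar> Inertia<T> { ... }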

Thanks for all the help @nerditation, I've learned a lot as I worked through your response. While I have analysis simulators in other languages that give me the generic functionality I need, I am learning Rust for my next task, a real-time hardware simulator. For that, perhaps it's not as important to have generics, and I can just specify primitives to continue my Rust journey, with the hope that it all really clicks along the way.