# Nalgebra - Error multiplying matrix by vector in generic size struct

Running into what seems like an odd issue when trying to multiply a square matrix A with X rows and X columns by a vector x with X rows, constructed from a struct with generic sizing. Just a basic A*x = b operation. The end goal is to have a struct type with a generic size X that can be declared when the struct is created, rather than using a static size.

The data struct and associated methods are as follows:

```rust
struct Foo<T, const X: usize> {
    A: na::base::SMatrix<T, X, X>,
    x: na::base::SVector<T, X>,
}

impl<T, const X: usize> Foo<T, X> {
    fn b(&self) -> na::base::SVector<T, X> {
        self.A * self.x
    }
}
```

However, the compiler throws an error when trying to multiply the two, saying that A and x are incompatible:

```text
error[E0369]: cannot multiply `Matrix<T, Const<X>, Const<X>, ArrayStorage<T, X, X>>` by `Matrix<T, Const<X>, Const<1_usize>, ArrayStorage<T, X, 1_usize>>`
   --> src\main.rs:989:15
    |
989 |         self.A*self.x
    |         ------^------ Matrix<T, Const<X>, Const<1_usize>, ArrayStorage<T, X, 1_usize>>
    |         |
    |         Matrix<T, Const<X>, Const<X>, ArrayStorage<T, X, X>>
```

What am I doing wrong here? I'm still learning Rust's generic type system, is there something in the impl declaration I should change? Would appreciate any feedback on implementation.

Whew, the Matrix docs page is huge, huh?

The normal way to deal with these situations is to constrain T so you can perform all the operations you need without being overly specific about how you're going to use it. Unfortunately doing that here is both verbose and fragile, since you have to add a dependency on num-traits and make sure you keep the version matched to what nalgebra uses.

Here's what that looks like:

Playground

```rust
use nalgebra::base::{SMatrix, SVector};
use std::fmt::Debug;

struct Foo<T, const X: usize> {
    A: SMatrix<T, X, X>,
    x: SVector<T, X>,
}

impl<T, const X: usize> Foo<T, X>
where
    T: std::ops::MulAssign
        + PartialEq
        + Debug
        + Clone
        + num_traits::identities::One
        + num_traits::Zero
        + 'static,
{
    fn b(&self) -> SVector<T, X> {
        &self.A * &self.x
    }
}

fn main() {
    let foo = Foo {
        A: SMatrix::<f32, 2, 2>::new(1., 2., 3., 4.),
        x: SVector::<f32, 2>::new(5., 6.),
    };

    println!("{:?}", foo.b());
}
```

The other option is to explicitly ask for conformance to the trait that will let you do what you want, which doesn't involve the num-traits dependency, but does require a higher-ranked trait bound (HRTB).

Playground

```rust
use nalgebra::base::{SMatrix, SVector};
use std::ops::Mul;

struct Foo<T, const X: usize> {
    A: SMatrix<T, X, X>,
    x: SVector<T, X>,
}

impl<T, const X: usize> Foo<T, X>
where
    for<'a> &'a SMatrix<T, X, X>: Mul<&'a SVector<T, X>, Output = SVector<T, X>>,
{
    fn b(&self) -> SVector<T, X> {
        &self.A * &self.x
    }
}

fn main() {
    let foo = Foo {
        A: SMatrix::<f32, 2, 2>::new(1., 2., 3., 4.),
        x: SVector::<f32, 2>::new(5., 6.),
    };

    println!("{:?}", foo.b());
}
```

That first solution worked really well! Thank you so much, I really appreciate your help on this. And yeah, that nalgebra docs page is immense.

As an additional note for anyone who stumbles across this, the vector can also be passed as an input like so:

```rust
fn b(&self, x: &na::base::SVector<T, X>) -> na::base::SVector<T, X> {
    &self.A * x
}
```

which I have found to work well so far.
