Do you need to convert nalgebra matrices for wgpu?

I'm trying to figure out whether it's like cgmath, where you need to create an extra matrix for the conversion, or not.

The only thing wgpu needs to fill a buffer (e.g. with a matrix) is a byte slice, which you can get by casting the f32 slice with bytemuck:

use nalgebra::Matrix4;

let mat4 = Matrix4::<f32>::identity();
let bytes: &[u8] = bytemuck::cast_slice(mat4.as_slice());
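
As a rough sketch of how those bytes might reach the GPU (the queue and uniform_buffer here are assumed to come from your usual wgpu setup, and write_matrix is just an illustrative name):

// Writes a nalgebra matrix into an existing uniform buffer.
// `queue` and `uniform_buffer` are assumed to have been created during setup.
fn write_matrix(queue: &wgpu::Queue, uniform_buffer: &wgpu::Buffer, mat: &nalgebra::Matrix4<f32>) {
    // `as_slice` returns the data as &[f32] in column-major order,
    // which is the layout WGSL expects for a mat4x4<f32> uniform.
    let bytes: &[u8] = bytemuck::cast_slice(mat.as_slice());
    queue.write_buffer(uniform_buffer, 0, bytes);
}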

So none of this?

#[rustfmt::skip]
pub const OPENGL_TO_WGPU_MATRIX: cgmath::Matrix4<f32> = cgmath::Matrix4::new(
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.5,
    0.0, 0.0, 0.0, 1.0,
);

This matrix is there because of the difference in NDC coordinate systems between WGPU and OpenGL, if I recall correctly: OpenGL maps depth to [-1, 1] in NDC, while WGPU expects [0, 1]. So if you're using OpenGL-style projection matrices with WGPU you'll always need this correction, regardless of which crate you use (cgmath, nalgebra, glam, etc.).
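
If you do need that correction with nalgebra, here is a minimal sketch of the same depth remapping (the function name is just illustrative):

use nalgebra::Matrix4;

// Remaps OpenGL NDC depth (z in [-1, 1]) to WGPU's depth range (z in [0, 1]),
// leaving x and y untouched.
fn opengl_to_wgpu_matrix() -> Matrix4<f32> {
    let mut m = Matrix4::<f32>::identity();
    // Output z becomes 0.5 * z + 0.5 * w, which is [0, 1] after the perspective divide.
    m[(2, 2)] = 0.5;
    m[(2, 3)] = 0.5;
    m
}

You would multiply it on the left of an OpenGL-style projection, e.g. let proj = opengl_to_wgpu_matrix() * proj_gl;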

The current standards in the Rust graphics community are nalgebra if you're using the rest of the nalgebra ecosystem (rapier, etc.), or glam if you're not. You can of course mix them, but then you need to do conversions between the two, which is annoying.
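
For what it's worth, the conversions can go through the column-major array representation that both crates use; a sketch with illustrative function names:

use glam::Mat4;
use nalgebra::Matrix4;

// Both crates store matrices column-major, so the raw f32 data can be
// copied across without a transpose.
fn na_to_glam(m: &Matrix4<f32>) -> Mat4 {
    let mut cols = [0.0f32; 16];
    cols.copy_from_slice(m.as_slice());
    Mat4::from_cols_array(&cols)
}

fn glam_to_na(m: &Mat4) -> Matrix4<f32> {
    Matrix4::from_column_slice(&m.to_cols_array())
}

If I remember correctly, both crates also have optional mint features for this kind of interop.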

cgmath is rarely used nowadays.


As an example of the projection-matrix point I mentioned: glam has an OpenGL-style projection matrix, as well as a projection matrix suitable for DirectX/Vulkan/Metal/WGPU.
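
A sketch of that difference, assuming the two constructors in question are glam's perspective_rh_gl and perspective_rh (the values below are arbitrary):

use glam::Mat4;

fn main() {
    let (fov_y, aspect, z_near, z_far) = (std::f32::consts::FRAC_PI_4, 16.0 / 9.0, 0.1, 100.0);

    // OpenGL-style: depth ends up in [-1, 1] in NDC.
    let proj_gl = Mat4::perspective_rh_gl(fov_y, aspect, z_near, z_far);

    // Zero-to-one depth range, which is what WGPU/DirectX/Metal/Vulkan expect.
    let proj_wgpu = Mat4::perspective_rh(fov_y, aspect, z_near, z_far);

    // Only the depth-related entries differ between the two.
    println!("{proj_gl:?}\n{proj_wgpu:?}");
}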