An ndarray extension library for matrix operations on CUDA

This library leverages the powerful cuBLAS library to perform efficient matrix multiplications on compatible Nvidia GPUs.

It supports matrix multiplication (dot product), multiplication by a scalar, matrix inversion, and further operations such as SVD decomposition and eigenvalue/eigenvector computation.

This library provides a `run!` macro that automatically copies data from host memory to GPU memory and performs the calculation there, so CUDA can be used with very little code.

The `run!` macro keeps the code concise and lets you write it much like a mathematical expression. The following is an example:

```rust
fn least_squares_method() {
    let x = array![[1f32, 1f32], [1f32, 2f32], [1f32, 3f32], [1f32, 4f32]];
    let y = array![[6f32], [5f32], [7f32], [10f32]];
    let beta_hat = run!(x, y => {
        let x_t = x.t();
        x_t.dot(x).inv().dot(&x_t).dot(y)
    }).to_host();
    println!("{:?}", beta_hat);
}
```

The example implements the ordinary least squares estimate beta_hat = (XᵀX)⁻¹Xᵀy in code that closely mirrors the mathematical expression.

The project URL is:

Everyone is welcome to use it; if you find it useful, please give it a star.
