YArNN - Yet Another rust Neural Network framework

Hi, everyone.

I would like to introduce my MVP, YArNN. I was inspired by leaf and actually started development as a PR to juice, but decided to make a separate MVP project first; maybe later some parts of it will be merged into juice and coaster.

What were the goals?

  1. study DNNs and capture my progress in readable code
  2. make it as useful as possible, which is hard given that TensorFlow already exists :). Here are the possible usage scenarios for yarnn:
    • an alternative to TensorFlow.js, but in WASM
    • it should compile easily for embedded devices (such as an stm32f4 with DRAM, or a k210), because it uses alloc only for allocating tensors (see the no_std sketch right after this list)
    • compile with musl for a minimal binary (e.g. for AWS Lambda)
    • a pure Rust library with no unsafe code
  3. support OpenCL (because of the lack of proper official support in TensorFlow)
  4. support CUDA
  5. support popular models like ResNet
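
To make the embedded goal concrete: a library that needs alloc only for tensor storage can be built as no_std, which is what makes targets like the stm32f4 realistic. Below is a minimal sketch of that pattern; the Tensor type here is a deliberately simplified stand-in (yarnn's real one is more involved), and the point is only the #![no_std] / extern crate alloc shape:

#![no_std]

// Only the `alloc` crate is pulled in, not `std`, so this links on
// bare-metal targets as long as the final binary supplies a global
// allocator.
extern crate alloc;

use alloc::{vec, vec::Vec};

// Simplified tensor: a flat buffer plus a shape. It illustrates why
// `alloc` alone is enough for tensor storage.
pub struct Tensor {
    data: Vec<f32>,
    shape: Vec<usize>,
}

impl Tensor {
    pub fn zeros(shape: &[usize]) -> Self {
        let len: usize = shape.iter().product();
        Tensor { data: vec![0.0; len], shape: shape.to_vec() }
    }
}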

What do we have right now?

  1. Layers: Linear, ReLu, Sigmoid, Conv2d, MaxPool2d
  2. Optimizers: Sgd, Adam and RMSProp
  3. Losses: MSE, CrossEntropy
  4. Backends: Native, NativeBlas (not finished)
  5. Some working examples for MNIST with convolutional and linear models
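
As a quick illustration of what those optimizers boil down to: vanilla SGD is just w <- w - lr * grad per weight, and Adam and RMSProp layer per-weight running statistics on top of that rule. A tiny self-contained sketch of the update (plain Rust, not yarnn's actual Sgd code):

// One vanilla SGD step: w <- w - lr * grad, element-wise.
fn sgd_step(weights: &mut [f32], grads: &[f32], lr: f32) {
    for (w, g) in weights.iter_mut().zip(grads) {
        *w -= lr * g;
    }
}

fn main() {
    let mut w = vec![0.5_f32, -0.3, 0.8];
    let g = [0.1_f32, -0.2, 0.05];
    sgd_step(&mut w, &g, 0.01);
    println!("{:?}", w); // ~[0.499, -0.298, 0.7995]
}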

What will it have in the near future?

  1. MNIST WASM example (working on it right now)
  2. Conv net demo on an stm32f4disco board
  3. GEMM-based convolution option (should be much faster; see the sketch after this list)
  4. VGG16 model demo
  5. NativeBlas backend
  6. Store/load model weights
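
For the curious, the GEMM-based convolution in item 3 is the classic im2col trick: unroll every convolution window into a column of a matrix, so the whole convolution becomes one big matrix multiply that a tuned GEMM routine (e.g. a BLAS sgemm) executes far faster than a direct nested loop. Here is a minimal single-channel, stride-1 sketch of the idea, independent of yarnn's internals:

// Unroll each k x k window of an (h x w) image into a column,
// producing a (k*k) x (out_h*out_w) matrix, stored row-major.
fn im2col(img: &[f32], h: usize, w: usize, k: usize) -> (Vec<f32>, usize, usize) {
    let (out_h, out_w) = (h - k + 1, w - k + 1);
    let mut cols = vec![0.0; k * k * out_h * out_w];
    for ky in 0..k {
        for kx in 0..k {
            for oy in 0..out_h {
                for ox in 0..out_w {
                    let row = ky * k + kx;     // which filter weight
                    let col = oy * out_w + ox; // which output pixel
                    cols[row * out_h * out_w + col] = img[(oy + ky) * w + (ox + kx)];
                }
            }
        }
    }
    (cols, out_h, out_w)
}

// Naive GEMM stand-in: out[f][p] = sum over r of filters[f][r] * cols[r][p].
// This triple loop is exactly what a BLAS sgemm call would replace.
fn gemm(filters: &[f32], cols: &[f32], f: usize, r: usize, p: usize) -> Vec<f32> {
    let mut out = vec![0.0; f * p];
    for i in 0..f {
        for j in 0..p {
            for l in 0..r {
                out[i * p + j] += filters[i * r + l] * cols[l * p + j];
            }
        }
    }
    out
}

fn main() {
    // One 2x2 filter over a 3x3 image: the convolution is one matrix multiply.
    let img = [1.0_f32, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0];
    let filt = [1.0_f32, 0.0, 0.0, 1.0]; // top-left + bottom-right of each window
    let (cols, oh, ow) = im2col(&img, 3, 3, 2);
    let out = gemm(&filt, &cols, 1, 4, oh * ow);
    println!("{:?}", out); // [6.0, 8.0, 12.0, 14.0]
}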

Here is an example of a model definition (N, B and O are the number, backend and optimizer type parameters each layer is generic over; note that only the layers with trainable weights take an O):

use yarnn::model;
use yarnn::layer::*;
use yarnn::layers::*;

model! {
    MnistConvModel (h: u32, w: u32, c: u32) {
        input_shape: (c, h, w),
        layers: {
            Conv2d<N, B, O> {
                filters: 8
            },
            ReLu<N, B>,
            MaxPool2d<N, B> {
                pool: (2, 2)
            },

            Conv2d<N, B, O> {
                filters: 8
            },
            ReLu<N, B>,
            MaxPool2d<N, B> {
                pool: (2, 2)
            },

            Flatten<N, B>,
            Linear<N, B, O> {
                units: 10
            },

            Sigmoid<N, B>
        }
    }
}
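
The macro presumably generates a MnistConvModel type that you instantiate with the input dimensions. A hypothetical instantiation for MNIST is below; the constructor name is my assumption about what the macro expands to, so treat it as a sketch rather than the actual API:

// Hypothetical: assumes the macro generates a `new(h, w, c)` constructor.
// MNIST inputs are 28x28 with a single channel.
let model = MnistConvModel::new(28, 28, 1);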

It looks a little ugly, but hey, it is just an MVP with a quick decl_macro solution (a proc_macro variant should look much better :slight_smile: )

And of course you can do the same in builder style:

...

layers.add_layer::<Linear<_, _, _>>(LinearConfig {
    units: 128
})

...

Happy studying, hacking and maybe contributing :wink:


If there were a way to load a PyTorch-trained model in Rust/YArNN and execute it in WASM, that would be awesome.

Right now there is no way to store/load models, but I agree it would be great to be able to load/store the PyTorch format.
