Hi, everyone.
I would like to introduce my MVP: **YArNN**. I was inspired by `leaf` and actually started development as a PR to `juice`, but I decided to make a separate MVP project first; maybe later some parts of it will be merged into `juice` and `coaster`.
What were the goals?

- study DNNs and track my progress in some readable code
- make it as useful as possible, which is hard given that Tensorflow exists :)

Here are the possible usage scenarios of `yarnn`:

- an alternative to Tensorflow.js, but in WASM
- easy compilation for embedded devices (such as `stm32f4` with DRAM or `k210`), because it uses `alloc` only for allocating tensors; a sketch of this pattern follows the list
- compilation with musl for a minimal binary (e.g. for AWS Lambda)
- a pure Rust library with no unsafe code
- support for OpenCL (because of the lack of proper official support from Tensorflow)
- support for CUDA
- support for popular models like ResNet
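To make the embedded claim above concrete, here is a minimal sketch of the `no_std` + `alloc` pattern it relies on. This is illustrative only (the `Tensor` type here is a made-up stand-in, not yarnn's actual one): the crate avoids `std` entirely, and the heap is only touched when a tensor buffer is created, so the rest of the code can run on a bare-metal target.

```rust
// Sketch of the `no_std` + `alloc` pattern described above.
// NOTE: illustrative only; this `Tensor` is hypothetical, not yarnn's.
#![no_std]

extern crate alloc;

use alloc::vec::Vec;

/// A toy tensor: a flat `f32` buffer plus a shape.
/// The only heap allocation happens here, at construction time;
/// forward/backward passes can then run without further allocation.
pub struct Tensor {
    pub data: Vec<f32>,
    pub shape: Vec<usize>,
}

impl Tensor {
    pub fn zeros(shape: &[usize]) -> Self {
        let len = shape.iter().product();
        Tensor {
            data: alloc::vec![0.0; len],
            shape: shape.to_vec(),
        }
    }
}
```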
What do we have right now?

- Layers: `Linear`, `ReLu`, `Sigmoid`, `Conv2d`, `MaxPool2d`
- Optimizers: `Sgd`, `Adam` and `RMSProp`
- Losses: `MSE`, `CrossEntropy` (see the plain-Rust sketch after this list for what `MSE` and `Sgd` boil down to)
- Backends: `Native`, `NativeBlas` (not finished)
- Some working examples for MNIST with convolutional and linear models
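For intuition about two of the items above, here is what the `MSE` loss and a plain `Sgd` step boil down to, written as free-standing Rust. This is illustrative only, not yarnn's internal API:

```rust
/// Mean squared error: mean((prediction - target)^2).
/// Illustrative only; not yarnn's internal implementation.
fn mse(pred: &[f32], target: &[f32]) -> f32 {
    assert_eq!(pred.len(), target.len());
    let sum: f32 = pred
        .iter()
        .zip(target)
        .map(|(p, t)| (p - t) * (p - t))
        .sum();
    sum / pred.len() as f32
}

/// One plain SGD step: w <- w - lr * dw.
/// (Adam and RMSProp additionally keep running statistics of the
/// gradients to adapt this step size per weight.)
fn sgd_step(weights: &mut [f32], grads: &[f32], lr: f32) {
    for (w, g) in weights.iter_mut().zip(grads) {
        *w -= lr * g;
    }
}
```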
What will it have in the near future?

- MNIST WASM example (currently working on it)
- Conv net demo on an `stm32f4disco` board
- GEMM-based convolution option (should be much faster; the idea is sketched after this list)
- VGG16 model demo
- `NativeBlas` backend
- Store/load model weights
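Since the GEMM-based convolution option is on the roadmap, here is a rough sketch of the underlying idea (im2col): unroll the input patches into a matrix so the whole convolution becomes a single big matrix multiplication, which optimized GEMM routines handle very fast. The function name and memory layout below are my assumptions, not yarnn's actual code:

```rust
/// im2col for a (c, h, w) input and a k x k kernel, stride 1, no padding.
/// Returns a (c*k*k) x (out_h*out_w) matrix in row-major order.
/// Illustrative only; layout and names are assumptions, not yarnn's code.
fn im2col(input: &[f32], c: usize, h: usize, w: usize, k: usize) -> Vec<f32> {
    let (out_h, out_w) = (h - k + 1, w - k + 1);
    let mut cols = vec![0.0f32; c * k * k * out_h * out_w];
    for ch in 0..c {
        for ky in 0..k {
            for kx in 0..k {
                // One row of the column matrix per (channel, ky, kx) triple.
                let row = (ch * k + ky) * k + kx;
                for oy in 0..out_h {
                    for ox in 0..out_w {
                        let col = oy * out_w + ox;
                        cols[row * out_h * out_w + col] =
                            input[(ch * h + oy + ky) * w + (ox + kx)];
                    }
                }
            }
        }
    }
    // Multiplying a (filters, c*k*k) weight matrix by `cols` with a single
    // GEMM call then yields the (filters, out_h*out_w) convolution output.
    cols
}
```

The trade-off is extra memory for the unrolled matrix in exchange for one large, cache-friendly multiplication, which is why it should be much faster than a naive nested-loop convolution.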
Here is an example of a model definition:
```rust
use yarnn::model;
use yarnn::layer::*;
use yarnn::layers::*;

model! {
    MnistConvModel (h: u32, w: u32, c: u32) {
        input_shape: (c, h, w),
        layers: {
            Conv2d<N, B, O> {
                filters: 8
            },
            ReLu<N, B>,
            MaxPool2d<N, B> {
                pool: (2, 2)
            },
            Conv2d<N, B, O> {
                filters: 8
            },
            ReLu<N, B>,
            MaxPool2d<N, B> {
                pool: (2, 2)
            },
            Flatten<N, B>,
            Linear<N, B, O> {
                units: 10
            },
            Sigmoid<N, B>
        }
    }
}
```
It looks a little ugly, but hey, it is just an MVP with a quick `decl_macro` solution (a `proc_macro` variant should look much better). And of course you can also do it in a builder way:
```rust
...
layers.add_layer::<Linear<_, _, _>>(LinearConfig {
    units: 128
})
...
```
Happy studying, hacking and maybe contributing!