I have recently developed two proof of concept projects:
I noticed that others are also looking forward to a neural network library in Rust, so I made the attempt.
However, I only have time to develop one of them (maybe neither). So I would like to hear your thoughts. I am also looking for people to join the project; contributors and reviewers are all welcome.
The pros and cons are shown below:

|                   | MXNet                                                | PyTorch                                              |
| ----------------- | ---------------------------------------------------- | ---------------------------------------------------- |
| C API             | Offers a well-designed C API, which is easy to bind. | The C API is generated from C++, which is difficult to use. |
| Designing modules | `hybridize` is possible, but not in an ergonomic way; maybe only one mode (symbolic) can be supported. | Imperative only. Simple and fast. |
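To illustrate why a hand-written C API is easy to bind: a Rust binding is mostly `extern "C"` declarations plus thin safe wrappers. The sketch below uses libc's `strlen` as a stand-in so it runs without MXNet installed; a real binding would declare MXNet entry points such as `MXGetVersion` the same way (exact names and signatures would need to be checked against MXNet's `c_api.h`).

```rust
use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    // libc's strlen stands in for an MXNet C API entry point here;
    // an MXNet binding would declare its functions in the same style.
    fn strlen(s: *const c_char) -> usize;
}

// Safe wrapper around the raw FFI call: owns a NUL-terminated copy
// of the string and confines `unsafe` to one place.
fn c_strlen(s: &str) -> usize {
    let c = CString::new(s).expect("string must not contain interior NUL");
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    assert_eq!(c_strlen("hello"), 5);
    println!("{}", c_strlen("hello"));
}
```

The same wrapper pattern (raw `extern "C"` declarations behind a small safe API) is what makes a well-designed C surface like MXNet's pleasant to bind, compared to generating bindings from C++.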
P.S. I personally prefer the PyTorch binding, because TVM will support training in the future.
For bindings, Apache MXNet is preferred and is actually easier, though for development PyTorch has a much wider community in research (but maybe not for Rust?). Also, full coverage of either API is a lot of plumbing work.
I think it'd be good to have a Rust option in general, but the DL paradigm is shifting and a binding won't do the job.
Since I happen to have some experience with TVM: training will happen more at the Relay level than in the frontend frameworks.
Yeah, MXNet is easier to bind, but `hybridize` can only be implemented at compile time, which is, well, a really tough task.
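To make the compile-time point concrete, here is a minimal sketch (all types hypothetical, not any real library's API) of one approach: a module written once against a `Mode` trait, which monomorphizes into either eager evaluation or symbolic graph construction. In Python, `hybridize` switches modes by runtime tracing; in Rust the switch would have to be resolved statically, e.g. via generics like this.

```rust
// A tiny symbolic expression graph for the "symbolic" mode.
#[derive(Clone, Debug)]
enum Expr {
    Const(f64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Interpreter for the symbolic graph, run after construction.
fn eval(e: &Expr) -> f64 {
    match e {
        Expr::Const(v) => *v,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

// The execution mode a module is compiled against.
trait Mode {
    type Tensor: Clone;
    fn constant(v: f64) -> Self::Tensor;
    fn add(a: &Self::Tensor, b: &Self::Tensor) -> Self::Tensor;
    fn mul(a: &Self::Tensor, b: &Self::Tensor) -> Self::Tensor;
}

// Imperative mode: operations run eagerly on plain values.
struct Imperative;
impl Mode for Imperative {
    type Tensor = f64;
    fn constant(v: f64) -> f64 { v }
    fn add(a: &f64, b: &f64) -> f64 { a + b }
    fn mul(a: &f64, b: &f64) -> f64 { a * b }
}

// Symbolic mode: operations build a graph instead of computing.
struct Symbolic;
impl Mode for Symbolic {
    type Tensor = Expr;
    fn constant(v: f64) -> Expr { Expr::Const(v) }
    fn add(a: &Expr, b: &Expr) -> Expr {
        Expr::Add(Box::new(a.clone()), Box::new(b.clone()))
    }
    fn mul(a: &Expr, b: &Expr) -> Expr {
        Expr::Mul(Box::new(a.clone()), Box::new(b.clone()))
    }
}

// The "module", written once: y = 2*x + 1, usable in both modes.
fn affine<M: Mode>(x: &M::Tensor) -> M::Tensor {
    let two = M::constant(2.0);
    let one = M::constant(1.0);
    M::add(&M::mul(&two, x), &one)
}

fn main() {
    // Imperative: computed immediately.
    assert_eq!(affine::<Imperative>(&3.0), 7.0);
    // Symbolic: graph built first, evaluated later.
    let g = affine::<Symbolic>(&Expr::Const(3.0));
    assert_eq!(eval(&g), 7.0);
}
```

This works for toy cases, but making every layer, control-flow construct, and optimizer generic over such a trait is exactly the ergonomics problem mentioned above.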
TVM is awesome! Though I don't know why it's going to support training, since it is a frontend compiler. But if it does support training, there may be no need to bind MXNet any more; we could just use those operators to build a new DL framework for Rust.