We use Rust and Python to do some awesome research about chronic
AFAIK machine learning has never been a focus of Rust. Currently, for better or worse, if Python doesn't suffice, the most viable alternative seems to be Swift, thanks to its automatic differentiation language feature and growing library support. Most or all of this work is being done by Google.
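For readers unfamiliar with the feature being discussed: automatic differentiation means derivatives are propagated through ordinary code rather than derived by hand. A minimal illustration of the idea in plain Rust, using hand-rolled forward-mode dual numbers (all names here are my own for the sketch, not from any crate or from Swift itself):

```rust
// Forward-mode automatic differentiation via dual numbers: each value
// carries its derivative along with it, and arithmetic updates both.
#[derive(Clone, Copy, Debug)]
struct Dual {
    val: f64, // function value
    der: f64, // derivative with respect to the input variable
}

impl Dual {
    // The input variable: derivative of x with respect to x is 1.
    fn variable(x: f64) -> Self { Dual { val: x, der: 1.0 } }
    // A constant: its derivative is 0.
    fn constant(c: f64) -> Self { Dual { val: c, der: 0.0 } }
    // Product rule: (uv)' = u'v + uv'.
    fn mul(self, o: Dual) -> Dual {
        Dual { val: self.val * o.val, der: self.der * o.val + self.val * o.der }
    }
    // Sum rule: (u + v)' = u' + v'.
    fn add(self, o: Dual) -> Dual {
        Dual { val: self.val + o.val, der: self.der + o.der }
    }
}

fn main() {
    // f(x) = x^2 + 3x, so f'(x) = 2x + 3 and f'(2) = 7.
    let x = Dual::variable(2.0);
    let f = x.mul(x).add(Dual::constant(3.0).mul(x));
    println!("f(2) = {}, f'(2) = {}", f.val, f.der); // f(2) = 10, f'(2) = 7
}
```

Swift's language-level support does essentially this (in reverse mode, and for arbitrary code) without the programmer writing the `Dual` plumbing.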
There are Rust bindings to TensorFlow.
Is Google working on Swift? Are you sure?
Recently I released Primeclue. It's not really deep learning, but it comes with a GUI and all that stuff, so it's essentially ready to use. And it's written in Rust.
IIRC only the automatic differentiation feature is built into the language, not the language at large, and the rest is libraries.
I am quite surprised to see Swift mentioned for machine learning.
As far as I know, the usual machine learning workflow is to prototype in Python or MATLAB. Once you know exactly what works for your application, implement that in a faster language (C, C++, Rust).
If I were to start something in that area, though, I would check what's available in the Julia programming language, which aims to be used both for prototyping and for the final implementation, has the needed mathematical syntax, is gaining traction, and counts machine learning among its focus areas.
At the beginning it was very exciting to see the idea of TensorFlow for Swift, as Chris Lattner was involved and they created a fork of Swift. Having the compiler developed with deep learning in mind, plus the support from Google, looked very nice. But now Chris Lattner is no longer working at Google, and it seems that the hype is over.
tract is actually looking pretty cool. It allows you to execute TensorFlow models in Rust that you build and export from any other language, such as Python. It might not be what you are looking for, but it's a valuable trick to be able to pull off if you ever need it.
Just wanted to make a plug for the tch-rs crate (disclaimer: I'm one of the main contributors). It provides bindings for the C++ PyTorch library, and so lets you run tensor computations on GPU and makes it easy to do automatic differentiation.
The API is very close to the Python PyTorch API, so porting models is usually fairly easy. A bunch of examples are available in the examples directory of the repo, among which:
- A minimal GPT implementation based on MinGPT.
- Various computer vision models: Inception, ResNet, DenseNet, EfficientNet, ...
- Relativistic GANs.
- Yolo v3 (object detection).
Pre-trained weights are available for some of the models, mostly converted from the original Python weights.