Is Rust good for deep learning and artificial intelligence?

Is Rust good for deep learning and artificial intelligence, just like Python?


No, it isn't. Python has better libraries for deep learning. (AI is a field wide enough that maybe Rust is better than Python for some problems, say, tree search. But deep learning is not one of them.)


Can you somehow port these libraries to Rust?


There are attempts, but for now there is no complete solution. A TensorFlow port seems to be under development, but it's not even close to the Python library. There are also Rust-specific libraries, but they are likewise far behind TensorFlow's capabilities.


One question: is there a way to use a bit of Python in deep learning and then use Rust for the most part (if that makes any sense)? Because I hate Python's syntax and would rather use Rust.


The Rust TF binding already seems to support loading a saved model, as in this example: rust/ at master · tensorflow/rust · GitHub. I don't know its exact status, but if it works for every model, you may simply create and train your network with Python, then export it and load it on the Rust side. Most Python DL frameworks allow extracting a TF model (they are mostly just overlays over TF).
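For reference, a minimal sketch of what loading such an exported model looks like with the tensorflow crate. The model directory and the operation names ("input", "output") are placeholders for whatever your Python export actually used:

```rust
// Hedged sketch: load a SavedModel exported from Python using the
// `tensorflow` crate (tensorflow/rust on GitHub) and run one inference.
use tensorflow::{Graph, SavedModelBundle, SessionOptions, SessionRunArgs, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut graph = Graph::new();
    // "serve" is the tag used by TensorFlow's standard SavedModel export.
    let bundle = SavedModelBundle::load(
        &SessionOptions::new(),
        &["serve"],
        &mut graph,
        "path/to/saved_model", // directory written by tf.saved_model in Python
    )?;

    // Feed an input tensor and fetch the output; these op names are
    // placeholders and depend on how the Python graph was defined.
    let input = Tensor::new(&[1, 4]).with_values(&[1.0f32, 2.0, 3.0, 4.0])?;
    let op_in = graph.operation_by_name_required("input")?;
    let op_out = graph.operation_by_name_required("output")?;

    let mut args = SessionRunArgs::new();
    args.add_feed(&op_in, 0, &input);
    let fetch = args.request_fetch(&op_out, 0);
    bundle.session.run(&mut args)?;

    let result: Tensor<f32> = args.fetch(fetch)?;
    println!("{:?}", &result[..]);
    Ok(())
}
```

Everything past the "serve" tag depends entirely on how the graph was built on the Python side, so treat this as a shape of the workflow rather than a recipe.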


As a language Rust would fit perfectly, but as others have said the crates are not there yet.


There exists an incredible amount of C++ supporting Python APIs (GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration, GitHub - tensorflow/tensorflow: An Open Source Machine Learning Framework for Everyone). In many cases this represents a phenomenal number of person-years. In the same way that Rust is a real-world option in other C++ domains, so it goes for machine learning.

Pretty much all real-world ML/AI projects consist of two paths:

  1. Low-level math (automatic differentiation, stats/probability, matrix algebra) and computation libraries (with an especial focus now on compilers..., GitHub - FluxML/Zygote.jl: 21st century AD).

  2. High-level APIs: Python, R, Julia, JavaScript (or any dynamically typed language with a first-class REPL)... to support ad-hoc data exploration and one-off scripts.

A Rust entry into the ML world would likely be much more on the low-level side, providing, for example, a Python API.

It's really a matter of community for Rust. It's worth noting that Google's own evaluation for the future of TensorFlow did include Rust as a strong possibility (but since Chris Lattner, of LLVM and Swift, was lead on the new team, it was not really a surprise when they picked Swift :wink: ). Here is a relevant quote from the released rationale:

Rust: We believe that Rust supports all the ingredients necessary to implement the techniques in this paper: it has a strong static side, and its traits system supports zero-cost abstractions which can be provably eliminated by the compiler. It has a great pointer aliasing model, a suitable mid-level IR, a vibrant and engaging community, and a great open language evolution process. [...] We next excluded C++ and Rust due to usability concerns...

Rust, technically, is a great choice for building ML/AI software... but it all comes down to ecosystem and community.

Here is one of the best examples I've come across for rust and ml: GitHub - maciejkula/sbr-rs: Deep recommender systems for Rust

IMO, with respect to Rust and AI, it's not productive to focus on ad-hoc data exploration and a REPL-like experience. Instead, building fantastic data engineering infrastructure (think Apache Spark in Rust, etc.) is really where I think Rust will shine (and I happen to know that Target [the retailer] is currently trying to build out some of its new data engineering infrastructure with Rust).

Also, low-level mathematical libraries. One problem here, though, is that you really want professional mathematicians involved in the community... and the number of mathematicians who can code is quite small, and of that number most prefer (or were trained on) MATLAB, Python, Haskell... picking up Rust is a tall order.

However, I think Rust has a great future in low-level mathematical libraries (and thanks to a lot of work supporting graphics and gaming, good linear algebra libs exist), and that is why I'm learning it. For example, I'm currently translating the code examples from this book (Grokking Deep Learning) into Rust, using nalgebra (and possibly ndarray).
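To give a taste of what those translations involve, here is a dependency-free sketch of a single dense layer with ReLU, the kind of forward pass the early chapters of that book build. Nested Vecs stand in for matrix types here; a real translation would use nalgebra's DMatrix or ndarray's Array2:

```rust
// Dependency-free sketch of one dense layer followed by ReLU.
// weights is a row-major weight matrix; input is the activation vector.
fn dense_relu(weights: &[Vec<f32>], input: &[f32]) -> Vec<f32> {
    weights
        .iter()
        .map(|row| {
            // Dot product of one weight row with the input vector.
            let sum: f32 = row.iter().zip(input).map(|(w, x)| w * x).sum();
            sum.max(0.0) // ReLU activation
        })
        .collect()
}

fn main() {
    let weights = vec![vec![0.5, -0.5], vec![1.0, 1.0]];
    let input = [2.0, 1.0];
    let out = dense_relu(&weights, &input);
    println!("{:?}", out); // [0.5, 3.0]
}
```

The point of moving to nalgebra/ndarray is that the libraries supply exactly this kind of matrix-vector product, plus broadcasting and BLAS-backed performance.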

Lastly, if you are looking to really get moving on something, PyTorch has committed some time to working on exported C++ models (Loading a TorchScript Model in C++ — PyTorch Tutorials 2.1.1+cu121 documentation). You could train in PyTorch/Python, export to C++, and then use Rust around that. Yikes (a lot of work across three languages)! But they do call out Rust explicitly...

For production scenarios, C++ is very often the language of choice, even if only to bind it into another language like Java, Rust or Go.

In reality, very few companies have the capital to support research and development on new tech this involved (e.g., teams of tens of PhDs in computer science and math building foundational software). And those companies, for whatever reason, have converged on C++ underlying Python APIs. This will change, and is changing, with Swift for TensorFlow; Julia's FluxML (partly supported by the small company Julia Computing) is also a bright spot in an otherwise dismally homogeneous landscape.

There are no other companies at the scale of IBM, Google, Facebook, Salesforce, Uber, and Microsoft [major companies at the forefront of much of the R&D for ai/ml] that are investing in software, platforms, compilers, or mathematical libraries for Rust in ai/ml... at least not yet. And unfortunately, building foundational ai/ml software is not really something that can happen with a few people working on personal time (at least not within a reasonable time-frame).


I hope this comes to Rust one day.

I know, right? I couldn't agree with you more. I want to get into deep learning, machine learning, AI, etc., but I didn't want to use something like Python, which is much slower than Rust and whose syntax is really bad IMO. I just don't like its indentation rules and the lack of semicolons to end lines. So I hope Rust can one day have everything for deep learning, etc., to the full extent.

Oh, that's nice. I will have a look into that.


I share this sentiment. But the reality is that Python APIs are where all the latest developments come from. The majority (if not all) of people doing real work on AI/ML are research scientists at big companies. Their concern is more with getting ideas into code than with the code itself... I doubt they see much of, or are really concerned with, deployment and/or productization.

I didn't choose Python, and I try not to get too annoyed that there really isn't another choice (Julia is probably the best alternative), instead focusing on what a more diverse programming landscape looks like for ML/AI and what that diversity can bring to how we think about solving certain problems.

Various attempts at deep learning libs exist in Go, Haskell, Rust, F#... and they all share one thing in common: not enough help, too much work; the creators are simply overwhelmed.

I think for Rust the path to ML/AI and computational mathematics is through supporting existing community focus, to push adoption and gain attention at the large corporations currently doing R&D in AI/ML software. If Rust can get established in some areas with a corporation X, then it's easier to cross-pollinate teams.

I forgot to mention Amazon above; they are likely one of the ML/AI companies most likely to adopt Rust in this domain. Again, though, it's likely to be only new projects, which will also likely have a Python API. lol

But the Rust community also needs to actually show an interest. I initially had high hopes that Go would become a viable alternative to Python in the AI/ML landscape (specifically, I had planned out a natural language processing framework)... but it was clear around 2016 that the Go community was not really made up of the kind of people who do machine learning work. There is still a small dedicated community, but it is nothing like what you see in Python.


Also, FYI: once I've worked through the book examples in Grokking Deep Learning and my Rust examples are done, I'm going to post them as a PR to his project, and I'll advertise them here too.

Lastly, if you are looking to get into the ml/ai/deep-learning area I highly recommend Andrew Trask's book.


I hope that happens in the near future.


I'm mostly involved with scientific computing / applied mathematics and would love to use more Rust libraries.

I don't think Rust as it is built is the perfect choice for the messy, iterative process scientists go through. Scientific Computing mentions a basic REPL and a Jupyter kernel, but quick interaction at the top level is not Rust's strong suit.

However, the Python ecosystem today is not so much Python as C/C++ libraries with a Python API. Rust could really be an excellent choice for a next generation of core tools in linear algebra (replacing LAPACK, BLAS, ARPACK) and optimization solvers (most of which are not only in C or C++, but also commercial and closed-source, especially for integer optimization).

Even though I'm not heavily involved in ML, the ability to differentiate functions through automatic differentiation is pretty crucial and has to be enabled at the compiler level. Two interesting links for those interested:
ML as a compiler problem in Julia
AD in Swift
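The core idea is easy to play with at the library level even before it reaches the compiler: forward-mode AD via dual numbers is a few lines of plain Rust. This is a toy sketch; Zygote and the Swift proposal apply the same principle at the compiler/IR level:

```rust
// Toy forward-mode automatic differentiation with dual numbers.
// Each value carries its derivative alongside it.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Dual {
    val: f64, // function value
    der: f64, // derivative with respect to the input variable
}

impl Dual {
    // Seed an input variable: d(x)/dx = 1.
    fn var(x: f64) -> Self {
        Dual { val: x, der: 1.0 }
    }
    // Product rule: (fg)' = f'g + fg'.
    fn mul(self, o: Dual) -> Self {
        Dual { val: self.val * o.val, der: self.der * o.val + self.val * o.der }
    }
    // Sum rule: (f + g)' = f' + g'.
    fn add(self, o: Dual) -> Self {
        Dual { val: self.val + o.val, der: self.der + o.der }
    }
}

fn main() {
    // f(x) = x*x + x, so f'(x) = 2x + 1; at x = 3: f = 12, f' = 7.
    let x = Dual::var(3.0);
    let f = x.mul(x).add(x);
    println!("f = {}, f' = {}", f.val, f.der);
}
```

The compiler-level versions linked above do the same bookkeeping on the program's intermediate representation, which is why they can differentiate arbitrary user code without wrapper types.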


I don’t know its exact status, but if it works for every model, you may simply create and train your network with python, and then export it and load on Rust side.

Yes, that works, and I have been doing this for at least a year. However, you can also train in Rust. Since it is not very convenient to define a graph directly in protobuf, it is best to use Python to build the TensorFlow graph. Once you have set up the graph, you can serialize it and then load it in Rust. Training is then a matter of feeding the data through placeholders and calling the training op that you have defined in the graph.
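A rough sketch of what such a training step looks like with the tensorflow crate. The operation names ("input", "target", "train") are placeholders for whatever the Python side named them:

```rust
// Hedged sketch: one gradient-update step against a graph that was built
// and serialized in Python, then loaded into `graph`/`session` in Rust.
use tensorflow::{Graph, Session, SessionRunArgs, Status, Tensor};

fn train_step(
    session: &Session,
    graph: &Graph,
    xs: &Tensor<f32>, // batch of features, fed into the "input" placeholder
    ys: &Tensor<f32>, // batch of labels, fed into the "target" placeholder
) -> Result<(), Status> {
    let mut args = SessionRunArgs::new();
    args.add_feed(&graph.operation_by_name_required("input")?, 0, xs);
    args.add_feed(&graph.operation_by_name_required("target")?, 0, ys);
    // Running the training op (e.g. the one returned by optimizer.minimize
    // in Python) applies one update to the graph's variables.
    args.add_target(&graph.operation_by_name_required("train")?);
    session.run(&mut args)
}
```

Looping this over batches is the whole training loop; everything model-specific stays in the Python file that defined the graph.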

I do this in various projects. For example, in my dependency parser, here is the Python code that defines the graph:

Here is the Rust code that loads the graph and performs training/prediction:

For me this is much more convenient than training in Python, since it minimizes the code in Python-land.


To answer the question of the topic starter: there are some solid foundational libraries, such as ndarray, petgraph, the Tensorflow binding, etc. But you have to be prepared to do quite a bit of heavy lifting yourself. It is definitely not comparable to the Python or C++ ecosystems yet.

I have implemented a (neural) part-of-speech tagger, dependency parser, and an implementation for training word embeddings (akin to fastText [1]), but I had to implement most things from scratch. I also got quite an improvement over ndarray by writing custom linear algebra functions using SIMD intrinsics.
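To give a flavor of what such hand-written kernels involve, here is a portable, dependency-free sketch of accumulator splitting, one of the basic tricks: keeping several independent sums in flight lets the compiler pipeline (and often auto-vectorize) the loop. The custom kernels mentioned above used explicit SIMD intrinsics, which take the same structure further:

```rust
// Dot product with a 4-way split accumulator. The four partial sums have
// no dependency on each other, so the compiler can compute them in
// parallel lanes; explicit std::arch intrinsics make that guarantee firm.
fn dot_unrolled(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let mut acc = [0.0f32; 4];
    let chunks = a.len() / 4;
    for i in 0..chunks {
        for lane in 0..4 {
            let j = i * 4 + lane;
            acc[lane] += a[j] * b[j];
        }
    }
    // Handle tail elements that don't fill a block of four.
    let mut sum = acc.iter().sum::<f32>();
    for j in chunks * 4..a.len() {
        sum += a[j] * b[j];
    }
    sum
}

fn main() {
    let a: Vec<f32> = (0..10).map(|i| i as f32).collect();
    let b = vec![1.0f32; 10];
    println!("{}", dot_unrolled(&a, &b)); // 45
}
```

Note that splitting the accumulator changes floating-point summation order, which is exactly the kind of trade-off a generic library like ndarray cannot make for you.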



I'm a bit late to this topic, but I'd just like to emphasize what's already been said on the subject. In fact, a great share of it resembles what I said last year about Rust and its stance in data science.

  • When it comes to TensorFlow, the Python library will always be the most complete and reasonable choice for building the models. You may then serve the models through Rust using the bindings already mentioned (disclaimer: I contributed the saved model API to tensorflow/rust). For the time being, this ought to be a reasonable path forward. I once heard that there were some third-party initiatives to create high-level abstractions on top of TensorFlow, but I cannot speak to their quality. It will also be pretty hard for them to keep up with the Python API.
  • Integration with existing frameworks is indeed an important concern. While it's a huge effort to create a new ML tool from scratch, we should be able to: (1) bring existing frameworks into a Rust ecosystem; (2) contribute to existing tools with Rust code.
  • Performance has already been mentioned, but let's not forget that even the most popular Python libraries for deep learning are either implemented internally in close-to-the-metal languages or already take advantage of GPU processing and vectorization, making any potential overhead from using Python as the user-facing API close to negligible. Making graph-based computation fast and efficient is, as also mentioned around here, not as simple as changing the compiler. I find greater value in Rust's type system here, allowing us to make fewer mistakes when specifying the various layers of a neural network without compromising performance.
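To make that type-system point concrete, here is a hypothetical sketch using const generics: encoding layer dimensions in the types turns a mismatched layer chain into a compile-time error rather than a runtime shape mismatch. The `Dense` type here is an illustration, not from any existing crate:

```rust
// A dense layer whose input/output sizes are part of its type.
// Weights are just a placeholder array for the sketch.
struct Dense<const IN: usize, const OUT: usize> {
    weights: [[f32; IN]; OUT],
}

impl<const IN: usize, const OUT: usize> Dense<IN, OUT> {
    // The signature alone guarantees the shapes line up.
    fn forward(&self, x: [f32; IN]) -> [f32; OUT] {
        let mut out = [0.0; OUT];
        for (o, row) in out.iter_mut().zip(self.weights.iter()) {
            *o = row.iter().zip(x.iter()).map(|(w, xi)| w * xi).sum();
        }
        out
    }
}

fn main() {
    let l1: Dense<3, 2> = Dense { weights: [[1.0; 3]; 2] };
    let l2: Dense<2, 1> = Dense { weights: [[1.0; 2]; 1] };
    // Output size of l1 (2) matches input size of l2 (2), so this compiles.
    let y = l2.forward(l1.forward([1.0, 2.0, 3.0]));
    println!("{:?}", y); // [12.0]
    // l1.forward([1.0, 2.0]) would be rejected at compile time:
    // a [f32; 2] is not a [f32; 3].
}
```

In a dynamically typed framework the equivalent mistake surfaces only when the first batch hits the mismatched layer.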

From the readme:

Here is a major workflow:

  1. Train your deep learning model using any major framework such as PyTorch, Apache MXNet, or TensorFlow.
  2. Use TVM to build optimized model artifacts on a supported context such as CPU, GPU, OpenCL, Vulkan, VPI, ROCm, etc.
  3. Deploy your models using Rust :heart:


I'm not a programming guru, so I apologize if this is a naive question, but aren't most of the ML libraries in Python built on top of C++ linear algebra libraries? So outside of a preference for Rust's syntax, is there really that much overhead when it comes to Python? I understand the language itself has overhead, but as far as the training process goes, would there be a noticeable difference between Python and Rust? I know that with CUDA and cuDNN most of the loss calculation and optimization now happens directly on the GPU, so my thought is that they would perform similarly, if not identically, when it comes to the learning process. Again, I am not a computer science expert, especially when it comes to low-level stuff, so I'm curious whether Rust would really be that much more efficient and, if so, why.


You are correct: if most DL projects are just using C++, Fortran, CUDA, etc., then in many cases simple benchmarking is a wash. Mature and well-known ecosystems will be preferred (e.g., C++ and Python). For standard, boring, vanilla deep learning (DL) or machine learning (ML) projects this is not a problem. The problems we are starting to see now are ecosystem- and compiler-related.

Not much in the DL world is standard or vanilla... it's evolving rapidly, and as in much of ML and mathematical computing in general, the working state of an ecosystem for rapid development is an issue. Getting to the "training" or "solver" part of your project is typically the last step in a long process. Additionally, new problems or domains may need slightly different approaches... requiring you to extend the platform that is already there. Having to context-switch, or pass requirements to another team, in order to update an existing C++ API to support your needs is time consuming. Additionally, IMO, we're limiting the pool of contributors by limiting the stack to Python and C++, both in terms of company diversity (I don't want to use Python or C++ for deep learning at my company) and in terms of personal diversity (I'd have to be proficient in both Python and C++, and I don't want to be). I'd rather just do it all in Rust... and exposing a Python or Go API is a great addition, not a necessity. So from my perspective, democratizing AI is about having mature, high-quality software in a number of programming languages.

In short, it's nice to have all parts of your project in the same language: fast, concurrent, parallelizable. These are nice but not necessary. As it turns out, what everyone is finding necessary is having the same richness of types and expressivity from the high-level API all the way down to the compiler. We need an ecosystem that can support a rich set of types that can easily be shared between projects. The vanilla Python world supports this (scikit-learn is a bunch of projects with common interfaces), but it's not guaranteed to hold when one needs to reach for Cython or Numba, or call down to a C++ API...
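In Rust terms, a scikit-learn-style common interface maps naturally onto a trait. A hypothetical sketch (the `Estimator` trait and the toy mean-predictor are illustrations, not from any existing crate):

```rust
// A common interface playing the role scikit-learn's fit/predict
// convention plays in Python: any model implementing it is swappable.
trait Estimator {
    fn fit(&mut self, x: &[f32], y: &[f32]);
    fn predict(&self, x: &[f32]) -> Vec<f32>;
}

/// Toy baseline: predicts the mean of the training targets,
/// ignoring the features entirely.
struct MeanPredictor {
    mean: f32,
}

impl Estimator for MeanPredictor {
    fn fit(&mut self, _x: &[f32], y: &[f32]) {
        self.mean = y.iter().sum::<f32>() / y.len() as f32;
    }
    fn predict(&self, x: &[f32]) -> Vec<f32> {
        vec![self.mean; x.len()]
    }
}

fn main() {
    let mut m = MeanPredictor { mean: 0.0 };
    m.fit(&[1.0, 2.0, 3.0], &[2.0, 4.0, 6.0]);
    println!("{:?}", m.predict(&[10.0, 20.0])); // [4.0, 4.0]
}
```

Unlike Python's duck-typed convention, the trait bound is checked by the compiler, so "common interface" is a guarantee rather than a habit.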

Chris Rackauckas (author of the DifferentialEquations.jl project in Julia) has a good blog post about this, why-numba-and-cython-are-not-substitutes-for-julia, inspired by all the times he had to answer this exact question. It is just as relevant to Rust's "zero-cost abstractions" as it is to Julia's "two-language problem"... (a philosophical side note: solving the same problem in different programming languages can often yield different and interesting solutions... so restricting DL to Python/C++ restricts, ironically, the solution space for ML itself).

For example, if you want to make Python really fast, use Numba. But Numba achieves its speed by restricting Python's type system. Likewise, in TensorFlow (TF), the low-level graph API supports a very limited type system (see the TF core framework types protobuf schema). In short, the expressiveness of Python must be limited in order to make it fast (see Rackauckas' blog post above for detailed examples).

More specifically, we want the ability to pass our custom types to the compiler itself. This is more difficult in a two-language setup. And this is exactly why TF started the Swift for TensorFlow project. It is not a language binding for Swift; it is basically a new project, compiler and all, in Swift. No more "two-language" problem [Julia has been doing the same thing; Flux.jl and Zygote.jl are ahead of all the big innovators here]. In fact, Rust was one of the languages Google TF "evaluated" for the project [double quotes because once it was known that Google hired Lattner to lead the team, it was pretty clear he was going to pick Swift :wink: ... TBH, I tried using Swift, and the tooling, debugging, math support, and dependencies on Objective-C are super disappointing; it kind of blows my mind that they passed on Rust :frowning: , the implication being that Swift is more "usable", easier to learn, and more "pythonic" than Rust... :man_shrugging: ], but it shows, and they discuss this, that Rust has all the technical merits for building projects like TF: Why Swift for TensorFlow - Which languages fit our project requirements. The Swift team is now at the point where they want to build automatic differentiation (a key piece of current gradient-based DL) directly into the compiler: Swift Automatic Differentiation Proposal.

Much of this work could be happening in Rust; it's just a matter of focus, people, and community... but also of a mega-evil-corp backer... or something like Julia's model: they have a small company called Julia Computing focused on generating money through consulting and grant applications, and then using the revenue to support open source.

You can google "two-language problem julia" for some good presentations. Julia is not the only one solving the "two-language" problem; in fact, Rust's "zero-cost abstractions" are a version of that.

Python/C++ machine learning projects are here to stay. It's important to note they are funded by Google (TF) and Facebook (PyTorch), among others. They are not trivial pieces of software. The likelihood Rust finds its footing soon is small, due mostly to funding. People want to get paid, and few companies have the resources to fund R&D like deep learning... and those companies are entrenched in C++. But good, consistent work in Rust for mathematics, algorithms, and data engineering is happening, and once (or if) some major corporations show big enough backing in Rust, it's easy to see Rust making its way into more of the machine learning stack!


Someone should work on "Rust for TensorFlow" modeled after "Swift for TensorFlow" project. This would be a great crowdfunding project if the team could be assembled.