How Can I Learn AI with Rust?

Hello, forum members! I have a question about learning AI using the Rust programming language.

I previously learned Python and machine learning during my time at Microsoft in 2022, but I find it quite challenging to fully grasp the intricacies of AI and machine learning. Since I am familiar with Rust and find its performance and safety features appealing, I would like to know if there are any resources or recommendations for learning AI concepts with Rust.

Could anyone here share their experiences, tutorials, or any other helpful resources that could guide me in this journey? I would greatly appreciate any advice or pointers you could offer.

Thank you in advance for your help!

This is a good place to start:


It might help if you specify what you mean by AI, and what your main objective is. If your main objective is to learn the concepts of, say, deep learning, you may be better off using PyTorch, where there is excellent documentation for beginners and the language does not get in the way.


Rust does not make ML concepts any easier to grasp than Python.

Python's advantages are:

  • It's the default language. Lots of examples and tutorials, and even whole courses, exist.
  • You can get cloud instances with accelerators (~= GPUs) for free, so experimentation and sharing are easier.
  • Notebooks make data visualization easier. One of the challenges with ML is being able to tell what the heck is going on with your model.

Rust's usual advantages, safety and performance of CPU-bound code, are unimportant in ML, since Python is memory-safe and can run on specialized hardware. Running ML models on CPUs is simply too slow.


Another usual Rust advantage is the rich type system. But I suspect this doesn't help much in machine learning either, since our carefully human-crafted abstractions about the problem space are the very thing ML does away with.


Is the website user-friendly for beginners starting from scratch? I visited the website, but I couldn't understand it.

I am a beginner starting from scratch to learn AI, but I am confused about which direction to pursue. Do you have any knowledge about machine learning? Thank you for mentioning Deep Learning. I will try to learn PyTorch starting from scratch.

Thank you for the two messages, but I'm not sure I understand them fully. Let me think about it. Perhaps it's not very useful for me to learn AI and Machine Learning in Rust? I hope I haven't misunderstood your intention. I don't know how I can learn AI and Machine learning starting from scratch.

I checked online that AI has seven main directions, including Machine Learning, Deep Learning, Natural Language Processing, Computer Vision, Robotics, Knowledge Representation & Reasoning, and Data Mining & Analytics.

In my opinion, if you are trying to learn about machine learning (which is what nowadays most people mean by AI) from scratch, use Python (for the reasons jorendorff mentioned in his first post).

Pick an application area that interests you (e.g., robotics, NLP), and start from there. Try to solve a simple problem in that field by following a tutorial.

Alternatively, just start like most people do: follow an image classification tutorial. That is the “hello world” example of machine learning.


My plan is to get a grip on how to write a neural network from scratch and get the backpropagation and training working. Using Rust, of course.

Your question prompted me to look around and I was pleased to find this series on exactly that: Deep Neural Network from Scratch in Rust 🦀- Part 1- Basics of Neural Network | by Akshay Ballal | Medium

Of course that is unlikely to be of practical use.


Okay. Thank you. I was afraid that perhaps mentioning Python might be inappropriate in the Rust Forum.

When I tried to learn Machine Learning with Python in 2022, I felt it was very difficult, and I couldn't get it.

I don't know how to learn Machine Learning from scratch, but I want to know if I can do it.

Okay. Thank you.

No reason you can't use Rust if you are going back to scratch.

You may be interested in Hinton's Coursera course, which is free and language-agnostic.

Hinton is one of the researchers who proposed backpropagation, and ironically one of the people who is now least happy with it, because it is biologically implausible: it works, but it is not how the brain works. There are alternatives to backpropagation.

If you dig around you'll see that Hinton does all his programming in MATLAB; he won the Turing Award, but to this day never bothered to learn Python. I know he used C when he first started out, but I think he never looked back after MATLAB came along.

Deep learning is all about linear algebra: matrix multiplication, and farming out the work to available cores and GPUs. In practice, Python (with NumPy and PyTorch) has better support for that.
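To make that concrete, here is what the core operation looks like written out by hand in Rust. This is a toy sketch, not how real frameworks do it (they hand this off to BLAS routines or GPU kernels):

```rust
// Naive dense matrix multiply: c = a * b, where a is (n, k) and
// b is (k, m), both stored row-major in flat slices.
fn matmul(a: &[f64], b: &[f64], n: usize, k: usize, m: usize) -> Vec<f64> {
    let mut c = vec![0.0; n * m];
    for i in 0..n {
        for j in 0..m {
            let mut sum = 0.0;
            for p in 0..k {
                sum += a[i * k + p] * b[p * m + j];
            }
            c[i * m + j] = sum;
        }
    }
    c
}

fn main() {
    // Multiplying by the 2x2 identity should give back the input.
    let a = [1.0, 2.0, 3.0, 4.0];
    let id = [1.0, 0.0, 0.0, 1.0];
    let c = matmul(&a, &id, 2, 2, 2);
    assert_eq!(c, vec![1.0, 2.0, 3.0, 4.0]);
    println!("{:?}", c);
}
```

The triple loop is the whole story conceptually; everything else in a deep-learning framework is about making this one operation fast.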

Also, if you use backpropagation to train your networks, you need automatic differentiation, because you cannot do it by hand for large networks. There is a Rust crate that does that (dfdx), but again Python has better support.
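For intuition about what automatic differentiation buys you: at toy scale you can still differentiate by hand and check the result against a finite-difference estimate. A sketch for a one-weight squared-error "model" (the function names here are made up for illustration):

```rust
// f(w) = (w * x - y)^2, the squared error of a one-weight "model".
fn loss(w: f64, x: f64, y: f64) -> f64 {
    let e = w * x - y;
    e * e
}

// Hand-derived gradient: df/dw = 2 * (w * x - y) * x.
fn grad_analytic(w: f64, x: f64, y: f64) -> f64 {
    2.0 * (w * x - y) * x
}

// Central finite difference: (f(w + h) - f(w - h)) / (2h).
fn grad_numeric(w: f64, x: f64, y: f64) -> f64 {
    let h = 1e-6;
    (loss(w + h, x, y) - loss(w - h, x, y)) / (2.0 * h)
}

fn main() {
    let (w, x, y) = (0.5, 2.0, 3.0);
    let ga = grad_analytic(w, x, y);
    let gn = grad_numeric(w, x, y);
    // The two estimates should agree closely.
    assert!((ga - gn).abs() < 1e-4);
    println!("analytic = {ga}, numeric = {gn}");
}
```

Deriving `grad_analytic` by hand is easy for one weight; for millions of weights through many layers it is hopeless, which is exactly the gap autodiff fills.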


I got it. Thank you very much.

When I was 15 I was into electronics and was totally fascinated by the article "The Search For Intelligent Machines (Do we want them and can we build them?)" in the 1971 edition of Wireless World magazine, where perceptrons were described.

Nobody had computers at the time, so if one wanted to play one would have had to use analog electronics. That was all too much for me, so I put it to the back of my mind.

Many years later (in the 1990s) I read a book about artificial intelligence that described Hopfield networks. I'm sure it included an algorithm for "Hopfield back propagation", but nowadays I find no trace of such a thing.

As such it amazes me that 50 years later the offspring of perceptrons burst onto the scene in such a big way in such a short time.

So the curiosity is still with me. If I can achieve a simple network with a few layers of a few neurons that can be trained to do some simple thing, all built from scratch (no automatic differentiation), in Rust, I would be happy. I like to think I can handle the linear algebra; we programmed matrix multiplication algorithms in BASIC back in the late 1970s.
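For what it's worth, at that scale the backpropagation derivatives really can be worked out by hand. A minimal sketch, training one sigmoid neuron on the AND function in plain Rust (no crates; the learning rate and epoch count are just guesses that happen to work):

```rust
// One sigmoid neuron trained on AND with a hand-derived gradient.
// For E = 0.5 * (y - t)^2 and y = sigmoid(z), dE/dz = (y - t) * y * (1 - y).
fn sigmoid(z: f64) -> f64 {
    1.0 / (1.0 + (-z).exp())
}

fn train_and() -> (f64, f64, f64) {
    let data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 0.0),
                ([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)];
    let (mut w1, mut w2, mut b) = (0.0, 0.0, 0.0);
    let lr = 0.5;
    for _ in 0..10_000 {
        for &([x1, x2], t) in &data {
            let y = sigmoid(w1 * x1 + w2 * x2 + b);
            let d = (y - t) * y * (1.0 - y); // gradient at the pre-activation
            w1 -= lr * d * x1;
            w2 -= lr * d * x2;
            b -= lr * d;
        }
    }
    (w1, w2, b)
}

fn main() {
    let (w1, w2, b) = train_and();
    // The trained neuron should approximate AND.
    assert!(sigmoid(w1 + w2 + b) > 0.8); // 1 AND 1 -> high
    assert!(sigmoid(w2 + b) < 0.2);      // 0 AND 1 -> low
    println!("w1 = {w1:.2}, w2 = {w2:.2}, b = {b:.2}");
}
```

A multi-layer version is the same idea repeated: apply the chain rule once per layer, which is tedious but perfectly doable by hand for a few small layers.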

Getting back to my roots I'd love to be able to build it as a network of operational amplifiers with resistor values taken from a trained model. But perhaps I'm dreaming.


Thank you for sharing your story. It is very interesting.

Hopfield networks are similar to Boltzmann machines, and those are still around: autoencoders, image recognition, Netflix recommendations.

You absolutely can do matrix multiplication "manually".
You can even use rayon or some such crate to parallelize the calculations; there is a lot of "uncomplicated" opportunity for that. It won't be faster than Python, though 🙂 (NumPy relies on BLAS/LAPACK, old Fortran libraries, and automatically uses multiple cores when available).

What I like about Hinton's MATLAB code is that you can see how it is working. If you do a Jupyter notebook tutorial with TensorFlow/PyTorch, I think too much is hidden: you solve the problem, but you avoid the nitty-gritty and are left wondering how exactly it worked.

Ultimately I think we have to get back to "amplifiers and resistors"; it is probably the only way to do energy-efficient computing. E.g., the brain runs on ~20 W, while an NVIDIA GPU uses more than an order of magnitude more. Datacenters are expensive.

Not many people explore that, but I think it is picking up again; e.g., Intel has their Loihi chip:

Hinton himself has talked about "mortal computing": analog computers, energy-efficient like the brain. The downside is that programs cannot be copied from one computer to another; you have to "mentor" another computer to do the same task.
