Rust advantage over C and Python in a parallel approach


First of all, hi everyone, I am new to this forum and I am just starting my journey with Rust :slight_smile:
I’m preparing a project for the “Parallel and Distributed Programming” subject at my university. My main goal is to point out areas where Rust is better than C, C++, or Python (in multithreading, parallelism, etc.) by providing some examples. The purpose of this is to convince my lecturer that we could include Rust among the languages we use at the uni, especially for this subject.
As I am just starting to “Rust around”, I will appreciate any suggestions, links, ideas, etc.
I know, for instance, that in Rust we don’t have to worry about data races, but I’m not sure which algorithm or task to implement to show a significant difference from a C or Python implementation.

Thanks in advance for any help,
young Rustacean


In my experience, there are two different ways to advertise a parallel programming environment, which can be combined for optimal effectiveness:

  • Show how incorrect constructs can be detected at compile-time or run-time instead of remaining dormant in the code until a race exposes them. One way to do this is to take textbook examples of incorrect parallel programs (e.g. the classic shared integer counter data race) and show that the error is automatically detected.
  • Show how easy your environment makes it to write parallel programs. In a Rust context, demonstrating use of Rayon for data-parallel computations on a container would probably be a good example.


Thanks for suggestions, I am checking Rayon right now, looks like a great place to start :wink:


I’ve had a great experience with the latter of @HadrienG’s two points, and part of that experience is that point 1 simply never came up: no data-race bugs, and no fear of them either. I’ll share a little bit of my experience:

I work with a lot of numeric stuff – image reconstruction and imaging physics simulations. I’ve recently put together a very simple numeric utilities library (containing basic functions like linspace, nearest-neighbor and linear interpolators, etc.) as well as some basic reconstruction and simulation code for the imaging work I do. I wrote the serial versions first. Later, when I wanted to parallelize, it was surprisingly painless, with a lot of “it just worked.”

I first used crossbeam’s scoped threads (manual parallelization), mostly by safely chunking mutable vectors with e.g. .chunks_mut and dispatching each chunk to a thread to be updated via in-place mutation. Manual parallelization can be nice when you know all of the parallel computations will take about the same amount of compute time and you don’t have to worry about conflicting levels of parallelization in your call hierarchy.

Moving on to Rayon, however, was really awesome. Some of my code worked with a simple par_iter drop-in replacement, but it’s also really straightforward to use the raw fork-join mechanism with .split_at and friends for cases that don’t quite fit that paradigm immediately (e.g., in my linear interpolation implementation).

Finally, for what it’s worth: my typical setup is to have a Python layer grab configuration and do other I/O, then call into the Rust code for all of the computation (using cffi with Rust compiled as a dynamic library). Visualization, etc. is then easily done back in Python. It’s been a nice experience. I mention it since you say you already work with Python and C.
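The Rust side of such a setup can be quite small. A sketch (the function name is illustrative, not from my actual code): build the crate with crate-type = "cdylib" and load the resulting library from Python via cffi, exactly as you would a C shared library.

```rust
/// Mean of a C double array. The unmangled C symbol is what Python's
/// cffi looks up in the compiled cdylib.
#[no_mangle]
pub extern "C" fn mean(ptr: *const f64, len: usize) -> f64 {
    // Safety: the caller (the Python layer) must pass a valid pointer
    // to `len` readable f64 values, with len > 0.
    let data = unsafe { std::slice::from_raw_parts(ptr, len) };
    data.iter().sum::<f64>() / len as f64
}

fn main() {
    // Quick check from Rust itself; in the real setup there is no main --
    // the crate is built as a dynamic library and driven from Python.
    let v = [1.0, 2.0, 3.0, 4.0];
    println!("{}", mean(v.as_ptr(), v.len()));
}
```

Because the exported surface is a plain C ABI, Python neither knows nor cares that the heavy lifting inside is parallel Rust.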