I’ve had a great experience with the latter of @HadrienG’s two points, part of which is that point 1 simply never came up (nor did I end up fearing it). I’ll share a little bit of my experience:
I work with a lot of numeric stuff – image reconstruction and imaging physics simulations. I recently put together a very simple numeric utilities library (containing basic functions like linspace, nearest-neighbor and linear interpolators, etc.), as well as some basic reconstruction and simulation codes for the imaging work I do. I wrote the serial versions first. Later, when I wanted to parallelize, it really was surprisingly painless, with a lot of “it just worked.”
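To give a sense of the scale of these utilities (this isn’t my actual library code, just a minimal sketch of what a NumPy-style linspace might look like in Rust):

```rust
/// Minimal sketch of a linspace utility: `n` evenly spaced values
/// from `start` to `stop`, inclusive of both endpoints.
fn linspace(start: f64, stop: f64, n: usize) -> Vec<f64> {
    assert!(n >= 2, "need at least two points to include both endpoints");
    let step = (stop - start) / (n - 1) as f64;
    (0..n).map(|i| start + step * i as f64).collect()
}
```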
I first used crossbeam’s scoped threads for manual parallelization, mostly safely chunking mutable vectors with e.g. .chunks_mut and dispatching each chunk to a thread to be updated via in-place mutation. Manual parallelization can be nice when you know all of the parallel computations will take about the same amount of compute time and you don’t have to worry about conflicting levels of parallelism in your call hierarchy.
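A minimal sketch of that pattern (the per-chunk workload here is just a placeholder, not my reconstruction code), using crossbeam’s scoped-thread API:

```rust
use crossbeam::thread;

fn main() {
    let mut data = vec![0.0f64; 1_000_000];
    let n_threads = 4;
    // Round up so every element lands in some chunk.
    let chunk_len = (data.len() + n_threads - 1) / n_threads;

    // Scoped threads may borrow chunks of `data` because the scope
    // guarantees every thread joins before `data` is touched again.
    thread::scope(|s| {
        for (i, chunk) in data.chunks_mut(chunk_len).enumerate() {
            s.spawn(move |_| {
                // Placeholder workload: fill the chunk in place.
                for (j, x) in chunk.iter_mut().enumerate() {
                    *x = (i * chunk_len + j) as f64;
                }
            });
        }
    })
    .unwrap(); // propagate any panic from a worker thread
}
```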
Moving on to rayon, however, was really awesome. Some of my code worked with a simple par_iter drop-in replacement, but it’s also really straightforward to use the raw fork-join mechanism with .split_at and friends for cases that don’t immediately fit that paradigm (e.g., my linear interpolation implementation).
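For illustration, here is roughly what both flavors look like; these are toy workloads of my own invention, not the interpolation code (and the in-place version needs .split_at_mut, the mutable sibling of .split_at):

```rust
use rayon::prelude::*;

// Drop-in replacement: `iter` becomes `par_iter` and rayon
// handles the work distribution.
fn sum_of_squares(xs: &[f64]) -> f64 {
    xs.par_iter().map(|x| x * x).sum()
}

// Raw fork-join: recursively split the slice and hand each half
// to `rayon::join`, falling back to a serial loop on small inputs.
fn double_in_place(xs: &mut [f64]) {
    const SEQUENTIAL_CUTOFF: usize = 1024; // arbitrary tuning knob
    if xs.len() <= SEQUENTIAL_CUTOFF {
        for x in xs.iter_mut() {
            *x *= 2.0;
        }
    } else {
        let mid = xs.len() / 2;
        let (lo, hi) = xs.split_at_mut(mid);
        rayon::join(|| double_in_place(lo), || double_in_place(hi));
    }
}
```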
Finally, for what it’s worth: my typical setup is a Python layer that grabs configuration and does other I/O, then calls into the Rust code for all of the computation (using cffi, with Rust compiled as a dynamic library). Visualization and the like are then easily done back in Python. It’s been a nice experience. I mention this since you already work with Python and C.
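As a rough sketch of that boundary (the function name and signature here are made up for illustration): the Rust side exports a C-ABI function and is built as a cdylib, and the Python side declares the same signature via cffi’s ffi.cdef and dlopens the library:

```rust
// In Cargo.toml, under [lib]: crate-type = ["cdylib"].
//
// The matching cffi declaration on the Python side would be:
//   ffi.cdef("void scale_in_place(double *data, size_t len, double factor);")

#[no_mangle]
pub extern "C" fn scale_in_place(data: *mut f64, len: usize, factor: f64) {
    if data.is_null() {
        return; // be defensive about a bad pointer from the FFI caller
    }
    // SAFETY: the caller (Python, via cffi) must pass a valid pointer
    // to `len` contiguous f64 values.
    let xs = unsafe { std::slice::from_raw_parts_mut(data, len) };
    for x in xs.iter_mut() {
        *x *= factor;
    }
}
```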