Using rayon to implement some common parallel patterns

Hello!
I am working on a project where I try to automatically identify parallelization opportunities in Rust code and then generate recommendations for implementing them with known patterns. I am basing this on a previous project that auto-generates OpenMP directives, for example above the loop head for doall/reduction patterns. I am trying to do the same with rayon for Rust.

In particular, I want to use rayon to implement a simple loop reduction pattern.

for _ in 0..iterations {
    a = a + 1;
    b = b * 5;
    c = c - 2;
    println!("{a}");
}

For example, the loop above could be trivially parallelized in OpenMP with

#pragma omp parallel for reduction(+:a) reduction(*:b) reduction(-:c)

With rayon I would do it for a single variable by generating a range:


let sum = (0..iterations)
    .into_par_iter()
    .map(|_| 1)
    .reduce(|| 0, |x, y| x + y);

However, I had trouble doing it with multiple variables and one iterator. Would I need a separate piece of code like the above for every variable that I want to reduce? As I understand it, I cannot return a tuple from a one-dimensional collection that was turned into a parallel iterator?

Additionally, what if I want to execute some other code within each iteration, like the print in the example above? Would I chain a for_each after the reduce?

Another issue I have faced: when I try to write to an array inside the for_each closure, I get this error:

let mut arr = vec![1; 100];
(0..arr.len()).into_par_iter()
    .for_each(|i| arr[i] = 0);

closures cannot mutate their captured variables

where I am trying to imitate this simple OpenMP loop:

#pragma omp parallel for
for (int i = 0; i < n; i++) {
    arr[i] = 0;
}

No race condition can occur in this example because there is no loop-carried dependency; no two threads would write to the array at the same index.

What works here is

arr.par_iter_mut()
    .for_each(|x| {
        *x = 0;
    });

But again, what if I want to use multiple arrays, like in a swap example? And what if I need the index of the current element?

Is there a way I can achieve what I am trying to do, or do I need to rethink this implementation?

As I understand it, rayon is particularly useful when you have a collection you want to iterate over. However, in my case I am mostly iterating through loops with an index to achieve all sorts of things, not just collection processing.

Thanks!

One thing to notice is that while your goal is to enable parallelism, Rayon's ParallelIterator is much like std's Iterator, and you'd need to do the same code transformations if you asked: how can I turn a loop based on mutating variables into an iterator composition that doesn't use any mutable variables?
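For the toy loop as written, one way to do that is a fold over a tuple of accumulators. This is just a sketch that assumes only the final values of a, b, and c matter and drops the print:

fn main() {
    let iterations = 50;
    // Each fold step takes the previous (a, b, c) and produces the next one;
    // no captured mutable state is needed.
    let (a, b, c) = (0..iterations)
        .fold((0i64, 1i128, 0i64), |(a, b, c), _| (a + 1, b * 5, c - 2));
    dbg!(a, b, c);
}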

The answer to that question will depend on the actual data-dependencies in the code — the sort of thing that toy examples don't necessarily present accurately.

For example, this code doesn't use b or c so it is unclear what transformed code should be produced for them. Supposing that we want the final values of a, b, and c, and that the inputs to the reduction actually depend on something computed based on the input, here's a Rayon analogue:

use rayon::prelude::*;

fn main() {
    let iterations = 50;
    let (a, b, c): (i64, i128, i64) =
        (0..iterations)
            .into_par_iter()
            .map(|_| (1, 5, -2))
            .reduce(
                || (0, 1, 0), // identity elements of the reductions
                |(a1, b1, c1), (a2, b2, c2)| (a1 + a2, b1 * b2, c1 + c2),
            );
    dbg!(a, b, c);
}

The .map() closure would compute whatever actual information a, b, c should be reducing.
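As an illustration only (a hypothetical variant, not your original loop), here is what that could look like if each iteration's contribution depended on the index i:

use rayon::prelude::*;

fn main() {
    let iterations = 50i64;
    // Hypothetical contributions: a accumulates i, b multiplies by (i % 3 + 1),
    // and c subtracts 2 * i.
    let (a, b, c): (i64, i128, i64) = (0..iterations)
        .into_par_iter()
        .map(|i| (i, (i % 3 + 1) as i128, -(2 * i)))
        .reduce(
            || (0, 1, 0), // identities for +, *, +
            |(a1, b1, c1), (a2, b2, c2)| (a1 + a2, b1 * b2, c1 + c2),
        );
    dbg!(a, b, c);
}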

what if I want to use multiple arrays

You can par_iter_mut() several arrays and zip the iterators together to process them in parallel. (Note that this only works on indexed parallel iterators — those which Rayon understands the length of and can subdivide into predictable-length parts. Array/slice/vector iterators are always indexed.)
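A sketch of what that could look like for the swap case you mentioned (the names xs and ys are just illustrative):

use rayon::prelude::*;

fn main() {
    let mut xs = vec![1, 2, 3, 4];
    let mut ys = vec![10, 20, 30, 40];
    // zip() pairs element i of xs with element i of ys, so each task
    // touches a disjoint pair of elements and no data race is possible.
    xs.par_iter_mut()
        .zip(ys.par_iter_mut())
        .for_each(|(x, y)| std::mem::swap(x, y));
    dbg!(xs, ys);
}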

what if I need the index of the current element?

Just as with non-parallel iterators, call enumerate().
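For example, mirroring your zero-initialization loop but actually using the index (a minimal sketch):

use rayon::prelude::*;

fn main() {
    let mut arr = vec![0usize; 8];
    // enumerate() yields (index, &mut element) pairs on indexed parallel iterators.
    arr.par_iter_mut()
        .enumerate()
        .for_each(|(i, x)| *x = i * i);
    dbg!(arr);
}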

