Parallel non-blocking access to vector elements?

Hi everyone,

is it possible to access a vector's elements from multiple threads, with locking at the element level (akin to Arc<Vec<Mutex<_>>>) instead of at the vector level (Arc<Mutex<Vec<_>>>)?

I have seen How to access vector from multiple threads without mutex? but the solution provided makes use of Atomic*, which does not work in my case, because the elements are non-trivial structs.

I assume there is some solution using AtomicPtr, but I can't figure out the implementation. I would be really grateful for a hint.

Kind regards

Maybe you want arc-swap?

You can definitely use Mutex as in your example. You may want to look at RwLock as well, for shared read access. It's a lot like a thread-safe RefCell. There's also AtomicCell in crossbeam, for when your data is still small and simple enough to not need a reference. A bit like Cell.

There are other options as well, depending on what you need, and some are more advanced (or spooky, like ghost cell).

Edit: even if you can't wrap your element type in an atomic cell, you may still be able to use atomics internally. Provided it's a type you control, of course.


If you want to run something over all the elements of the vector in parallel, you can use the rayon crate, which provides a safe API for accessing the elements of the vector in parallel and handles the synchronization for you.

use rayon::prelude::*;

fn main() {
    let mut a = vec![0, 1, 2];
    a.par_iter_mut().for_each(|a| {
        *a += 2;
    });
    dbg!(a);
    // Shows [2, 3, 4]
}

Since &mut T: Send iff T: Send, you can just split the borrow and process different elements in different threads: Playground

fn main() {
    let arr: &mut [_] = &mut [1, 2];

    let (head, tail) = arr.split_at_mut(1);

    std::thread::scope(|s| {
        s.spawn(|| process(head));
        s.spawn(|| process(tail));
    });
}

fn process(arr: &mut [u64]) {
    dbg!(std::thread::current().id(), arr);
}

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.