Passing slice into a thread 2

Hello everyone,

This topic is an extension of this topic:

This time, I would like to know how to do the same thing, but using Arc. So I tried this:

use std::thread;
use std::sync::Arc;

fn main() {
    const N: usize = 1000;

    let mut a: [f64;N*N] = [0.0;N*N];

    let (a_1, a_2) = a.split_at(N/2);

    let a_1_arc = Arc::new(a_1);

    let t = thread::spawn(move || {
        let a_1_bis = Arc::try_unwrap(a_1_arc).unwrap();

        println!("{}", a_1_bis[0]);
    });

    t.join().unwrap();
}

And I always get the same error:

9  |     let (a_1, a_2) = a.split_at(N/2);
   |                      ^
   |                      |
   |                      borrowed value does not live long enough
   |                      cast requires that `a` is borrowed for `'static`

How do I use Arc correctly to solve this?

Another question, because this confuses me: in the book, here: Shared-State Concurrency - The Rust Programming Language

You have the following code:

use std::sync::{Mutex, Arc};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();

            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}

And I don't understand how we can call this line:

let mut num = counter.lock().unwrap();

because the Arc type does not have a lock method; it's a Mutex method...

Thank you!

To your first question: the heart of the issue is that what you're storing in your Arc<> is not owned data; it's a borrow from the array owned by the main thread (i.e. an &[f64]). If the main thread dropped the array while the spawned thread was still running, that borrow would become invalid, which is exactly what the borrow checker is trying to prevent here.

As long as you are not writing to the array, you can put the array itself in the Arc, and only split it inside of individual threads:

use std::sync::Arc;
use std::thread;

fn main() {
    const N: usize = 1000;

    let a = [0.0f64; N * N];

    // The whole array is moved into the Arc; clones just bump the refcount.
    let a_arc_1 = Arc::new(a);
    let a_arc_2 = a_arc_1.clone();

    let t = thread::spawn(move || {
        // Split inside the thread; the Arc derefs to the underlying array.
        let (a_1, _a_2) = a_arc_1.split_at(N / 2);

        println!("{}", a_1[0]);
    });
    let (_a_1, _a_2) = a_arc_2.split_at(N / 2);

    t.join().unwrap();
}

(Note that you don't need to unwrap the Arc. Just use it directly as if it were a shared borrow of the underlying array. See below for the answer to your second question.)

...however, if you do intend to mutate the array in both threads, then you will end up in trouble, because you're trying to share a mutable variable between two threads, and Rust won't let you do so without significant code contortions, due to the risk of data races and pointer invalidation. This is what I alluded to as "shared mutability complications" in your earlier post.

What happens next depends on what you are trying to do:

  • If you intend to join the worker thread afterwards anyway, the simplest solution by far is to use scoped threads (see the sketch after this list). It will let you use borrow-based abstractions like [f64]::split_at_mut(), which will greatly simplify the kind of code that you are writing here.
  • If you don't actually need both threads to share data, you can create one array of size N*N/2 (beware the rounding) inside of each thread and have your worker thread return its private array at the end. The main thread can then get it back by putting something like a let t_res = t.join().unwrap() at the end.
  • If you don't need both threads to access the data at the same time (it seems you do need them to here), then you can put the data in a Mutex and lock it before access. You will then end up with an Arc<Mutex<[f64; N*N]>>.
  • If you are implementing your own synchronization protocol and are 100% sure that it is correct, then you can go lower-level and use an UnsafeCell instead of a mutex. By doing so, you make yourself responsible for upholding Rust's invariants (such as "no data race" and "no multiple &mut to the same memory region" in this particular case), and then everything becomes full of scary and hard-to-use unsafe constructs. This is what Mutex does internally.
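
To make the first option concrete, here is a minimal sketch of the scoped-threads approach (using std::thread::scope, which has been in the standard library since Rust 1.63; crossbeam's scoped threads work the same way on older toolchains). It uses a Vec rather than a plain array to sidestep the stack-size issue mentioned further down:

use std::thread;

fn main() {
    const N: usize = 1000;

    // Heap-allocated buffer, so no huge stack frame.
    let mut a = vec![0.0f64; N * N];

    // Two disjoint mutable halves; no Arc is needed because the scope
    // guarantees both threads finish before `a` can be dropped.
    let (a_1, a_2) = a.split_at_mut(N * N / 2);

    thread::scope(|s| {
        s.spawn(|| {
            a_1[0] = 1.0;
        });
        s.spawn(|| {
            a_2[0] = 2.0;
        });
    });

    // The borrows end with the scope, so `a` is usable again here.
    println!("{} {}", a[0], a[N * N / 2]);
}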

As for your second question...

Arc<T> implements the Deref trait with target T. When a type implements Deref<Target=T>, it behaves like an &T. So for example, if x is an Arc<T>, then *x will refer to the underlying value (which is of type T), and x.do_stuff() will call the T::do_stuff() method of the underlying object if no such method exists on the Arc<_> wrapper itself.
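
Here is a small, self-contained illustration of that auto-deref behaviour (using an arbitrary Vec as the inner type, not code from the book):

use std::sync::Arc;

fn main() {
    let v = Arc::new(vec![1, 2, 3]);

    // `len` is not a method of Arc, so auto-deref kicks in and the call
    // goes through to the underlying Vec<i32>.
    println!("{}", v.len());

    // Explicit dereference: `*v` is the Vec itself, `&*v` is a shared borrow of it.
    let inner: &Vec<i32> = &*v;
    println!("{}", inner[0]);
}

This is exactly what happens with counter.lock() in the book example: the Arc derefs to the Mutex, whose lock() method is then called.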

In general, Deref is a neat trick for making "smart pointer" types like Arc or Box feel more like the underlying object that they own. In this context, it behaves like the operator-> of C++ if you are familiar with that language.

There is also a DerefMut trait which does the same thing, but for &mut.


Oh, one more thing. As currently written, your function allocates an 8 MB array on the stack of the main thread. This will overflow the stack and crash the program at runtime under many operating systems' default configuration. You may want to allocate such large arrays on the heap (e.g. by using a Vec instead of an array).
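
For instance, something along these lines keeps only the Vec's (pointer, length, capacity) triple on the stack, while the 8 MB buffer itself lives on the heap:

fn main() {
    const N: usize = 1000;

    // The buffer behind the Vec is heap-allocated.
    let a: Vec<f64> = vec![0.0; N * N];

    println!("{}", a.len());
}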


Thank you for your detailed answer!

In fact, I just want to understand Rust's multithreading features; this is not code for a serious project.

Oh okay, that's a good idea, I understand now!

The answer to my second question is clear, thank you!

Yes, I know about the stack overflow; I saw it, and I can solve it by changing my stack size with ulimit or by using arrays stored on the heap.

Have a nice day!
