Bad example: wrapping AtomicUsize in Arc

I happened to notice that the official example currently uses Arc<AtomicUsize> to describe the usage of the atomic types, which is redundant: atomic types have no issue with being shared, so wrapping one in an Arc is unnecessary.

Would it be better to update the example for modern Rust?

IMHO, thread::scope should be introduced rather than Arc.

Old example:

use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::{hint, thread};

fn main() {
    let spinlock = Arc::new(AtomicUsize::new(1));

    let spinlock_clone = Arc::clone(&spinlock);
    let thread = thread::spawn(move || {
        spinlock_clone.store(0, Ordering::SeqCst);
    });

    // Wait for the other thread to release the lock
    while spinlock.load(Ordering::SeqCst) != 0 {
        hint::spin_loop();
    }

    if let Err(panic) = thread.join() {
        println!("Thread had an error: {panic:?}");
    }
}

The one I propose:

use std::sync::atomic::{AtomicUsize, Ordering};
use std::{hint, thread};

fn main() {
    let spinlock = AtomicUsize::new(1);

    thread::scope(|s| {
        let thread = s.spawn(|| {
            spinlock.store(0, Ordering::SeqCst);
        });

        // Wait for the other thread to release the lock
        while spinlock.load(Ordering::SeqCst) != 0 {
            hint::spin_loop();
        }

        if let Err(panic) = thread.join() {
            println!("Thread had an error: {panic:?}");
        }
    });
}

Is my example better?

Currently I have difficulty connecting to GitHub, so if my example is better, please create a PR for me. Thanks.
A PR has been created. Currently I have no difficulty cloning the repo, but when I try to view a file with Firefox, GitHub returns a 500 error. I suppose it is due to my nginx reverse proxy, but I don't know how to fix it.


The argument implies some possibly wrong assumptions:

This sounds like it implies "if a type has issues with sharing, then Arc could help". That is not the case, at least if "issues with sharing" refers to a missing Sync implementation in particular: sharing an Arc<T> across threads itself requires T: Send + Sync, so Arc<T> will never indicate that the type T was in any way problematic to share.
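
A minimal sketch of that point (the assert_send_sync helper and the values are made up for illustration, not part of any official example):

use std::sync::Arc;
use std::sync::atomic::AtomicUsize;

// Only accepts values that can be shared across threads.
fn assert_send_sync<T: Send + Sync>(_: &T) {}

fn main() {
    let shared = Arc::new(AtomicUsize::new(0));
    // Arc<AtomicUsize> is Send + Sync only because AtomicUsize already is,
    // so the Arc adds nothing in terms of "shareability" here.
    assert_send_sync(&shared);
    assert_send_sync(&*shared); // the bare AtomicUsize is just as shareable
}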

Maybe you're thinking of Arc<Mutex<T>>, which is a useful approach to "help out" a type with limited support for sharing (only T: Send, but not T: Sync), besides the obvious use case of offering &mut access when some API you want to call requires it. On that note, for many Arc<Mutex<T>> cases the argument could also be made that the Arc is often not necessary, i.e. in cases where you can use thread::scope or something similar (a sketch of that follows below).
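
A minimal sketch of that, assuming a made-up counter protected by a plain Mutex (no Arc) and shared through thread::scope:

use std::sync::Mutex;
use std::thread;

fn main() {
    // A plain Mutex, no Arc: the scope guarantees the spawned threads
    // finish before `counter` is dropped, so borrowing it is fine.
    let counter = Mutex::new(0u32);

    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| {
                *counter.lock().unwrap() += 1;
            });
        }
    });

    assert_eq!(counter.into_inner().unwrap(), 4);
}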


The reason the example contains Arc is likely mainly that the example was created before thread::scope existed, which is still relatively new.

The question whether the example would be better off using thread::scope instead of Arc depends a bit on whether that change would be assumed to make the example easier to understand. On one hand, it reduces the need to understand Arc, and it reduces the potential to associate Arc with AtomicUsize (or atomics in general) too strongly, as that could imply that those ought to always be used together.

On the other hand, thread::scope is in some ways a more complex API to use than thread::spawn. For one, it's still fairly new, so probably less well known. It also involves an extra closure, which the reader has to understand is executed directly on the current thread and not itself spawned in any way; the meaning of the s variable might be hard to guess; and the distinction between things declared inside and outside of the ::scope call is something to be learned, too. The typical learning material (the book) did not cover thread::scope yet either, last time I checked.
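
To illustrate those points, here is a small annotated sketch (the variable names and printed strings are made up, purely for illustration):

use std::thread;

fn main() {
    // Declared outside the scope: scoped threads may borrow it, because the
    // scope guarantees they finish before `message` is dropped.
    let message = String::from("hello");

    // The closure passed to thread::scope runs directly on the current
    // thread; it is not itself spawned anywhere.
    thread::scope(|s| {
        // `s` is the Scope handle used to spawn the actual threads.
        s.spawn(|| println!("{message} from a scoped thread"));

        // Declared inside the scope closure: it is dropped before the scope
        // joins its threads, so a spawned thread can only capture it by
        // value (with `move`), not borrow it.
        let local = 42;
        s.spawn(move || println!("local = {local}"));
    }); // all scoped threads are joined here
}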

Of course, for practical use cases thread::scope should be preferred when it's usable, as it's more structured and (I'd believe) has less overhead than using those Arcs. But for an example, clarity is important, so there's a trade-off to take into consideration, and while I personally love the thread::scope API, I'm not certain which version of the example code would actually make the better example.

(One thing to note with thread::scope is that calling thread.join only to check for panics is somewhat redundant, because the scope will wait for the thread anyway and "handle" panics by re-throwing them.)
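
To make that concrete, a sketch of the scoped spinlock example with the explicit join dropped (otherwise the same code as above):

use std::sync::atomic::{AtomicUsize, Ordering};
use std::{hint, thread};

fn main() {
    let spinlock = AtomicUsize::new(1);

    thread::scope(|s| {
        s.spawn(|| {
            spinlock.store(0, Ordering::SeqCst);
        });

        // Wait for the other thread to release the lock.
        while spinlock.load(Ordering::SeqCst) != 0 {
            hint::spin_loop();
        }
    });
    // No explicit join needed: thread::scope joins the spawned thread on
    // exit and re-throws its panic, if any.
}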


This is why I want to introduce a more recent example.

Your point about clarity got me thinking: I could provide a better example, not just rewrite the old example with thread::scope.

Example: calculate vec.iter().sum() in parallel.

use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

fn main() {
    let data: [usize; 20] = b"Fearless Concurrency".map(Into::into);

    let result = AtomicUsize::new(0);
    // Create a scope to compute the sum in parallel.
    thread::scope(|s| {
        // Choose the chunk size (data.len() + N - 1) / N to divide the data into N chunks.
        // This is data.len().div_ceil(N), which is currently an unstable feature.
        for chunk in data.chunks((data.len() + 2) / 3) {
            s.spawn(|| {
                let sum: usize = chunk.iter().sum();
                // Accumulate the partial results.
                result.fetch_add(sum, Ordering::SeqCst);
            });
        }
    });
    assert_eq!(result.into_inner(), data.iter().sum());
}

Are there any disadvantages to adding such a case to the docs?

Note that when div_ceil is stable, we should prefer it for clarity.
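
For reference, a small standalone sketch of that (the data values here are made up, and it assumes a toolchain where div_ceil on unsigned integers is available):

fn main() {
    let data = [1usize, 2, 3, 4, 5, 6, 7];
    // div_ceil gives the same chunk size as the manual (len + N - 1) / N.
    assert_eq!(data.len().div_ceil(3), (data.len() + 2) / 3);
    for chunk in data.chunks(data.len().div_ceil(3)) {
        println!("{chunk:?}");
    }
}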

