Questions about Send and Sync traits

I am going through the Rust Atomics and Locks book, where it builds a spin lock.

Here is the structure:

struct SpinLock<T> {
    locked: AtomicBool,
    value: UnsafeCell<T>,
}

unsafe impl<T> Sync for SpinLock<T> where T: Send {}

Now I see the documentation says T is Send if it can be safely moved across threads, and T is Sync if &T can be shared between threads concurrently.

It requires T to be Send only, not Sync. Why?

But the RwLock below requires T: Send + Sync:

struct RwLock<T> { 
    state: AtomicU32, 
    value: UnsafeCell<T>, 
} 

unsafe impl<T> Sync for RwLock<T> where T: Send + Sync {}

Could anyone please explain with an example how these two traits change the behavior of implementers?


SpinLock is a kind of mutex: only one thread can access the value at a time, so it never allows multiple &T to be accessed from different threads, and hence doesn't require T: Sync. It does require T: Send, though, because from a &SpinLock<T> you can get a &mut T, which lets you move the T out, effectively sending it from the thread that owns the SpinLock<T> to the thread that took the lock.
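A minimal sketch may make this concrete. This follows the book's intermediate spin lock design; the exact `lock`/`unlock` API here is an illustrative assumption, not the book's final guard-based version. Note how a shared `&SpinLock<T>` hands out an exclusive `&mut T`, which is enough to move the `T` out:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

pub struct SpinLock<T> {
    locked: AtomicBool,
    value: UnsafeCell<T>,
}

unsafe impl<T> Sync for SpinLock<T> where T: Send {}

impl<T> SpinLock<T> {
    pub const fn new(value: T) -> Self {
        Self {
            locked: AtomicBool::new(false),
            value: UnsafeCell::new(value),
        }
    }

    // From a shared &self we hand out an exclusive &mut T.
    pub fn lock(&self) -> &mut T {
        while self.locked.swap(true, Ordering::Acquire) {
            std::hint::spin_loop();
        }
        // Safety: the flag guarantees no other thread has access until unlock().
        unsafe { &mut *self.value.get() }
    }

    pub fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

fn main() {
    let lock = SpinLock::new(String::from("hello"));
    let s: &mut String = lock.lock();
    // With &mut T we can move the value out, i.e. "send" it to this thread:
    let moved = std::mem::take(s);
    lock.unlock();
    assert_eq!(moved, "hello");
}
```

Because locking can move the `T` to whichever thread took the lock, `T: Send` is exactly the bound needed, and no `&T` is ever shared between threads, so `T: Sync` is not.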

The reason RwLock requires T: Sync is because it allows multiple threads to get a read lock (while SpinLock only supports exclusive locks), which allows multiple threads to get a &T, and this requires T: Sync.


Can you please also give an example?

  • SpinLock<Rc<u8>> and RwLock<Rc<u8>> are not Sync because Rc<u8> is neither Send nor Sync
  • SpinLock<MutexGuard<u8>> and RwLock<MutexGuard<u8>> are not Sync because MutexGuard<u8> is Sync but not Send
  • SpinLock<Cell<u8>> is Sync but RwLock<Cell<u8>> is not, because Cell<u8> is Send but not Sync
  • SpinLock<u8> and RwLock<u8> are both Sync because u8 is Send and Sync
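The bullet points above can be checked by the compiler. Here is a sketch using the thread's two struct definitions plus a helper function `assert_sync` (the helper name is made up for illustration); the commented-out lines are the ones that would fail to compile:

```rust
use std::cell::{Cell, UnsafeCell};
use std::sync::atomic::{AtomicBool, AtomicU32};

struct SpinLock<T> {
    locked: AtomicBool,
    value: UnsafeCell<T>,
}
unsafe impl<T> Sync for SpinLock<T> where T: Send {}

struct RwLock<T> {
    state: AtomicU32,
    value: UnsafeCell<T>,
}
unsafe impl<T> Sync for RwLock<T> where T: Send + Sync {}

// Compiles only if T: Sync.
fn assert_sync<T: Sync>() {}

fn main() {
    assert_sync::<SpinLock<u8>>(); // ok: u8 is Send (and Sync)
    assert_sync::<RwLock<u8>>();   // ok: u8 is Send + Sync

    assert_sync::<SpinLock<Cell<u8>>>(); // ok: Cell<u8> is Send
    // assert_sync::<RwLock<Cell<u8>>>();   // error: Cell<u8> is not Sync

    // assert_sync::<SpinLock<std::rc::Rc<u8>>>(); // error: Rc<u8> is not Send
    // assert_sync::<RwLock<std::rc::Rc<u8>>>();   // error: Rc<u8> is not Send
}
```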

With a mutex/spinlock, you can never have multiple threads accessing the value in parallel, so Sync is not required. However, it's still possible to first access it from one thread, and then another thread, so Send is required.

With an rwlock, read locks allow multiple threads to access the value in parallel, so Sync is required.
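The parallel-reader case is easy to see with std's own RwLock. In this sketch, several threads hold read guards (i.e. shared access to the same value) at the same time, which is only sound because the inner type is Sync:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let data = Arc::new(RwLock::new(vec![1, 2, 3]));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let data = Arc::clone(&data);
            thread::spawn(move || {
                // Many threads can hold a read lock at once;
                // each one effectively has a &Vec<i32>.
                let guard = data.read().unwrap();
                guard.iter().sum::<i32>()
            })
        })
        .collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), 6);
    }
}
```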


You can also conceptualise mutexes as wrappers that add Sync to the contained value.
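For instance, Cell<u8> is Send but not Sync, so a &Cell<u8> cannot be shared across threads directly, yet Mutex<Cell<u8>> is Sync and can be. A minimal sketch with std's Mutex:

```rust
use std::cell::Cell;
use std::sync::Mutex;
use std::thread;

fn main() {
    // Cell<u8> is Send but not Sync; wrapping it in a Mutex
    // makes the whole thing Sync, so it can be shared by reference.
    let shared: &'static Mutex<Cell<u8>> =
        Box::leak(Box::new(Mutex::new(Cell::new(0))));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(move || {
                // Only one thread touches the Cell at a time.
                let guard = shared.lock().unwrap();
                guard.set(guard.get() + 1);
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(shared.lock().unwrap().get(), 4);
}
```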


@alice If you could answer the following as well:

The main thing to understand is that when you access memory, you may see an old value. Atomic orderings help ensure that you do not see old values.

For example in this code:

// Assumes `Data` and `generate_data()` are defined elsewhere, and that
// `use std::sync::atomic::{AtomicPtr, Ordering::{Acquire, Release}};` is in scope.
fn get_data() -> &'static Data {
    static PTR: AtomicPtr<Data> = AtomicPtr::new(std::ptr::null_mut());

    let mut p = PTR.load(Acquire);

    if p.is_null() {
        p = Box::into_raw(Box::new(generate_data()));
        if let Err(e) = PTR.compare_exchange(
            std::ptr::null_mut(), p, Release, Acquire)
        {
            // Another thread won the race; free our allocation and use theirs.
            drop(unsafe { Box::from_raw(p) });
            p = e;
        }
    }
    unsafe { &*p }
}

If relaxed is used, then let's say thread A does this:

// Thread A.
PTR.load(Relaxed); // returns NULL
p = Box::new();
*p = generate_data();
PTR.compare_exchange(NULL, p); // succeeds

and thread B does this:

// Thread B
p = PTR.load(Relaxed); // returns value from A.
println!("{}", *p);

then even though thread B saw the compare_exchange, that does not mean it saw generate_data(). When it reads the memory location inside the box, it might see an old value. That is, it might not see the return value of generate_data(), but instead see whatever unknown data was stored in that memory location prior to that.

However, if acquire/release is used, then because generate_data() happens-before compare_exchange(), which happens-before PTR.load(), which happens-before println!(), it is guaranteed that thread B also sees generate_data().
