What does 'two or more threads concurrently accessing a location of memory' mean?

Quoting Races - The Rustonomicon

Safe Rust guarantees an absence of data races, which are defined as:

  • two or more threads concurrently accessing a location of memory
  • one or more of them is a write
  • one or more of them is unsynchronized

Now, when both threads are Rust, we can ask: is there some variable x: T such that, at some point in time, thread 1 holds a &x while thread 2 holds a &mut x?

However, when one thread is Rust, and the other thread is x86_64, what does "concurrently accessing a location of memory" mean?

In particular, I am curious about the case where the location in memory is an aligned u8.

From the perspective of the x86_64 asm 'thread', it never "holds" a &mut u8 or a & u8; there is just a single load/store/mov instruction that either reads from or writes to the location.

This op either gets the old value, gets the new value, causes Rust to get the old value, or causes Rust to get the new value. Either way, there seems to be a 'strict ordering' between the x86_64 asm and whatever Rust does.

What does "concurrent access" mean when (1) the arch is x86_64, (2) the location stores a u8, and (3) the other thread is just x86_64 asm?

I think the bit you should be focusing on is "concurrent access" and not necessarily the word "thread".

The Rust memory model doesn't care whether a piece of memory is being accessed (load, store, mov, etc.) by another Rust thread, a future, or code written in some other language; it's still a data race if you aren't using some form of synchronisation.

This comment reminds me of an article by Ralf Jung. He was mainly focusing on uninitialized memory, but it applies equally to data races.

TL;DR: It doesn't matter that loading/storing a u8 is an atomic operation in x86-64. What is/isn't well-formed code is decided based upon the "Rust abstract machine" and not any particular hardware implementation or instruction set. The Rust abstract machine says you need to use some form of synchronisation when accessing that u8 (e.g. a Mutex or atomic instructions) otherwise you've got a data race.
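To make that concrete, here's a minimal sketch of the distinction: wrapping the aligned u8 in an AtomicU8 is what the abstract machine counts as synchronisation, while a plain `static mut u8` written from one thread and read from another would be a data race (UB), even though both compile down to single byte-sized `mov`s on x86-64.

```rust
use std::sync::atomic::{AtomicU8, Ordering};
use std::thread;

// An aligned u8 shared between threads. The AtomicU8 wrapper is the
// synchronisation the Rust abstract machine requires; without it the
// same cross-thread write + read would be UB regardless of hardware.
static FLAG: AtomicU8 = AtomicU8::new(0);

fn demo() -> u8 {
    let writer = thread::spawn(|| {
        FLAG.store(1, Ordering::Relaxed); // synchronised (atomic) write
    });
    writer.join().unwrap();
    FLAG.load(Ordering::Relaxed) // synchronised (atomic) read
}

fn main() {
    assert_eq!(demo(), 1);
}
```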


This is already UB, since &mut means exclusive, and what you describe is non-exclusive.

What you want to discuss is multiple things all holding &UnsafeCell<T>.

Cell, AtomicU8, etc. are all wrappers around UnsafeCell, as it's the only construct that makes it non-UB to modify something behind a &.
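A simplified sketch of that layering (the real AtomicU8 in core::sync::atomic is generated by a macro and uses atomic intrinsics for its accesses, but its data layout boils down to the same shape — a hypothetical MyAtomicU8 here, not the actual definition):

```rust
use std::cell::UnsafeCell;

// Hypothetical, simplified version of AtomicU8's shape: just a u8
// behind an UnsafeCell, which is what makes mutation through a
// shared reference sound.
struct MyAtomicU8 {
    v: UnsafeCell<u8>,
}

// The real type is Sync because its accesses are atomic; this sketch
// only illustrates the UnsafeCell layering, so its load is a plain
// (non-atomic) read.
impl MyAtomicU8 {
    const fn new(v: u8) -> Self {
        MyAtomicU8 { v: UnsafeCell::new(v) }
    }
    fn load(&self) -> u8 {
        unsafe { *self.v.get() }
    }
}

fn main() {
    let a = MyAtomicU8::new(7);
    assert_eq!(a.load(), 7);
}
```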


We are in agreement with this. I was pointing out: when the two threads are Rust, we can interpret it by asking this question. However, when one thread is Rust and one is x86_64 asm, what question do we ask?

Is this true? Is there source code evidence that AtomicU8 uses UnsafeCell ?

Of course there is -- there's a [src] link for a reason. atomic.rs - source


I concede the point. :slight_smile:

How do we use this in practice? Are there any implications for mmap? Does this mean we should store a * UnsafeCell<u8> rather than a * AtomicU8 ?

There are some subtle points about the memory ordering of atomics. Locking a mutex has "acquire" memory ordering, while unlocking it has "release" memory ordering.

Atomics are used to protect the values they hold, but how the operations before and after them are ordered is an adjustable parameter.
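The acquire/release pairing described above can be sketched with two atomics: the producer publishes with a Release store, and once the consumer observes that store with an Acquire load, everything written before it is guaranteed visible — the same pattern a mutex gives you implicitly.

```rust
use std::sync::atomic::{AtomicBool, AtomicU8, Ordering};
use std::thread;

static DATA: AtomicU8 = AtomicU8::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn demo() -> u8 {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);
        // Release: everything before this store becomes visible to
        // any thread that observes READY == true with Acquire.
        READY.store(true, Ordering::Release);
    });
    // Acquire: once we see true, the store to DATA is visible too.
    while !READY.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    producer.join().unwrap();
    DATA.load(Ordering::Relaxed) // guaranteed to see 42
}

fn main() {
    assert_eq!(demo(), 42);
}
```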

You can definitely talk about loads/stores in Rust without talking about &T/&mut T. Just use raw pointers. There are functions to read and write to a memory location using just a raw pointer. They also exist for atomic operations, although they are currently unstable intrinsics.
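For example, std::ptr::read and std::ptr::write perform plain (non-atomic) loads and stores through a raw pointer without ever materialising a &u8 or &mut u8:

```rust
// Plain loads/stores via raw pointers; no &u8 or &mut u8 is ever
// created. (The *atomic* raw-pointer operations mentioned above are
// a separate, unstable set of intrinsics.)
fn demo() -> u8 {
    let mut x: u8 = 5;
    let p: *mut u8 = &mut x;
    unsafe {
        std::ptr::write(p, 9); // non-atomic store through the pointer
        std::ptr::read(p)      // non-atomic load through the pointer
    }
}

fn main() {
    assert_eq!(demo(), 9);
}
```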

When the other operation happens outside of Rust, whether the operation is atomic depends on what the other language says. If the other language is C, then C defines whether each load and store you can perform is atomic or not. If the other language is x86 assembly, then since a mov to or from an aligned u8 is atomic on x86_64, it would be ok to count it as an atomic operation.
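On the Rust side of such an interface you would still perform your half of the access atomically. A sketch, assuming an aligned byte shared with foreign code (e.g. through an mmap'd region — the pointer and the sharing setup are hypothetical here): AtomicU8::from_ptr, stable since Rust 1.75, lets you view that byte as an AtomicU8 and load it atomically, so the other side's single aligned `mov` can pair with it.

```rust
use std::sync::atomic::{AtomicU8, Ordering};

// Hypothetical helper: `p` points at an aligned byte shared with
// non-Rust code. Viewing it as AtomicU8 makes the Rust-side access
// atomic under the Rust memory model.
fn read_shared_byte(p: *mut u8) -> u8 {
    // Safety: caller guarantees p is valid, aligned, and not
    // deallocated while the reference exists.
    let a = unsafe { AtomicU8::from_ptr(p) };
    a.load(Ordering::Relaxed)
}

fn main() {
    let mut byte: u8 = 3; // stand-in for externally shared memory
    assert_eq!(read_shared_byte(&mut byte), 3);
}
```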