Separate from soundness, there is the question: practically speaking, what could the compiler mistakenly assume won't change program semantics when it actually does?
If you're using mmap for shared memory communication, there's no guarantee about how long it takes your process to observe writes from another process. In practice the OS and the CPU aren't going to create artificial delays, but scheduling can change whether your reading process sees a particular change on the next read or only many reads later. You will already need to have thought about how the communication avoids data races even before Rust's memory model comes into play. You could be using shared memory mutexes, or you could be doing something lockless (e.g. depending on aligned writes being de facto atomic on your target architecture), but you have to have a scheme.
So say my reading process and my writing process both mmap the same region and both get a &mut to the struct stored inside. AFAICT the fear would be something like: the reader reads a field, memcpys data from elsewhere in the shared region, then reads the field again and compares the two reads. Rustc/LLVM may decide to optimize away the comparison on the assumption that the field could not have changed in between.
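To make the pattern concrete, here is a minimal sketch of that double-read check. The `Shared` layout and field names are hypothetical, standing in for whatever actually lives in the mmap'd region; the point is that through a plain reference the compiler is entitled to assume `version` cannot change between the two reads and fold the comparison to `true`:

```rust
// Hypothetical layout for a struct living in an mmap'd shared region:
// a version counter followed by a data buffer.
#[repr(C)]
struct Shared {
    version: u32,
    data: [u8; 64],
}

// Naive reader: read the counter, copy the data, read the counter again,
// retry if the counter changed. Through an ordinary reference the
// compiler may assume `version` is unchanged between the two reads and
// optimize the comparison (and the retry loop) away entirely.
fn naive_read(shared: &Shared) -> [u8; 64] {
    loop {
        let before = shared.version;
        let copy = shared.data; // stands in for the memcpy
        let after = shared.version;
        if before == after {
            return copy;
        }
    }
}

fn main() {
    // Single-process demo only: no concurrent writer here, so the copy
    // trivially succeeds on the first pass.
    let shared = Shared { version: 1, data: [7u8; 64] };
    let copy = naive_read(&shared);
    assert_eq!(copy[0], 7);
    println!("ok");
}
```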
What is the implication of this?
If this is a "mutation count" field that the reader is checking to make sure the writer didn't update the other memory while it was being memcpy'd, this could break your synchronization scheme.
Theoretically the compiler could give you a stale value indefinitely. In practice this is only going to happen in small loops because registers are scarce and the optimizer wants to aggressively reuse them.
What would stop the compiler from making that assumption?
Cell would not fix this -- the compiler can still determine that you didn't use a reference to change the field between the two reads, assuming everything is in one function or gets inlined.
In practice an Atomic field is going to prevent current compilers from assuming both reads must compare equal, because current compilers don't have the global analysis pass that would be necessary to determine that nothing else in the program ever writes to the atomic from another thread.
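A sketch of the same reader with the counter as an AtomicU32. The even/odd convention in the comment is one common seqlock-style scheme, assumed here for illustration; the layout is still hypothetical:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Same hypothetical layout, but the counter is now an AtomicU32.
#[repr(C)]
struct Shared {
    version: AtomicU32,
    data: [u8; 64],
}

// Today's compilers will not fold the two atomic loads into one, since
// proving that would require whole-program knowledge that no other
// thread ever stores to `version`.
fn read_with_atomic(shared: &Shared) -> [u8; 64] {
    loop {
        let before = shared.version.load(Ordering::Acquire);
        let copy = shared.data;
        let after = shared.version.load(Ordering::Acquire);
        // Assumed writer convention (seqlock-style): bump `version` to an
        // odd value while mutating, back to even when done.
        if before == after && before % 2 == 0 {
            return copy;
        }
    }
}

fn main() {
    // Single-process demo: even counter, so the read succeeds at once.
    let shared = Shared { version: AtomicU32::new(0), data: [9u8; 64] };
    assert_eq!(read_with_atomic(&shared)[0], 9);
    println!("ok");
}
```

Note this only defeats the compiler-level assumption; whether it is sufficient cross-process synchronization still depends on your overall scheme.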
Only a volatile read really conveys that the memory is expected to be mutated by something entirely outside the process. In Rust, though, a volatile read produces a copy, because Rust has no concept of volatile pointers or volatile memory types. For a single-field volatile read this doesn't matter, but it could matter for a big memcpy. That said, if you do volatile reads in a tight loop, I'd bet LLVM turns them into a volatile memcpy for you, and I also believe there is a volatile memcpy RFC.
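For the single-field case, the standard tool is `std::ptr::read_volatile`. A minimal sketch (the pointer here comes from a local variable rather than a real mmap'd region, purely to keep the example self-contained):

```rust
use std::ptr;

// Read a counter field through a volatile load. The compiler must
// perform this load every time and cannot assume the value is stable
// across calls, since volatile signals the memory may be changed by
// something outside the compiler's view.
fn read_version_volatile(version_ptr: *const u32) -> u32 {
    // SAFETY: caller must guarantee the pointer is valid, aligned, and
    // points to initialized memory.
    unsafe { ptr::read_volatile(version_ptr) }
}

fn main() {
    // Stand-in for a field inside an mmap'd region.
    let version: u32 = 42;
    assert_eq!(read_version_volatile(&version), 42);
    println!("ok");
}
```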
If you really want to break the memory model, map the same region twice in the same process. AFAIK Cell, Atomic, and volatile all fail to technically address this case -- but again, volatile will almost certainly work in practice.