How should I pair read_volatile and write_volatile?

Hi guys, can someone explain to me how to correctly pair a read and a write with volatile semantics?

For example, if I have a pointer some_ptr of type *mut u64 and I do
write_volatile::<u64>(some_ptr, 100u64)
is it safe now to do
read::<u64>(some_ptr), or must I do read_volatile::<u64>(some_ptr)?

Also, what about the reverse case: if I do read_volatile::<u64>(some_ptr), do I need to use write_volatile::<u64>(some_ptr, 100u64), or am I fine simply using write::<u64>(some_ptr, 100u64)?

Obviously the read and write operations are happening on different threads.
Thank you very much!

If you use volatile semantics, you are only allowed to use volatile semantics. You can't mix the two.
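
For the concrete calls from the question, the consistent pairing would look something like this (just a sketch for single-threaded or memory-mapped-I/O-style use; the roundtrip function is made up for illustration, and as the next reply points out, concurrent access from another thread needs atomics regardless):

```rust
use std::ptr;

// Sketch: a location written with write_volatile is also read back with
// read_volatile, never with a plain read.
unsafe fn roundtrip(some_ptr: *mut u64) -> u64 {
    ptr::write_volatile(some_ptr, 100u64);
    ptr::read_volatile(some_ptr)
}
```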

Hm, are you sure that you need volatile here? It seems like you actually want to use atomics. Rust's read_volatile/write_volatile do not have any concurrency semantics, I think (docs):

Just like in C, whether an operation is volatile has no bearing whatsoever on questions involving concurrent access from multiple threads. Volatile accesses behave exactly like non-atomic accesses in that regard. In particular, a race between a read_volatile and any write operation to the same location is undefined behavior.
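
In other words, for the cross-thread case described in the question, the usual shape is something like the following (a minimal sketch; the SHARED static and the thread setup are purely illustrative, since the original code uses a raw *mut u64):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

// Illustrative shared location; stands in for the *mut u64 from the question.
static SHARED: AtomicU64 = AtomicU64::new(0);

fn main() {
    let writer = thread::spawn(|| {
        // Atomic store: well-defined even with a concurrent reader.
        SHARED.store(100, Ordering::Release);
    });

    let reader = thread::spawn(|| {
        // Atomic load: pairs with the store above without data-race UB.
        let value = SHARED.load(Ordering::Acquire);
        println!("read {value}");
    });

    writer.join().unwrap();
    reader.join().unwrap();
}
```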


Thanks for your suggestion, but I did not ask how I should do it better, nor what options I have available. I only wondered about the semantics of mixing read/write with read_volatile/write_volatile. From your and KrishnaSannasi's answers it looks like I shouldn't mix them.
Thank you guys!

If you're going to face the deep intricacies of language memory models with volatile and such, you may want to go beyond a basic understanding of the rules (as we currently know them at least, given that the Rust memory model isn't fully finalized yet).


The compiler's optimizer can almost freely add, remove, split, merge and reorder non-volatile accesses (including moving them across volatile accesses), as long as it can adjust the surrounding code so that the change is not visible to the current thread. The only thing it is not allowed to do is add writes if there is any chance they could be observed by other threads in the same process (e.g. when the data is accessed via a raw pointer of unknown origin).
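
For instance, with plain accesses the optimizer may legally collapse the two writes below into one, or keep the value in a register entirely (a hypothetical sketch; plain_writes is not from the thread):

```rust
// Sketch: both writes are non-volatile, so the first one can be dropped
// as long as the current thread cannot observe the difference.
unsafe fn plain_writes(p: *mut u64) {
    p.write(1);
    p.write(100);
}
```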

What volatile gives you on top of that is that volatile accesses will not be removed by the compiler (nor added, as long as you don't give it permission to add more by using non-atomic accesses or constructing Rust references to the data), and that they are dispatched to the CPU in the order in which they appear in the code, possibly with some tearing if a volatile access is too large for the CPU's native load/store instructions.
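
By contrast, the volatile version of the same two stores must both be emitted, in source order (again just a sketch):

```rust
use std::ptr;

// Sketch: both volatile stores are kept and issued in this order, even
// though the first value is immediately overwritten.
unsafe fn volatile_writes(p: *mut u64) {
    ptr::write_volatile(p, 1);
    ptr::write_volatile(p, 100);
}
```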

This is why you usually don't want to mix volatile and non-volatile accesses.


There are data wrappers out there which force every access to be volatile, like VolatileCell. They can be useful when you want to avoid easy mistakes like taking an &T or &mut T to a "volatile" memory location. Rust references come with very strong invariants, and the mere action of constructing one allows the compiler to do and assume things which are inappropriate in memory-mapped I/O use cases, which are what volatile is about.
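
The shape of such a wrapper is roughly the following (a sketch of the idea only, not the actual VolatileCell API; VolatileU64 is a made-up name):

```rust
use std::cell::UnsafeCell;
use std::ptr;

// Sketch of a VolatileCell-style wrapper: every access goes through
// read_volatile/write_volatile, and no &u64 or &mut u64 to the inner
// value ever escapes.
#[repr(transparent)]
pub struct VolatileU64(UnsafeCell<u64>);

impl VolatileU64 {
    pub const fn new(value: u64) -> Self {
        VolatileU64(UnsafeCell::new(value))
    }

    pub fn get(&self) -> u64 {
        // Safety: the pointer comes from our own UnsafeCell and is valid for reads.
        unsafe { ptr::read_volatile(self.0.get()) }
    }

    pub fn set(&self, value: u64) {
        // Safety: the pointer comes from our own UnsafeCell and is valid for writes.
        unsafe { ptr::write_volatile(self.0.get(), value) }
    }
}
```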

If your use case doesn't involve some form of memory-mapped I/O, you likely want atomics, not volatile, as they (1) provide other guarantees (non-tearing, memory barriers...) that are vital for thread synchronization, and whose absence is the reason why non-atomic concurrent access to memory is UB; and (2) reduce the degree to which the compiler is forced to pessimize memory accesses by assuming that something somewhere may concurrently and unpredictably access the target memory location.

Volatile is not a superset of atomic, and it's a common mistake to use volatile for cross-thread synchronization, which is why @matklad thought it was worth pointing out.

