Do I need a (C/C++) volatile variable in Rust?

I need to share simple variables and message queues among threads. Certain threads will only read them and a few will only write to them. So does using Arc<Mutex<SomeType>> imply both atomicity (i.e., only one reader/writer at a time) and that all pending reads/writes are flushed to RAM (not cached somewhere because the compiler thinks the variable is only ever read in that thread)? Or do we require both volatile and a mutex on the variable? If the answer to the last question is yes, then what is the equivalent of volatile in Rust?

Using Arc allows you to share things across thread boundaries; internally it uses atomic reference counts to ensure that whatever you shared outlives all the threads that use it. The Mutex type actually synchronizes access and uses real locks to do it, so you don't need to impose additional memory ordering yourself. For simple values you may want to use an AtomicUsize / AtomicIsize / AtomicPtr / AtomicBool, depending on your use case.
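
For concreteness, here is a minimal sketch of that split, with illustrative names: the queue goes behind Arc<Mutex<...>>, while a simple flag uses AtomicBool.

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The queue needs a Mutex; Arc keeps it alive for every thread that uses it.
    let queue = Arc::new(Mutex::new(Vec::<u8>::new()));
    // A simple flag is cheaper as an AtomicBool than as a Mutex<bool>.
    let done = Arc::new(AtomicBool::new(false));

    let writer = {
        let queue = Arc::clone(&queue);
        let done = Arc::clone(&done);
        thread::spawn(move || {
            queue.lock().unwrap().push(42);
            done.store(true, Ordering::SeqCst);
        })
    };

    writer.join().unwrap();
    // The writer's updates are visible here: the lock and the atomic handle visibility.
    assert_eq!(queue.lock().unwrap().len(), 1);
    assert!(done.load(Ordering::SeqCst));
}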

Thanks for that. I'm still a little confused. When you say that I don't need to impose additional memory ordering:

I was not concerned with ordering; for that, mutexes will do fine. In C++, too, volatile prevents reordering only with respect to other volatiles (operations on volatiles occur in source-code order), but they may be reordered with respect to non-volatiles. What I am more concerned about is the flushing part: volatile essentially prevents caching and ensures flushing to RAM. Do Rust Mutexes do the same (for both complex and simple types)?

e.g.
Arc<Mutex<bool>>
Arc<Mutex<Vec<u8>>>

is it ok to do:

while vect.lock().unwrap().len() == 0 && boolean.lock().unwrap() { .... }

Or do these need to be marked volatile separately too?

Actually the ordering I'm talking about is related to caching. In fact, volatile will not prevent caching; it just inserts a memory barrier that multiple threads will sync their caches on.

Mutexes are Mutually Exclusive. Reads and writes across threads to a single value will be 100% synchronized with each other. One thread reading the value after another has written to it will see the latest value; one thread attempting to read or write the value while another one holds the lock will block until that lock is released.

This is, naturally, much more expensive than plain atomic reads and writes, because a contended lock involves a syscall and a context switch.

Mutex locks are not synchronized with each other, no. You probably want to put both values together into one struct and place that in the Mutex. Then you can lock both values at once.
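
A minimal sketch of that suggestion (the struct and field names are just illustrative): both values live under one Mutex, so a single lock() observes them at a consistent point in time.

use std::sync::{Arc, Mutex};

// Illustrative type: the flag and the queue share one lock.
struct Shared {
    running: bool,
    queue: Vec<u8>,
}

fn main() {
    let shared = Arc::new(Mutex::new(Shared { running: true, queue: Vec::new() }));

    // One lock() call instead of two separate ones.
    let guard = shared.lock().unwrap();
    if guard.queue.is_empty() && guard.running {
        // nothing queued yet and still running; in real code you would
        // wait on a Condvar here rather than spin on lock().
    }
}

For the wait-until-non-empty pattern in the original loop, std::sync::Condvar is the usual companion to a Mutex and avoids busy-waiting.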

The C/C++ volatile keyword is explicitly not designed for concurrency-related use cases; see "Should volatile acquire atomicity and thread visibility semantics?"


If you have many reads and just a few writes, you might also consider using RwLock instead of a Mutex.
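
A small sketch of that read-mostly case (names are illustrative): any number of readers can hold the read lock at once, while a writer takes it exclusively.

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let config = Arc::new(RwLock::new(String::from("v1")));

    // Several readers may hold the read lock at the same time.
    let readers: Vec<_> = (0..4)
        .map(|i| {
            let config = Arc::clone(&config);
            thread::spawn(move || {
                let value = config.read().unwrap();
                println!("reader {} sees {}", i, *value);
            })
        })
        .collect();

    for r in readers {
        r.join().unwrap();
    }

    // The write lock is exclusive; it waits until all readers are done.
    *config.write().unwrap() = String::from("v2");
}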

Volatile only implies a memory barrier in Java. In C it does prevent the compiler from reordering the accesses, but not the CPU, so using it for synchronization is generally unportable if not outright wrong (you can often get away with it on x86, but ARM, for example, has a weaker memory-ordering model).
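
Rust does have std::ptr::read_volatile / write_volatile, but like C's volatile they are meant for things such as memory-mapped I/O, not for inter-thread synchronization. For a plain cross-thread flag, the usual tool is an atomic with an explicit ordering; a minimal sketch:

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn main() {
    let stop = Arc::new(AtomicBool::new(false));

    let worker = {
        let stop = Arc::clone(&stop);
        thread::spawn(move || {
            // The Acquire load pairs with the Release store below, so once the
            // worker sees the flag it also sees everything written before the store.
            while !stop.load(Ordering::Acquire) {
                thread::sleep(Duration::from_millis(10));
            }
        })
    };

    thread::sleep(Duration::from_millis(50));
    stop.store(true, Ordering::Release);
    worker.join().unwrap();
}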
