I want to (unsafely) share a ring buffer between multiple threads in a performance-critical application. Only one thread (the owner) will be allowed to write; all others will only read. Race conditions will be avoided by external means, even though they would be non-critical for my use case.
I'd normally (in safe code) implement this with an Arc<Mutex<[T; SIZE]>>, but that doesn't allow reading and writing at the same time and has run-time overhead on every access. Also, every clone allows both read and write operations, which is undesired.
I thought about owning the buffer and unsafely handing out references, but I don't want dangling pointers, so some Arc will be needed. That, in turn, will prohibit writing to the buffer.
I guess one solution might be to unsafely create and store a &mut [T; SIZE] from an Arc<[T; SIZE]> for the owner, and to hand out clones of the Arc to the readers.
I'm pretty unfamiliar with the problems that may arise from circumventing the borrow checker.
Edit: I guess I need something like this, but I don't know how to make it work.
use std::sync::Arc;

type T = i32;

struct Producer {
    buffer: Arc<[T; 256]>,
    write_handle: Box<[T; 256]>,
}

impl Producer {
    fn new() -> Self {
        let buffer = [T::default(); 256];
        Self {
            buffer: Arc::new(buffer),
            // Only compiles because [T; N] is Copy -- this stores a second,
            // independent copy of the array, not a handle into the Arc.
            write_handle: Box::new(buffer),
        }
    }

    fn modify(&mut self) {
        // Writes only touch the boxed copy; readers holding the Arc never see them.
        self.write_handle[0] = 1;
    }

    fn clone_for_read_access(&self) -> Arc<[T; 256]> {
        self.buffer.clone()
    }
}

fn main() {
    let mut p = Producer::new();
    assert_eq!(0, p.clone_for_read_access()[0]);
    p.modify();
    assert_eq!(1, p.clone_for_read_access()[0]); // fails: the Arc's copy still holds zeros
}
I guess with this code the array just ends up duplicated (and a less naive construction would risk a double drop). A &mut to the Arc's contents might help, but that gives me lifetime problems.
Can I somehow use Arc<UnsafeCell<[T; 256]>>? It seems that some kind of cell is necessary anyway, so that the shared references held by consumers don't allow the compiler to optimize away any writes.
Instead of reinventing something yourself, you may use std::sync::RwLock,
which is exactly what you are looking for (one writer, many readers): "This type of lock allows a number of readers or at most one writer at any point in time. The write portion of this lock typically allows modification of the underlying data (exclusive access) and the read portion of this lock typically allows for read-only access (shared access)."
So you are safe, and performance is still very good.
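A minimal sketch of what that could look like (the i32 element type, the 256-element size, and the thread layout are stand-ins for your actual setup):

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    // One shared buffer behind an RwLock: many readers or one writer at a time.
    let buffer = Arc::new(RwLock::new([0i32; 256]));

    let reader_buf = Arc::clone(&buffer);
    let reader = thread::spawn(move || {
        // Any number of read locks may be held concurrently.
        let guard = reader_buf.read().unwrap();
        guard[0]
    });

    {
        // The write lock gives exclusive access for the lifetime of the guard.
        let mut guard = buffer.write().unwrap();
        guard[0] = 1;
    }

    // Depending on scheduling the reader observes either 0 or 1; both are valid.
    let _seen = reader.join().unwrap();
}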
The documentation of UnsafeCell disallows handing out &T and &mut T at the same time, without describing what could happen. Is this only because of data races (then I'd be fine), or may other evil things happen?
It is because of the reference semantics. In short:
if there's a &T, the compiler can optimize code assuming that the contents of T (except for any UnsafeCells) are immutable for the lifetime of the reference;
if there's a &mut T, the compiler can optimize code assuming that this is the only way to change the contents of T (again, modulo any UnsafeCell).
It means, at least, that mutations sneaked in past an existing reference can easily be optimized away.
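A hypothetical sketch of what that means in practice (the function and the imagined racing write are made up for illustration):

// `x: &i32` promises the compiler that the pointed-to value cannot change for
// the lifetime of the reference (no UnsafeCell is involved), so it may fold
// the second load into the first. An unsynchronized write from another thread
// in between would be a data race (UB), and from this function's point of view
// that write can effectively be optimized away.
fn read_twice(x: &i32) -> (i32, i32) {
    let first = *x;
    // ...imagine an external, unsynchronized write landing here...
    let second = *x; // may be compiled as a reuse of `first`
    (first, second)
}

fn main() {
    let value = 42;
    assert_eq!((42, 42), read_twice(&value));
}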
A data race in a language that considers data races undefined behavior is never fine. The compiler will simply not do what you think it would do. (Or worse, it will do what you think it'll do, but any seemingly unrelated change to your code or the toolchain could change that.)
You mean that between any two accesses where the contents might have changed in the background, I have to go through some UnsafeCell to avoid wrong assumptions by the compiler? That makes sense to me.
I was thinking that using UnsafeCell<[T; N]> would tell the compiler not to optimize any access to the array, but @newpavlov made me believe the scope of this assertion might be wrong.
As written above, the data race will be avoided by external means: a reader will never read a location that a writer might have write access to at the same time. But it's still a single array, so the compiler cannot take care of this for me.
You have to make sure that while you hold a &mut reference to an element, no one else holds a reference to that element.
With UnsafeCell<[T; 256]>, if you have a mutable reference into the contents of the array (be it one element or the whole array), no one else can have references into the contents of this array. Note that this restriction is stricter than just "you can't write and read simultaneously". So if you want to mutate one element while reading others from different threads, each element should be wrapped in its own UnsafeCell (see the sketch below).
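A hypothetical sketch to make the difference between the two layouts concrete (the type names are mine):

use std::cell::UnsafeCell;

// One cell around the whole array: turning the *mut [i32; 256] from `get()`
// into a &mut asserts exclusive access to the entire array, so no other
// reference into the array may exist at that moment.
type WholeArrayCell = UnsafeCell<[i32; 256]>;

// One cell per element: exclusivity is tracked per slot, so (given external
// synchronization) one slot can be written while other slots are read.
type PerElementCells = [UnsafeCell<i32>; 256];

fn main() {
    let per_element: PerElementCells = std::array::from_fn(|_| UnsafeCell::new(0));
    unsafe {
        *per_element[0].get() = 1;          // write slot 0
        let _other = *per_element[1].get(); // read slot 1 "at the same time"
    }
    let _whole: WholeArrayCell = UnsafeCell::new([0; 256]);
}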
Reader does not implement Deref; that's the whole point of the newtypes, cf. the playground.
What I have described is the more general way to safely share stuff with an API enforcing what you asked for.
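This is not the code from the linked playground, but a minimal sketch of the kind of newtype API described here; the names (Buffer, Writer, Reader) and the u8 index are assumptions:

use std::cell::UnsafeCell;
use std::sync::Arc;

const SIZE: usize = 256;
type T = i32;

// Shared storage: one UnsafeCell per slot, so the writer may mutate one slot
// while readers access other slots. Synchronization remains the caller's job.
struct Buffer {
    slots: [UnsafeCell<T>; SIZE],
}

// SAFETY: the user guarantees by "external means" that a slot is never read
// while it is being written.
unsafe impl Sync for Buffer {}

pub struct Writer(Arc<Buffer>);

#[derive(Clone)]
pub struct Reader(Arc<Buffer>); // deliberately does NOT implement Deref

impl Writer {
    pub fn new() -> Self {
        Writer(Arc::new(Buffer {
            slots: std::array::from_fn(|_| UnsafeCell::new(T::default())),
        }))
    }

    pub fn reader(&self) -> Reader {
        Reader(Arc::clone(&self.0))
    }

    pub fn write(&mut self, index: u8, value: T) {
        // SAFETY: external synchronization guarantees no reader uses `index` right now.
        unsafe { *self.0.slots[usize::from(index)].get() = value }
    }
}

impl Reader {
    pub fn read(&self, index: u8) -> T {
        // SAFETY: external synchronization guarantees the writer is not writing `index` right now.
        unsafe { *self.0.slots[usize::from(index)].get() }
    }
}

fn main() {
    let mut writer = Writer::new();
    let reader = writer.reader();
    writer.write(0, 1);
    assert_eq!(1, reader.read(0));
}

There is a single Writer per buffer (it is not Clone), Readers can be cloned freely, and the unsafe impl Sync is what lets the Arc cross thread boundaries despite UnsafeCell being !Sync.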
Rust may be great, but from the following list you can pick at most three:
sharing / aliasing
    (not having this one is the &mut _ case)
with mutation (at least one writer)
    (not having this one is the &_ case)
without any kind of runtime check
    (not having this one is any solution from ::std::sync, or ::std::cell::{Cell, RefCell} for thread-local solutions)
safely
    (not having this one is the case of sharing *mut _ around, à la (unsynchronised) C)
You can, of course, find more efficient "synchronisation runtimes" for your specific problem (in which case you will need unsafe to implement them, and code reviews to verify their soundness).
This was the whole point of my question. I was asking how to implement it in an unsafe manner because it's impossible to implement safely. My external code takes care of the data races, but I didn't want to break anything else.
Then you either directly carry a newtype around a *mut [T; 256], together with the right API that allocates and drops the buffer (a sketch of this variant follows below), or you carry multiple clones of an array: Arc<[UnsafeCell<T>; 256]> around, as said before, so that
you can read with:
    unsafe {
        *array.get_unchecked(<usize as From<u8>>::from(n)).get()
    }
and you can write with:
    unsafe {
        *array.get_unchecked(<usize as From<u8>>::from(n)).get() = ...;
    }
(UnsafeCell::get gives you a *mut T, which you can dereference to read from or write to.)
(Note that if your buffers are always exactly 256 elements long, you can skip the runtime bounds check on index access by casting a u8 index to a usize, as shown.)
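For the first option (a newtype around a raw *mut [T; 256] with its own allocate/drop API), a minimal sketch might look like this; the type name and methods are placeholders, and keeping readers from outliving the owner or racing with the writer is entirely up to the surrounding code:

// The owner allocates the array on the heap, hands out copies of the raw
// pointer, and frees the allocation exactly once when dropped.
struct OwnedBuffer(*mut [i32; 256]);

impl OwnedBuffer {
    fn new() -> Self {
        OwnedBuffer(Box::into_raw(Box::new([0i32; 256])))
    }

    fn as_ptr(&self) -> *mut [i32; 256] {
        self.0
    }
}

impl Drop for OwnedBuffer {
    fn drop(&mut self) {
        // SAFETY: the pointer came from Box::into_raw in `new` and is freed only here.
        unsafe { drop(Box::from_raw(self.0)) };
    }
}

fn main() {
    let owner = OwnedBuffer::new();
    let ptr = owner.as_ptr();
    unsafe {
        (*ptr)[0] = 1;            // write through the raw pointer
        assert_eq!(1, (*ptr)[0]); // read through the raw pointer
    }
}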
This may be true in assembly or in C/C++ etc., but it cannot be true with Rust's LLVM backend unless you correctly inform the compiler, on a per-item (or per-array) basis, that optimizations assuming the item (or array) cannot change behind a reference must be suppressed. Unless you wrap your data in UnsafeCell, the compiler is free to optimize in any way that is mathematically and logically consistent with the constraints you have given it.
When you skip the UnsafeCell or inter-thread locking, you are figuratively playing Russian roulette. It might work in some cases and not in others, but you'll never be able to rely on the results, or on successive point releases of the compiler not drastically changing those results.
I already accepted UnsafeCell as the appropriate solution, but it doesn't solve data races on its own. That is why I mentioned that I'll take care of those by external means.