Sharing data between one writer thread and many reader threads

Thanks a lot, and apologies for my poor English.

It seems there is currently no lock-free solution for this in Rust.

As suggested, RwLock is the best choice for now. What I can do, just as @SkiFire13 said, is split the struct into many small structs to reduce contention.

Right, but what I'm not clear on is what your actual use case is. Do you want a guarantee that each version you read was written in full by a previous writer (but you don't care whether you get the latest write or a stale version), or are you OK with the writes not being atomic?

You said that you're OK with the writes not being atomic, but now you're saying that you need a lock or similar to make the writes atomic.

If you're OK with stale data, a simple RwLock<Arc<Data>> can be all you need - the lock is then held for a very short time by readers cloning the inner Arc, and thus the writer is not slowed down. For some types, this can be replaced by AtomicU64, AtomicI32 and similar types, where the hardware supplies the locking for you; for Copy types, RwLock<CopyType> is also possible, and you copy the data when you're reading it:

use std::sync::{Arc, RwLock};

pub struct QuickLock<DataType> {
    data: RwLock<Arc<DataType>>,
}

impl<DataType> QuickLock<DataType> {
    pub fn new(init: DataType) -> Self {
        Self { data: RwLock::new(Arc::new(init)) }
    }
    /// Readers hold the lock only long enough to clone the inner Arc.
    pub fn read(&self) -> Arc<DataType> {
        Arc::clone(&*self.data.read().expect("poison"))
    }
    /// The writer swaps in a whole new Arc, so readers never see a torn write.
    pub fn write(&self, data: DataType) {
        *self.data.write().expect("poison") = Arc::new(data);
    }
}

impl<DataType: Copy> QuickLock<DataType> {
    /// For Copy types, just copy the value out while holding the read lock.
    pub fn copy(&self) -> DataType {
        **self.data.read().expect("poison")
    }
}

Thanks.

With all of your help, I can really improve my original design.

The arc-swap crate provides this same design but faster, by replacing the lock with atomic operations (the Arc<Data> pointer is atomically swapped on write and atomically loaded on read).

