But according to the docs, trying to lock more than once should panic. I don't want that, but surprisingly the above works. Any idea whether I can use RwLock and Mutex safely for nested cases in a single thread?
Is there any alternative that doesn't panic? It can be a type, anything other than RwLock.
A possible cause of the issue is that the lock will panic if the number of readers overflows. From the source for the Unix implementation of read-write locks:
// Check for overflow.
if has_reached_max_readers(state) {
    panic!("too many active read locks on RwLock");
}
Normally, this shouldn't be possible, but if you're acquiring millions of guards in a tight loop this could cause problems. I'm not sure if this is the only cause of panics, so maybe someone with more knowledge of the internals of how RwLock is implemented can chime in.
I just ran a test program to verify that this is the case. Here's the program:
fn main() {
    let lock = std::sync::RwLock::new(0u8);
    let mut guards = Vec::new();
    loop {
        guards.push(lock.read().unwrap());
    }
}
After leaving it running for a little while, here's the result:
thread 'main' panicked at 'too many active read locks on RwLock', library/std/src/sys/unix/locks/futex_rwlock.rs:123:17
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
So that seems to be the main cause of crashes. It's still not "guaranteed" behavior, so I wouldn't rely on it.
Even if the implementation were guaranteed to panic, this wouldn't hit that case, since it immediately releases the first lock before acquiring the second. See @claytonwramsey's example for holding multiple guards at once.
I was thinking of a DOM-like node implementation that allows for mutation of inner state across threads. Right now, though, I'm trying to finish a copy of TC39's Observable, which mutates an observer for subscriber cleanup.
It's not that locking more than once doesn't work. It's that locking more than once simultaneously doesn't work.
You aren't locking the lock simultaneously here. You are completely ignoring the first guard, which immediately unlocks before you acquire the second guard.
There is nothing non-deterministic here; this is guaranteed to work, it's a correct (albeit utterly useless) usage pattern for any lock.
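Concretely, the pattern being described looks roughly like this (a minimal sketch, not the original code; the lock and its contents are placeholders):

fn main() {
    let lock = std::sync::RwLock::new(0u8);

    // The guard returned by read() is never bound to a variable, so it is a
    // temporary that is dropped at the end of this statement; the lock is
    // released again immediately.
    lock.read().unwrap();

    // By the time write() runs the lock is free, so this is an ordinary,
    // non-recursive acquisition and is guaranteed to work.
    *lock.write().unwrap() += 1;

    // If the read guard had been bound (`let guard = lock.read().unwrap();`)
    // and were still alive here, the write() above would be a recursive use
    // of the lock and might deadlock or panic.
    assert_eq!(*lock.read().unwrap(), 1);
}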
In the following case, if I comment out drop(observer);, I get an infinite loop, not a panic (since the guard is still active). If that weren't the case, I could have simplified the following code further, for instance by omitting the observer binding:
let observer = subscription.observer.read().unwrap();
if let Some(o) = observer.as_ref().map(|o| Arc::clone(o)) {
    drop(observer);
    *subscription.observer.write().unwrap() = None;
    o.read().unwrap().complete();
}
I want to use atomic reference counting internally because the gc crate is thread-local and I want nodes and the reactive user UI components (which are not "nodes") to be shareable and mutable across different threads, including for the gaming API (which would use a concurrent ECS similar to the Bevy engine, but referring to the graphical nodes API). So Node stores an Arc inside and WeakRefNode stores a Weak inside, where NonRefNode is an internal type that is not exposed to the developer.
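Roughly, the shape I have in mind is something like this (just a sketch; the field names and the contents of NonRefNode are made up for illustration):

use std::sync::{Arc, RwLock, Weak};

// Internal node type, not exposed to the developer. Its contents here are
// purely illustrative.
struct NonRefNode {
    children: RwLock<Vec<Node>>,
}

// Public handle: cloning clones the Arc, so nodes can be shared and mutated
// (through the RwLock) across threads.
#[derive(Clone)]
pub struct Node(Arc<NonRefNode>);

// Non-owning handle that can be upgraded back to a Node while the node is alive.
#[derive(Clone)]
pub struct WeakRefNode(Weak<NonRefNode>);

impl Node {
    pub fn downgrade(&self) -> WeakRefNode {
        WeakRefNode(Arc::downgrade(&self.0))
    }
}

impl WeakRefNode {
    pub fn upgrade(&self) -> Option<Node> {
        self.0.upgrade().map(Node)
    }
}

fn main() {
    let node = Node(Arc::new(NonRefNode { children: RwLock::new(Vec::new()) }));
    let child = Node(Arc::new(NonRefNode { children: RwLock::new(Vec::new()) }));

    // Shared, lock-protected mutation of the node's inner state.
    node.0.children.write().unwrap().push(child);

    // Weak handles don't keep the node alive, but can be upgraded while it is.
    let weak = node.downgrade();
    assert!(weak.upgrade().is_some());
}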
use std::sync::RwLock;

let lock = RwLock::new(5);

// many reader locks can be held at once
{
    let r1 = lock.read().unwrap();
    let r2 = lock.read().unwrap();
    assert_eq!(*r1, 5);
    assert_eq!(*r2, 5);
} // read locks are dropped at this point

// only one write lock may be held, however
{
    let mut w = lock.write().unwrap();
    *w += 1;
    assert_eq!(*w, 6);
} // write lock is dropped here
Can you explain how it is guaranteed that the first lock will be unlocked before the second lock is acquired? There is no end of a scope, and I don't know what the drop guarantees look like for data that isn't bound to a variable.
Is there anything wrong with that? Since you are holding on to the first lock, you are using the lock recursively in this case, which isn't required to work. Whether it panics, deadlocks or livelocks doesn't really matter.
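For example, something like this minimal sketch is a recursive use; with the current std implementation it will typically just hang on the write() call rather than panic, but neither outcome is guaranteed:

use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(0u8);

    // Still holding this read guard...
    let guard = lock.read().unwrap();

    // ...so acquiring the write lock on the same thread is a recursive use.
    // It may deadlock (blocking forever on the guard above) or panic; the
    // behaviour is explicitly not specified.
    *lock.write().unwrap() += 1;

    drop(guard); // never reached if the line above deadlocks
}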
Ok, and I guess temporaries get dropped immediately?
Also, it's interesting that try_read() does not document the same panicking behaviour, nor does the TryLockError enum include a variant for WouldHavePanicked.
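For what it's worth, the only error variants try_read() exposes today are Poisoned and WouldBlock. Here's a minimal sketch of handling them (the failure is forced by holding a write guard on the same lock; whether the reader-overflow case also surfaces as WouldBlock, I haven't checked):

use std::sync::{RwLock, TryLockError};

fn main() {
    let lock = RwLock::new(0u8);

    // Hold the write lock so that try_read() cannot succeed.
    let _w = lock.write().unwrap();

    match lock.try_read() {
        Ok(guard) => println!("read: {}", *guard),
        Err(TryLockError::WouldBlock) => println!("would block"),
        Err(TryLockError::Poisoned(_)) => println!("poisoned"),
    }
}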