I implemented a distributed lock based on ZooKeeper and I want to test cancellation of the distributed lock, but this causes a deadlock. Here is code using an AtomicUsize to simulate the ZooKeeper distributed lock (I chose AtomicUsize just because it is easy to share between threads; I'm not sure I'm using it in the preferred way).
Code updated: 2019-08-22
Playground: AtomicUsize simulation
use std::{
    sync::{
        atomic::{self, Ordering},
        Arc, Mutex,
    },
    thread,
    time::Duration,
};
fn main() {
    // use an AtomicUsize to simulate a ZooKeeper distributed lock,
    // acting as a remote lock mechanism
    let counter = Arc::new(atomic::AtomicUsize::new(1));
    let lock1 = DistrLock::new(counter.clone());
    let lock1_handle = Arc::new(Mutex::new(lock1));
    let counter_thread = counter.clone();
    let lock1_handle_thread = lock1_handle.clone();

    // simulate another machine using the distributed lock
    let thandle1 = thread::spawn(move || {
        let mut lock2 = DistrLock::new(counter_thread);
        println!("lock 1");
        lock2.acquire().expect("debug thread1 1"); // distributed lock acquired
        thread::sleep(Duration::from_millis(10_000));
        println!("lock2 release");
        lock2.release().expect("debug thread1 2");
    });

    // second thread tries to cancel lock1
    let thandle2 = thread::spawn(move || {
        thread::sleep(Duration::from_millis(2000)); // by now the main thread holds lock1's mutex and is waiting for the distributed lock
        println!("lock 3");
        lock1_handle_thread
            .lock() // waits for lock1's mutex; deadlocks, because lock1 is itself waiting for lock2 to release the distributed lock
            .expect("debug thread2 1")
            .cancel() // test cancellation of the distributed lock
            .expect("debug thread2 2");
        println!("lock 4"); // not reached until lock2 releases, i.e. lock1 cannot be canceled until lock1 gets the distributed lock
    });

    thread::sleep(Duration::from_millis(1000));
    println!("lock 2");
    lock1_handle
        .lock() // mutex lock acquired
        .expect("debug main 1")
        .acquire() // waits for the distributed lock held by lock2 to be released
        .expect("debug main 1.1");
    lock1_handle
        .lock()
        .expect("debug main 2")
        .release()
        .expect("debug main 2.1");
    thandle1.join().expect("debug main 3");
    thandle2.join().expect("debug main 4");
}
struct DistrLock {
    counter: Arc<atomic::AtomicUsize>,
    canceled: bool,
}

impl DistrLock {
    pub fn new(counter: Arc<atomic::AtomicUsize>) -> DistrLock {
        DistrLock {
            counter,
            canceled: false,
        }
    }
    pub fn acquire(&mut self) -> Result<bool, ()> {
        loop {
            if self.canceled {
                return Ok(false);
            }
            if self.counter.load(Ordering::SeqCst) < 1 {
                thread::sleep(Duration::from_millis(500));
                continue;
            } else {
                // note: check-then-decrement is not atomic as a whole,
                // but that is good enough for this simulation
                self.counter.fetch_sub(1, Ordering::SeqCst);
                return Ok(true);
            }
        }
    }
    pub fn release(&mut self) -> Result<(), ()> {
        self.counter.fetch_add(1, Ordering::SeqCst);
        Ok(())
    }

    pub fn cancel(&mut self) -> Result<(), ()> {
        self.canceled = true;
        Ok(())
    }
}
There would be no problem implementing this in Python or C++, because I would not need a mutex to protect ZkDistrLock; I could just call cancel from a second thread. But in Rust, if I lose the Mutex, the compiler complains that the use of lock1 is not thread safe. Is there some way I can lose the Mutex, or is there a different idiom that I can use?
My original implementation is in the playground. The playground contains the distributed lock implementation but not all of the code shown here, so it does not compile.