RwLock.write() freezes even though there are no other write locks

Hello! I am pretty new to Rust.

I have code in which closures and some state (just a struct) must be shared with 'other threads'. Most often it will actually be the same thread, just in a different context inside closures.
I've read online that RwLock is the type I need, since Mutexes can result in deadlocks here: I only need one writer, but multiple readers.

Strangely enough, I get a deadlock in my situation as well. It seems totally impossible to me: I've checked in the debugger when the RwLock.write() lock is obtained, and it happens only twice:

  • Once before the run() function is called. The lock leaves the scope early, so it would be free for others.
  • The second time when a read lock is already obtained via the run() function. Obtaining the write lock (in the 'Click' lambda) will cause a deadlock even while there are no other write locks!

I can't explain what's going on. I've stepped through the local Rust toolchain code (the std internals) in the debugger, and the first caller leaves the write() and write_unlock() functions as it should. The second caller locks up for some reason.
There's a simple example online on how to use RwLocks. That example does work on the playground, but I can't see why that one works and my code doesn't. The only real difference is that it uses real threads instead of closures.

Playground of 'lockup' code - Stuff gets interesting starting at line 91. All of the above is sort of slimmed down code for the types that I am using. Line 119 will cause the lockup.
Playground of working sample.

Can anyone tell me what is going on here? I've written multi-threaded code for much longer but Rust is very stubborn on that subject! (Especially with sharing data)

Perhaps I just need to use Crossbeam or something, but I think there should be no problem with this code in vanilla Rust.

Thanks a ton in advance :grinning:

Isn't your code keeping the read lock guard somewhere? If so, it's documented behavior.

Locks this rwlock with exclusive write access, blocking the current thread until it can be acquired.

This function will not return while other writers or other readers currently have access to the lock.

Returns an RAII guard which will drop the write access of this rwlock when dropped.

A write lock must be an exclusive lock, otherwise it could not yield the &mut T to the inner type. If a read lock guard and a write lock guard could coexist, one thread could write to the value while another thread reads it, which is a data race.
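The "read guard still alive in the same thread" case can be demonstrated in a few lines with std::sync::RwLock. A minimal sketch of the fix: make sure the read guard is dropped before write() is called.

```rust
use std::sync::RwLock;

// Read the current value, then update it. Calling write() while a read
// guard from this same thread is still alive would deadlock, so the
// read guard must be gone before write() runs.
fn bump(lock: &RwLock<i32>) -> i32 {
    // The temporary read guard is dropped at the end of this statement,
    // so no guards are outstanding when write() is called below.
    let current = *lock.read().unwrap();

    // Safe now: the lock is free, so exclusive access can be granted.
    *lock.write().unwrap() = current + 1;
    *lock.read().unwrap()
}

fn main() {
    let lock = RwLock::new(0);
    println!("{}", bump(&lock)); // prints 1
}
```

Binding the read guard to a named variable (`let r = lock.read().unwrap();`) and then calling `lock.write()` before `r` is dropped is exactly the self-deadlock described above.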

Oh goody, I thought the documentation meant "Multiple readers, and/or one writer"
What you're saying would explain the problem...

I need to check on how to solve my problem based on the code in the other playground link, or some other posts on here. Thanks!

RwLocks are just as prone to deadlocks as mutexes. In fact, your question here is about a RwLock deadlocking.

Don't use RwLock just for that reason.

I know, but I misinterpreted the documentation as 'Multiple readers and one writer', which would be perfect for my situation. I therefore thought that this deadlock was a bug in Rust.

There is no type that allows one writer + one or more readers, since this would violate one of the basic guarantees of Rust.

Depending on what you want to achieve, you might want to use a higher level primitive than a Mutex or RwLock (like channels for example). What are you trying to achieve?

Depending on what you want to achieve, you might want to use a higher level primitive than a Mutex or RwLock (like channels for example)

I've looked into channels, but this code is mainly single-threaded. I don't think channels fit here because I don't really need inter-thread communication; I only need to edit some variables in a global state.

I am using ImGui to create a GUI, and I have a struct which holds the state of the application.
I want to create a mini-MVVM framework for it so that controlling what everything on the screen does becomes easier.

This means that the event loop of the application must have full access to the state & GUI. Since the MvvmWindow's components hold lambdas or pointers to variables (a sort of viewmodel; they will often point into the state), mutable access to the state is also required through those lambdas. I did not post it in my playground example, but the 'click' handler is supposed to change the 'clicked' bool in the state. The MvvmWindow struct constructs the UI by calling the render() function on every component in it.
The button's render function looks like this, as an example:

 fn render(&self, ui: &Ui) {
     let value = self.text.get_value(); // Execute lambda to retrieve the label for the button
     if ui.button(&ImString::new(value), [0.0, 0.0]) {
         // run the 'click' lambda here, which mutates the state
     }
 }

Rust's strict borrowing mechanism makes it very hard to do something like this. Since I am new to Rust, it is probable that I am making structural mistakes because I don't know what best practice is in Rust. In C++ and friends you would just pass a pointer around carelessly; in Rust it doesn't really work that way.

I think that it would be best to keep the MvvmWindow struct out of the state struct. It doesn't really belong there, and I believe that it would fix this problem.

If you wish, I could post a link to the full code. It isn't as neat as I want it to be however!

One way to handle this sort of thing is a double-buffering approach, where everything is rendering from the current frame's state but updating the next frame's state. If you use copy-on-write, there's very little cost for the majority of frames where nothing changes. Some (untested) pseudocode for that might look something like this:

static STATE: Mutex<Arc<State>> = ...; // lazily initialized in real code

// Copy-on-write update of the pending state
fn update(xact: impl FnOnce(&mut State)) -> Result<(), ...> {
    xact(Arc::make_mut(STATE.lock()?.deref_mut()));
    Ok(())
}

fn win_render(...) {
    // Get a reference to the current state to use during rendering
    let state = Arc::clone(STATE.lock()?.deref()); // lock released at end of statement
    for c in components {
        c.render(&state, ...);
    }
}

// Button
fn render(&self, state: &State, ui: &Ui) {
    let value = self.text.get_value(state);
    if button_clicked() {
        // closure operates on the next frame's state
        update(|s| s.clicked = true);
    }
}
If you ever need it, moving the state update out of the UI thread to a worker thread is then relatively straightforward: the update() function queues requests for the worker thread instead of running them directly.
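The copy-on-write part of that pseudocode can be tried in isolation. A runnable sketch, with a hypothetical State struct standing in for the application state:

```rust
use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct State {
    clicked: bool,
}

// Snapshot for rendering: clones the Arc (cheap), not the State itself.
fn snapshot(state: &Mutex<Arc<State>>) -> Arc<State> {
    Arc::clone(&*state.lock().unwrap())
}

// Copy-on-write update: the State is deep-cloned only if a snapshot
// still holds the old Arc; otherwise it is mutated in place.
fn update(state: &Mutex<Arc<State>>, xact: impl FnOnce(&mut State)) {
    let mut guard = state.lock().unwrap();
    xact(Arc::make_mut(&mut *guard));
}

fn main() {
    let state = Mutex::new(Arc::new(State { clicked: false }));

    let before = snapshot(&state);        // the renderer's view of this frame
    update(&state, |s| s.clicked = true); // e.g. the button's click handler

    // The old snapshot is untouched; new snapshots see the update.
    println!("{} {}", before.clicked, snapshot(&state).clicked); // prints "false true"
}
```

Note that neither function ever holds the Mutex guard across a call back into rendering code, which is what avoids the self-deadlock from the original question.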

Thanks for your suggestion! I'll see whether that is doable. I need to check whether I can store all callbacks that must be executed when a button is pressed.

I am no GUI expert but it seems to me that almost always when one has a GUI one also ends up with what we may call threads/tasks/whatever.

GUIs I have used, like Qt, have an "event loop" running around and around that detects events from keyboards, mice, etc. and triggers bits of code in your application with those events to do something and update the GUI.

That GUI event loop should never be stalled by any long running processing or waiting for I/O. Else the user loses control. So for example waiting on reading/writing a big file potentially hangs the GUI for the duration. Not good.

What you need there then is some kind of concurrent execution, the GUI and whatever action is going on in your application.

To this end, things like Qt provide all kinds of APIs for disk access, networking, etc. that integrate with the Qt event system and can effectively run as background tasks while the GUI remains responsive.

I have no idea what your GUI looks like, but this all suggests it should run in its own thread, or perhaps a tokio async task, with the application code running in other threads/tasks.

Either way, one can then communicate events between the two with mpsc channels, and share chunks of "global" state through atomic shared pointers, Arc.

Global in quotes there because I suspect you don't actually want C/C++ style global chaos and Rust will make it hard to do.

This has the great bonus that it decouples your application code from your GUI code.
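A minimal sketch of that event-loop shape with std::sync::mpsc. The event names and the click-counting logic are made up for illustration; a real GUI would send whatever events its toolkit produces:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical events the GUI loop sends to the application logic.
enum UiEvent {
    ButtonClicked,
    Quit,
}

// "Application" side: consumes events until Quit, returns the click count.
fn run_app(rx: mpsc::Receiver<UiEvent>) -> u32 {
    let mut clicks = 0;
    for event in rx {
        match event {
            UiEvent::ButtonClicked => clicks += 1,
            UiEvent::Quit => break,
        }
    }
    clicks
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // The application runs on its own thread; the GUI loop never blocks on it.
    let app = thread::spawn(move || run_app(rx));

    // "GUI" side: fire events and keep rendering.
    tx.send(UiEvent::ButtonClicked).unwrap();
    tx.send(UiEvent::ButtonClicked).unwrap();
    tx.send(UiEvent::Quit).unwrap();

    println!("clicks: {}", app.join().unwrap()); // prints "clicks: 2"
}
```

The GUI thread only ever calls send(), which does not block, so long-running work on the application side can never stall the event loop.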


This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.