std::sync::Mutex vs futures::lock::Mutex vs futures_lock::Mutex for async

Which Mutex should I use when spawning 2 futures in async code? It seems like futures::lock::Mutex is the way to go here given the documentation; however, I also found futures_lock::Mutex which might just be an alternate implementation. Is using std::sync::Mutex always wrong? What about std::sync::RwLock? I don't see an equivalent in futures::lock... what should I do for something that's mostly reads, but the occasional update/write from another future?

Finally, how do you even know when you should use a Mutex when writing async code? Will the compiler always and correctly complain if I attempt to mutate a variable across futures? Is a good rule-of-thumb to simply not use a Mutex and wait for the compiler to complain, then add them as needed?

Sorry for all the questions here... just seems like guidance around this is lacking, or I'm terrible at Google :expressionless:


In async code you never want to use a std::sync::Mutex or std::sync::RwLock. These block the current thread instead of yielding execution of the current future, so the async runtime's worker thread sits idle until the lock is free. At best (e.g. on a multi-threaded runtime) this just causes performance degradation, but it can also trigger a deadlock or resource starvation.

EDIT: as @alice pointed out, it's okay to use a synchronous lock as long as it isn't held across a yield point (i.e. avoid let guard = mutex.lock(); do_something_async().await; use_locked_item(&guard)). I'd still prefer to use a futures::lock::Mutex, because then you don't need to worry about accidental deadlocks when future-you moves some lines around 6 months from now.
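Here's a runnable sketch of the safe pattern. The tiny block_on and do_something_async below are hand-rolled stand-ins for a real runtime and a real async call (not library APIs) so the example compiles with std alone:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Mutex;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Tiny executor for futures that are immediately ready, so the sketch
// runs without pulling in an async runtime.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    // SAFETY: the waker does nothing, which is fine for ready futures.
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` is shadowed below and never moved after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

async fn do_something_async() {} // stand-in for a real async call

async fn bump(counter: &Mutex<u32>) {
    // Keep the guard in its own scope so it is dropped *before* the
    // yield point; the synchronous lock is then fine.
    {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // guard dropped here: nothing is held across the .await
    do_something_async().await;
}

fn main() {
    let counter = Mutex::new(0);
    block_on(bump(&counter));
    block_on(bump(&counter));
    assert_eq!(*counter.lock().unwrap(), 2);
}
```

The braces around the guard are the whole trick: the MutexGuard is dropped at the closing brace, so the future never suspends while holding the lock.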

I'd say the rule of thumb is to use a Mutex when multiple components (futures, objects, whatever) need to mutate the same data concurrently. This is almost identical to when you'd use a std::sync::Mutex in synchronous code; you just need a Mutex type that is aware of the environment it runs in, so it uses the right mechanism when waiting for the lock to become available.

For example, say you've got a common counter which gets incremented/decremented by half a dozen tasks that you've spawn()ed onto the runtime. You'd wrap the counter in a futures::lock::Mutex to make sure only one task can mutate the counter at a time, and then pass a reference to this Mutex<u32> to each task (either with a & or maybe by putting the mutex behind a reference-counted pointer like Arc) so they can do their thing.
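That counter pattern, sketched with std::thread and std::sync::Mutex so it runs without a runtime (in async code you'd swap thread::spawn for your runtime's spawn() and the std mutex for futures::lock::Mutex, whose lock() is awaited and doesn't need unwrap()):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The shared counter goes behind a reference-counted pointer so
    // every task can hold a handle to the same Mutex.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..6)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Only one task at a time gets past this point.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 6);
}
```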

Generally I find that this is not true. Using a standard library mutex is perfectly fine when the duration of any locking on the mutex is very short. In particular the lock cannot be held across an .await.

For example, in the mini-redis example, which the Tokio team prepared as an example of idiomatic asynchronous Rust, they use a mutex from the standard library as described here. If you look in the source, it contains this comment:

A Tokio mutex is mostly intended to be used when locks need to be held across .await yield points. All other cases are usually best served by a std mutex. If the critical section does not include any async operations but is long (CPU intensive or performing blocking operations), then the entire operation, including waiting for the mutex, is considered a "blocking" operation and spawn_blocking should be used.
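The last sentence of that quote looks roughly like this in code. Here std::thread::spawn stands in for tokio::task::spawn_blocking so the sketch runs without Tokio, and expensive_update is a made-up example of a long, CPU-heavy critical section:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// A long, CPU-intensive critical section. Per the quote above, the
// whole operation (including waiting for the mutex) counts as
// blocking, so under Tokio you would run it via spawn_blocking rather
// than directly on an async worker thread.
fn expensive_update(data: &Mutex<Vec<u64>>) -> u64 {
    let mut guard = data.lock().unwrap();
    guard
        .iter_mut()
        .for_each(|x| *x = x.wrapping_mul(31).wrapping_add(7));
    guard.iter().sum()
}

fn main() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));
    // thread::spawn stands in for tokio::task::spawn_blocking here.
    let handle = {
        let data = Arc::clone(&data);
        thread::spawn(move || expensive_update(&data))
    };
    let checksum = handle.join().unwrap();
    println!("checksum = {checksum}");
}
```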

This paragraph talks about Tokio's mutex, but the same idea applies to all asynchronous mutexes. I also recommend reading this paragraph, which explains what blocking means in the context of async/await, and when locking a mutex is considered blocking.


You need a mutex when you want to modify some value from multiple independently spawned tasks. If the code compiles without a mutex, then it should be good unless you break something with unsafe code. You should generally use the standard library mutex if the duration of any lock is very short and never held across an .await, and the async mutex if you sometimes need to hold it for a long time, or if the lock is ever held across an .await.

Check out the documentation for Tokio's mutex:

There are some situations where you should prefer the mutex from the standard library. Generally this is the case if:

  1. The lock does not need to be held across await points.
  2. The duration of any single lock is near-instant.

On the other hand, the Tokio mutex is for the situation where the lock needs to be held for longer periods of time, or across await points.

This should apply equally for all asynchronous mutexes.

There might not be an RwLock in the futures crate, but both Tokio and async-std provide one. You can find the doc for Tokio's here.

Unlike with mutexes, I recommend using an asynchronous rw-lock in all asynchronous applications, because even if each individual read lock is held for only a very short time, many overlapping read locks can keep a writer waiting for a long time, which is not a good idea. The std RwLock documentation says this:

The priority policy of the lock is dependent on the underlying operating system's implementation, and this type does not guarantee that any particular policy will be used.

If the policy were write-preferring, my opinion on this would be different. Note that Tokio's RwLock uses this policy:

The priority policy of Tokio's read-write lock is fair (or write-preferring), in order to ensure that readers cannot starve writers. Fairness is ensured using a first-in, first-out queue for the tasks awaiting the lock; if a task that wishes to acquire the write lock is at the head of the queue, read locks will not be given out until the write lock has been released. This is in contrast to the Rust standard library's std::sync::RwLock, where the priority policy is dependent on the operating system's implementation.

As far as I know, the async-std RwLock does not guarantee a fair priority, but it is still better than the standard library's lock, as the thread is not blocked while the task waits for the lock.
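For the OP's read-mostly scenario, the shape is something like the following sketch (using std::sync::RwLock and threads so it runs standalone; in async code the same shape works with tokio::sync::RwLock, minus the unwrap()s and with .await on read()/write()):

```rust
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    // Read-mostly shared state: many readers, the occasional writer.
    let config = Arc::new(RwLock::new(String::from("v1")));

    let readers: Vec<_> = (0..4)
        .map(|_| {
            let config = Arc::clone(&config);
            thread::spawn(move || {
                // Any number of readers can hold the lock at once.
                config.read().unwrap().len()
            })
        })
        .collect();

    {
        // The occasional update takes the exclusive write lock.
        let mut cfg = config.write().unwrap();
        *cfg = String::from("v2-updated");
    } // write guard dropped here

    for reader in readers {
        reader.join().unwrap();
    }
    assert_eq!(&*config.read().unwrap(), "v2-updated");
}
```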


In many cases the compiler will catch this, because the standard library's mutex guard is not Send, so if it is held across an .await, you can no longer spawn that future.
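You can see the check in isolation with a small helper. Here require_send is a hypothetical function mirroring the Send bound that spawn() functions place on the futures they are given:

```rust
use std::sync::Mutex;

// Hypothetical helper: accepts only Send values, mirroring the bound
// that spawn() functions place on the futures they receive.
fn require_send<T: Send>(_: &T) {}

static COUNTER: Mutex<u32> = Mutex::new(0);

async fn increments_then_awaits() {
    {
        let mut guard = COUNTER.lock().unwrap();
        *guard += 1;
    } // guard dropped before the yield point, so the future stays Send
    std::future::ready(()).await;
}

fn main() {
    // Compiles because no MutexGuard is live across the .await. If the
    // guard's scope extended past the await, this call would fail to
    // compile, because MutexGuard is not Send.
    require_send(&increments_then_awaits());
}
```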

Unfortunately this doesn't catch uses inside block_on, because block_on doesn't require the future to be Send, so the check is not perfect.

