Given that tokio::sync::RwLock is built on a FIFO and writers can't be starved by readers, what's the advantage of using a manager task listening to an mpsc::Receiver to protect/manage data shared between threads instead of using a tokio RwLock? The manager pattern is proposed by the tokio tutorial: Channels | Tokio - An asynchronous Rust runtime
Since by design the manager only fields one request at a time, you'll end up with worse queuing on the manager for read-heavy workloads than on the RwLock. For both cases, writers block readers by design. In the manager design, readers will also block other readers. The RwLock will allow concurrent readers.
So what kind of implementations call for this kind of message passing rather than using an RwLock? Are RwLocks just slow?
The disadvantages of the manager pattern I see are:

- No parallelization of reads
- Single-threaded performance for read-heavy workloads
- Bunch of extra code
The two potential advantages I see are:

- Pipelining, but I would posit that if pipelining gives you an advantage over an RwLock, you're locking too much data at once. That is to say, you can get the same pipelining benefit using an RwLock with properly structured data.
- "Open loop" message passing, where you don't need a response and therefore aren't blocked by writes getting stuck in a queue.
Well, in this example, Client::get still requires &mut, so there's no read parallelism available.
A major advantage of using a channel is that you can't hold the lock longer than you need to. Using locks means you can get deadlocks from awaiting some other task which also wants to grab the lock; this can't happen if the subsystem only communicates via channel.
If it's important to be able to serve multiple requests at a time, you can use an mpmc channel feeding multiple servicing tasks, sized by how many requests can be in flight at a time. You might also be able to use mpsc and have the one listening task spawn tasks to service requests; this might even be beneficial in the case where you sometimes need to get unique access to serve a request.
Generally the actor pattern is useful for managing shared access to IO resources. Here, you often need some sort of open loop processing, or you might also want operations to be triggered by e.g. timers for things like a ping. Similarly, the background task managing it can also handle reading from the IO resource.
Thanks for the responses. This seems like the biggest structural advantage:
You can also prevent deadlocks by copying or cloning needed data out of the RwLock instead of holding a reference, right? To return data through a channel, it has to be Send, which means it's either a reference to something that's Sync, or you have to copy/clone the underlying data. Those are the same ways you'd pull data out of an RwLock to prevent deadlocks.
The more I think about this, the more it seems like it's "just" a mental offload for the developer, so you don't have to think about things like deadlocks. That's not a trivial advantage, but the price seems high in terms of additional code, indirection, and potentially performance.
Using a handoff channel should have the same performance characteristics as a Mutex. Tokio doesn't provide an actual handoff channel, but a capacity-1 channel is almost the same. The only cost over a RwLock should be the one extra copy involved (in each direction). In an async application, the cost of doing the async stuff very much dominates that copy.