Mutex<Receiver> and recv()


I am reading the book, I am currently stuck at chapter 20.2 Turning Our Single-Threaded Server into a Multithreaded Server. There is the following line of code in listing 20-20 that I don't fully understand:

let job = receiver.lock().unwrap().recv().unwrap();

In particular:

  1. How is the lock being released when the call to recv() is made?
  2. If a channel is thread-safe, why is it wrapped into a Mutex and not directly into Arc?

Mutex::lock returns a "guard" value. This guard is what gives you access to the contents of the mutex. The guard also keeps the mutex locked as long as it still exists; once the guard is dropped, the mutex is automatically unlocked.

Although it's not exactly the same, that single line is roughly equivalent to:

let job = {
    let guard = receiver.lock().unwrap(); // lock the mutex
    let result = guard.recv().unwrap();
    drop(guard); // dropping the guard unlocks the mutex
    result
};

The point of this arrangement is that you cannot access the value protected by the mutex without locking it to obtain a guard; as long as you hold the guard, no one else can access it; and as soon as you drop the guard, the mutex is unlocked.
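As a runnable sketch (the worker setup here is my own, not the book's exact code), the statement-scoped guard is what lets several threads share one receiver: each worker holds the lock only for the duration of the one statement that calls recv.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    let rx = Arc::new(Mutex::new(rx));

    // Spawn a few workers that each pull one message. In
    // `rx.lock().unwrap().recv().unwrap()` the temporary guard is dropped
    // at the end of the statement, unlocking the mutex for the next worker.
    let mut handles = Vec::new();
    for _ in 0..3 {
        let rx = Arc::clone(&rx);
        handles.push(thread::spawn(move || {
            let msg: i32 = rx.lock().unwrap().recv().unwrap();
            msg
        }));
    }

    for i in 0..3 {
        tx.send(i).unwrap();
    }

    let mut received: Vec<i32> =
        handles.into_iter().map(|h| h.join().unwrap()).collect();
    received.sort();
    assert_eq!(received, vec![0, 1, 2]);
    println!("all workers received a message");
}
```

If a worker instead bound the guard to a named variable for the whole loop body, the mutex would stay locked while the job runs, serializing the workers; the book makes the same point about `while let`.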

I have no idea. The signature of recv is:

fn recv(&self) -> Result<T, RecvError>

The &self means that you do not need exclusive access to a Receiver to call recv. Also, I went back and checked, and this has been the case since Rust 1.0.
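To see that `&self` really is enough within a single thread, a minimal sketch (my own example, not from the book):

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(42).unwrap();

    // recv takes &self, so a shared reference suffices; no &mut needed.
    let rx_ref: &mpsc::Receiver<i32> = &rx;
    assert_eq!(rx_ref.recv().unwrap(), 42);
    println!("received 42 through &Receiver");
}
```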

I can think of two possibilities:

  1. There is some unstated problem with accessing the...

wait a second. "unstated"? I don't think I actually checked... better make sure.

*goes back to the docs*

Oh hell.

Nevermind, found it. From the Receiver documentation, right at the top:

The receiving half of Rust's channel (or sync_channel) type. This half can only be owned by one thread.

(Emphasis mine.)

And, if we scroll down to check the implementations, we find:

impl<T: Send> Send for Receiver<T>
impl<T> !Sync for Receiver<T>

The issue here is that, whatever the signature of recv says, the type is Send but !Sync: it can be sent to another thread, but it cannot be safely accessed from multiple threads without synchronisation.

This synchronisation is precisely what Mutex provides.
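A small compile-time probe makes the asymmetry visible (assert_send and assert_sync are ad-hoc helpers I made up for this sketch, not std items):

```rust
use std::sync::mpsc;

// Ad-hoc helper: this call compiles only if T is Send.
fn assert_send<T: Send>() {}
// The Sync counterpart is left commented out, because calling it with
// Receiver would be a compile error, which is exactly the point:
// fn assert_sync<T: Sync>() {}

fn main() {
    assert_send::<mpsc::Receiver<i32>>(); // OK: Receiver<T: Send> is Send
    // assert_sync::<mpsc::Receiver<i32>>(); // error: Receiver<T> is !Sync
    println!("Receiver<i32> is Send but not Sync");
}
```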

And that, dear reader, is why you should always check the docs. And double-check anything you're saying to make sure you're not about to make an ass out of yourself on the internet. :slight_smile:

(To be clear: I'm talking about myself. I really did almost post misinformation!)


FYI, looks like you’ve just re-discovered the existence of the “sc” in “mpsc”. It was there all along, right in the name of the module :wink:

There’s also more explanation/elaboration in that top-level module’s documentation. Though I know from experience that documentation on the containing module is easy to miss. (E.g. who here has ever read the module-docs for vec? Or for collections? The ones for pin are great as well, though those at least are linked from the docs of Pin<T>, too.)

Multi-producer, single-consumer FIFO queue communication primitives.
A Sender or SyncSender is used to send data to a Receiver. Both senders are clone-able (multi-producer) such that many threads can send simultaneously to one receiver (single-consumer).

(emphasis mine)

Funnily enough, mpsc does use an implementation nowadays that can support multiple receivers. As far as I’m aware, it’s essentially a clone of crossbeam::channel at the moment, except that crossbeam does offer the multiple-receiver functionality.

The only advantage of single-receiver is that one cannot accidentally assume that messages would somehow be broadcast to all receivers. On that note, using a mutex to call recv should behave quite similarly to the multiple-receiver situation anyway: one thread waits on the receiver, the others on the mutex; with an mpmc channel there wouldn’t be any less waiting, either. Perhaps somewhat better efficiency, though (if that even matters); I don’t know :innocent:


Depends on the queue implementation. Some let multiple threads consume items concurrently.

flume and async-channel implement MPMC channels, and the flume authors have benchmarks that show it outperforming crossbeam in certain contexts.


In my (partial) defense, I was aware of that. My assumption was that "single consumer" just meant that a message could only be delivered to one endpoint (i.e. the actual value that represents the endpoint). The &self made me assume it was doing some form of locking internally, and thus, the use of Mutex didn't make sense. I didn't twig that multiple threads accessing the same endpoint were each considered separate consumers from the channel's perspective.

Still wrong, but only because I was drawing a conceptual line in the wrong place.

Now, the time I put an absolutely hideous race condition involving RwLocks into a codebase... that was plain old stupidity. :slight_smile:

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.