Try_lock on futures::lock::Mutex outside of async?

It never gets unlocked; you have to drop the Reader.
You can add a method for that. With this particular Mutex impl you would not have to make a third state, since MutexLockFuture doesn't bother other contenders until first polled, so it's okay to construct one.
But the enum has the lifetime from SmolStackWithDevice, so I didn't intend it to be long-lived.
Funny that SmolSocket can actually be dropped; I hadn't noticed I did that.

`l` is a `&mut futures::lock::MutexGuard<'_, SmolStackWithDevice<'_>>`
`*l` is a `futures::lock::MutexGuard<'_, SmolStackWithDevice<'_>>` (we couldn't actually move this out and assign it to a local, but that's what it is at the type level)
`**l` is a `SmolStackWithDevice<'_>` because `MutexGuard` implements `Deref` (same caveat about not being able to move it out)
`&mut **l` is a `&mut SmolStackWithDevice<'_>` because `MutexGuard` implements `DerefMut`

The safe `Pin::new(&mut **l)` can make a `Pin` out of that mutable reference on the stack because `SmolStackWithDevice` is `Unpin` (it doesn't care about being moved in between polls).
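
For reference, here's a minimal, self-contained sketch of that deref chain with a stand-in `Stack` type in place of `SmolStackWithDevice` (all names here are placeholders):

```rust
use std::pin::Pin;
use futures::lock::{Mutex, MutexGuard};

// Stand-in for SmolStackWithDevice; it has no self-references, so it is Unpin
// and Pin::new below is safe.
struct Stack {
    bytes_read: usize,
}

async fn demo(mutex: &Mutex<Stack>) {
    let mut guard: MutexGuard<'_, Stack> = mutex.lock().await;
    // l: &mut MutexGuard<'_, Stack>
    let l = &mut guard;
    // **l is the Stack itself via MutexGuard's Deref/DerefMut impls,
    // so &mut **l is a plain &mut Stack.
    let pinned: Pin<&mut Stack> = Pin::new(&mut **l);
    pinned.get_mut().bytes_read += 1;
}
```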

I've been thinking about your solution. My idea was to lock the socket every time a poll_read was called. In your case, it calls poll_read on a locked socket (I guess).

It looks like, after the first poll_read, the socket would stay in a locked state, meaning it would remain locked until polling completes. For a large file download, poll_read would be called for minutes on the same `dyn AsyncRead + AsyncWrite`, so the IP stack would be locked for minutes.

So if I pass Reader to hyper (which expects a socket that implements AsyncRead and AsyncWrite), as seen here: rust_hyper_custom_transporter/custom_req.rs at master · lzunsec/rust_hyper_custom_transporter · GitHub, then hyper would hold this socket and thus block the IP stack for the entire operation, which could take minutes. That is undesirable.

I guess it's possible to switch from Reader::Locked back to Reader::Locking on every poll_read, but would it be feasible? What do you think?
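
To make the question concrete, here's a rough sketch of that idea, with a stand-in `Stack` type instead of the real `SmolStackWithDevice`, the enum collapsed into a struct that simply re-arms the lock future after every poll, and the "queue is empty, register a waker" case omitted:

```rust
use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use futures::io::AsyncRead;
use futures::lock::{Mutex, MutexLockFuture};
use futures::FutureExt;

// Stand-in for SmolStackWithDevice.
struct Stack;

impl Stack {
    fn read_socket(&mut self, _buf: &mut [u8]) -> usize {
        // A real implementation would drain the per-socket queue here.
        0
    }
}

// No long-lived Locked state: the guard only exists inside one poll_read call,
// and a fresh MutexLockFuture is armed for the next one.
struct Reader<'a> {
    stack: &'a Mutex<Stack>,
    lock: MutexLockFuture<'a, Stack>,
}

impl<'a> Reader<'a> {
    fn new(stack: &'a Mutex<Stack>) -> Self {
        // Constructing the lock future is cheap; it doesn't contend with other
        // users of the stack until it is first polled.
        Reader { stack, lock: stack.lock() }
    }
}

impl<'a> AsyncRead for Reader<'a> {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>> {
        let this = self.get_mut();
        // MutexLockFuture is Unpin, so poll_unpin works here.
        let mut guard = futures::ready!(this.lock.poll_unpin(cx));
        let n = guard.read_socket(buf);
        // Release the stack right away and arm a new lock future, so the stack
        // is free in between polls.
        drop(guard);
        this.lock = this.stack.lock();
        Poll::Ready(Ok(n))
    }
}
```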

Is it meaningful to unlock the mutex? Don't you risk interleaving messages from two different parts of the system? If it is meaningful, why not just use a std mutex?


hyper expects the socket to represent a TCP tunnel/connection from a particular server/client, which means that all reads over the lifetime of the connection (potentially multiple HTTP 1.1 request-responses with Keep-Alive) will be consumed by hyper, without ever yielding the socket back.
HTTP 2 does its own multiplexing of chunks within the connection, but still consumes the entire connection.
HTTP 3 runs over "connectionless" UDP.

But in all cases, you have to filter/route packets at SocketAddr (IP + port) level before giving them to hyper.
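
As a minimal illustration of that routing step (invented names; nothing here is smoltcp's or hyper's actual API):

```rust
use std::collections::{HashMap, VecDeque};
use std::net::SocketAddr;

// One receive queue per (IP, port) pair; hyper only ever sees the socket
// built on top of a single queue.
struct Demux {
    queues: HashMap<SocketAddr, VecDeque<Vec<u8>>>,
}

impl Demux {
    fn route(&mut self, peer: SocketAddr, payload: Vec<u8>) {
        // Unknown peers just get a new queue in this sketch; a real stack
        // would decide whether to accept, reject, or drop the packet.
        self.queues.entry(peer).or_default().push_back(payload);
    }
}
```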

The problem is that I can't use a std::sync::Mutex inside AsyncRead/AsyncWrite, as it would block tokio's poller. If it were possible, my problems would be over.

I can unlock the mutex because the IP stack will simply accumulate packets for the socket until the socket asks it for data, and the stack has a queue for each socket.

Indeed, hyper would consume the socket, but it would poll it constantly. I'd like to block the IP stack only while a poll is happening and leave it free otherwise, so other sockets can also be polled. Since my socket and the stack share the FIFO of packets, whenever an AsyncRead/AsyncWrite poll happens on the socket, the stack would be blocked.

This isn't true. As long as you release it again before you return from your poll method, it is allowed. If you check out the documentation for Tokio's Mutex, you will see that it agrees with me, and the shared state chapter in the official Tokio tutorial uses an std Mutex. As an example that does this, check out the source code for tokio::sync::Notify.

Non-async mutexes are OK in async code as long as they are only locked for a short time, because if they are only locked for a short time, then they only block the thread for a short time, which is ok.

The main possible problem with doing this is that if two threads try using your AsyncRead/AsyncWrite at the same time, you can risk interleaved messages from the two, and you also risk wakeups going to only one of them, because poll_read will only send wakeups to the most recent caller.
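
A minimal sketch of that pattern, assuming the per-socket receive queue sits behind a std Mutex (the names RxShared and QueueReader are invented for the example):

```rust
use std::collections::VecDeque;
use std::io;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};
use futures::io::AsyncRead;

// Shared per-socket state: the queue of received bytes plus the waker of the
// most recent reader.
struct RxShared {
    queue: VecDeque<u8>,
    waker: Option<Waker>,
}

struct QueueReader {
    shared: Arc<Mutex<RxShared>>,
}

impl AsyncRead for QueueReader {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<io::Result<usize>> {
        // The std lock is taken and released entirely within this call, so it
        // only blocks the thread for the duration of one short poll.
        let mut shared = self.shared.lock().unwrap();
        if shared.queue.is_empty() {
            // Only the most recent caller's waker is kept, which is exactly
            // the wakeup caveat mentioned above.
            shared.waker = Some(cx.waker().clone());
            return Poll::Pending;
        }
        let n = buf.len().min(shared.queue.len());
        for (dst, src) in buf.iter_mut().zip(shared.queue.drain(..n)) {
            *dst = src;
        }
        Poll::Ready(Ok(n))
    }
}
```

The stack side would push bytes into `queue` and wake the stored waker under the same short-lived lock.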

Sounds like you can just put each individual queue in an std or parking_lot mutex.
You might later be able to replace them with some kind of lockfree ringbuffer if you only have one producer and one consumer.

Just remember: the most straightforward way to avoid deadlocks with a sync Mutex is to never hold a lock across an await.
This way it becomes impossible for a task which would be scheduled after you on the same thread to be holding the lock (because only currently executing tasks hold locks, not suspended ones).
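
A tiny sketch of that rule with hypothetical names: take the item out under a short-lived lock, drop the guard, and only then await.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use tokio::sync::mpsc::Sender;

// Forward one queued packet downstream without ever holding the sync lock
// across an await point.
async fn forward_one(queue: Arc<Mutex<VecDeque<Vec<u8>>>>, tx: Sender<Vec<u8>>) {
    // The guard only lives inside this block, so it is released before we await.
    let packet = {
        let mut q = queue.lock().unwrap();
        q.pop_front()
    };
    if let Some(packet) = packet {
        // Suspending here is fine: no lock is held while this task is parked.
        let _ = tx.send(packet).await;
    }
}
```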
