Gotham web framework and readers-writer lock

I am trying to design a simple web service using Gotham for self-education. The service is just a wrapper providing read-write access to some InternalState struct. I am using the stateful handler approach: my StatefulHandler struct implements both gotham::handler::Handler and gotham::handler::NewHandler.
For now I have only one instance of InternalState, so I need to put it behind a readers-writer lock.

  • The naive implementation was to have an internal_state: Arc<RwLock<InternalState>> field in StatefulHandler (see the sketch after this list), but both the read() and write() methods of RwLock are blocking.

  • I came across the qutex crate, which seems to be exactly what I need. However, it doesn't play nicely with the RefUnwindSafe bound required by NewHandler:
    the type std::cell::UnsafeCell<std::option::Option<qutex::QrwRequest>> may contain interior mutability and a reference may not be safely transferrable across a catch_unwind boundary
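
For reference, here is roughly what the naive layout from the first bullet looks like, sketched outside of Gotham's handler traits and with InternalState reduced to a hypothetical single counter field. The point is only that read() and write() park the calling thread while the lock is contended, which is exactly what an async executor's worker threads should avoid:

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for the real InternalState.
struct InternalState {
    counter: u64,
}

#[derive(Clone)]
struct StatefulHandler {
    internal_state: Arc<RwLock<InternalState>>,
}

impl StatefulHandler {
    fn read_counter(&self) -> u64 {
        // read() blocks the calling thread until the lock is free,
        // stalling the executor's worker thread in the meantime.
        self.internal_state.read().unwrap().counter
    }

    fn increment(&self) {
        // write() blocks too, and excludes all readers while it is held.
        self.internal_state.write().unwrap().counter += 1;
    }
}

fn main() {
    let handler = StatefulHandler {
        internal_state: Arc::new(RwLock::new(InternalState { counter: 0 })),
    };
    handler.increment();
    assert_eq!(handler.read_counter(), 1);
}
```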

Does that mean I must manually implement an async wrapper above RwLock? If so, how can I do that in a non-blocking and efficient manner?

Thanks!

Why do you want to avoid a Mutex or RwLock? It may block threads if it’s highly contended, yes, but that takes a massive number of concurrent requests, long lock hold times, and a server that isn’t doing much else outside the lock (which might be true in your simple example but not necessarily in a “real” application). Or is this just a “how would I do this without locks” educational question?

If you want to go the async route, one option might be to create a worker thread dedicated to owning the InternalState struct. Connect to this thread with futures channels and send it read/write messages from the request handlers. The worker thread can also hold a separate registration channel: a handler sends its own sending half (for replies) over it, and the worker answers with the sender the handler should then use for its messages; that is how handlers connect to the thread. This is more involved than just sticking with a mutex, so make sure it’s worth it; a simplified sketch follows.
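
Here is a rough sketch of that idea, assuming the futures 0.1 channel API (the generation Gotham was built on at the time). It simplifies the registration step by just cloning one bounded mpsc sender for every handler, and replies come back over per-request oneshot channels. The Command enum, InternalState, and its counter field are all illustrative names, not anything from Gotham:

```rust
// Cargo.toml (assumed): futures = "0.1"
extern crate futures;

use std::thread;

use futures::{Future, Sink, Stream};
use futures::sync::{mpsc, oneshot};

// Hypothetical stand-in for the real state.
struct InternalState {
    counter: u64,
}

// Messages handlers send to the worker; each carries its reply channel.
enum Command {
    Read(oneshot::Sender<u64>),
    Increment(oneshot::Sender<()>),
}

fn spawn_state_worker() -> mpsc::Sender<Command> {
    // Bounded channel: the buffer size is what gives you backpressure.
    let (tx, rx) = mpsc::channel::<Command>(64);

    thread::spawn(move || {
        let mut state = InternalState { counter: 0 };
        // Stream::wait turns the receiver into a blocking iterator; that is
        // fine here because this thread exists solely to own the state.
        for cmd in rx.wait() {
            match cmd {
                Ok(Command::Read(reply)) => {
                    let _ = reply.send(state.counter);
                }
                Ok(Command::Increment(reply)) => {
                    state.counter += 1;
                    let _ = reply.send(());
                }
                Err(()) => break,
            }
        }
    });

    tx
}

fn main() {
    let tx = spawn_state_worker();

    // A handler would clone tx, send a Command, and chain on the oneshot
    // receiver; wait() is used here only to keep the example self-contained.
    let (reply_tx, reply_rx) = oneshot::channel();
    tx.clone()
        .send(Command::Increment(reply_tx))
        .wait()
        .expect("worker is gone");
    reply_rx.wait().expect("worker dropped the reply");

    let (reply_tx, reply_rx) = oneshot::channel();
    tx.send(Command::Read(reply_tx)).wait().expect("worker is gone");
    println!("counter = {}", reply_rx.wait().unwrap());
}
```

Because the mpsc channel is bounded, a worker that falls behind eventually makes the handlers' sends wait for a free slot, which is roughly the backpressure point raised below.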

Also, even if you had a “queued” mutex, at some point you’d want to apply backpressure so new requests don’t keep piling up while the mutex isn’t being freed in a timely manner. If the server does long-running work under the lock and sees high concurrency, you’ll hit that backpressure at some point and block anyway.

Finally, depending on what kind of state InternalState holds, you may be able to update it with atomic types rather than a mutex.
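
For example, if InternalState were nothing more than a counter (a made-up assumption for illustration), an atomic replaces the lock entirely and never blocks:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

#[derive(Clone)]
struct SharedCounter {
    hits: Arc<AtomicUsize>,
}

impl SharedCounter {
    fn record_hit(&self) -> usize {
        // fetch_add is a single atomic read-modify-write; it never blocks.
        self.hits.fetch_add(1, Ordering::Relaxed) + 1
    }

    fn current(&self) -> usize {
        self.hits.load(Ordering::Relaxed)
    }
}

fn main() {
    let counter = SharedCounter { hits: Arc::new(AtomicUsize::new(0)) };

    let worker = {
        let counter = counter.clone();
        thread::spawn(move || { counter.record_hit(); })
    };
    counter.record_hit();
    worker.join().unwrap();

    assert_eq!(counter.current(), 2);
}
```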

Thank you, your comment is very insightful, as always.

Or is this just a “how would I do this without locks” educational question?

It is even less than that, something along the lines of "how do I do it with a coarse global lock". The only non-trivial point was staying async and trying not to poison the thread pool.

A full-blown production application must ensure fair request ordering, deal with lock poisoning, and so on, but first of all it must use a proper database backend. Unless the DB backend provides an async interface, the threads will be blocked anyway, and a good async DB backend will probably use some variation of the technique you described.

All in all, after looking at the Rust landscape, I feel that other parts of the ecosystem have yet to catch up with Gotham when it comes to async operations. I think I am going to set this test project aside for now and revisit it some time later.

There's a lot of flux with the async story in Rust right now, with some interesting (and needed!) changes coming this year. I suspect once they land and the design space of futures, tokio, async/await, etc settles down, there will be more traction on the async side of things.

That said, you can always use the classic approach of hosting blocking I/O operations on a dedicated thread pool. This is pretty much a must for filesystem I/O, as there's no good cross-platform async file I/O. So you could perform synchronous DB operations on that pool and communicate the results as futures; that should integrate with Gotham's async story. A rough sketch follows.
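
Here is one way that might look using the futures-cpupool crate, with a made-up blocking_db_query function standing in for the synchronous DB call; in a real handler you would chain the returned future into the response future instead of calling wait():

```rust
// Cargo.toml (assumed): futures = "0.1", futures-cpupool = "0.1"
extern crate futures;
extern crate futures_cpupool;

use std::thread;
use std::time::Duration;

use futures::Future;
use futures_cpupool::CpuPool;

// Made-up stand-in for a synchronous database call.
fn blocking_db_query(id: u32) -> Result<String, String> {
    thread::sleep(Duration::from_millis(50));
    Ok(format!("row for id {}", id))
}

fn main() {
    // A small pool dedicated to blocking work, kept separate from the async
    // executor's threads so they are never stalled by it.
    let pool = CpuPool::new(4);

    // spawn_fn runs the closure on the pool and hands back a future that can
    // be chained (map/and_then) into a handler's response future.
    let fut = pool
        .spawn_fn(|| blocking_db_query(42))
        .map(|row| println!("got: {}", row));

    // A handler would return the chained future; wait() is only used here to
    // keep the example self-contained.
    fut.wait().unwrap();
}
```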