I’m working on an example web application in iron for a workshop next week. There are plenty of sample APIs around for demo purposes, but I’m trying to break out of that demo mindset and build the app I would if a client were frisbeeing briefcases of cold hard cash at me for it.
So I’m at a point where I have a mechanism for sending a request on a queue from my iron handler to one of many background workers, with a future to complete with the response. Which is great. But out-of-the-box Rust (for good reasons) only gives us OS threads, and while my worker is churning away handling some command, the request thread hyper has spun up isn’t doing anything useful, like handling other requests. It’s just blocked waiting for the future to complete. That starts to make this background worker abstraction kind of pointless.
I can see this not being an issue in a tokio world, where I’d assume other services get CPU time while another is blocked (correct me if I’m wrong). So just because request x is waiting for Postgres doesn’t mean request y has to wait for request x to finish with the handler thread.
So I guess I have three questions:
- Is it the case, once iron provides its pipeline as a futures-based one too, that a single request won’t need exclusive ownership of a thread for its lifetime?
- Is this actually an issue with the current state of the world (iron)? I have n handler threads, so I can handle n concurrent requests (but many more active connections, with rotor doing its own io stuff). So should I just treat that handler as the place for doing my work and not waste time with background workers? Connections could easily use something like r2d2 for pooling, so they can be shared across requests.
- Am I completely misguided about how this works and need to be schooled?