Difficulty understanding Futures/Async

I'm learning to use hyper, which uses futures to allow async I/O. I want to embrace that concept, but I'm having a hard time understanding how to work with it, or what its implications are. I'm not even entirely sure how to express my confusion, so bear with me.

Hyper's Service trait has a call function that handles HTTP requests and returns a future. If one of these calls triggers a computation that's either expensive or high-latency, do I understand correctly that this would block the whole server? In other words, do I need to package time-intensive, nested computations into some iterative logic (implementing the Future trait) that returns Async::NotReady, so that the event loop will automagically revisit these nested futures until they've completed?
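For reference, this is the trait I mean, simplified to the parts I'm asking about (my paraphrase of the futures 0.1 API):

```rust
// Simplified shape of the futures 0.1 API as I understand it.
pub enum Async<T> {
    Ready(T),
    NotReady,
}

pub type Poll<T, E> = Result<Async<T>, E>;

pub trait Future {
    type Item;
    type Error;

    // Ok(Async::Ready(item)) when finished, Ok(Async::NotReady) if the event
    // loop should poll again later, or Err(error) on failure.
    fn poll(&mut self) -> Poll<Self::Item, Self::Error>;
}
```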

I read a lot of documentation on this, so I think a dialog with another human is my last hope.

I'm not a Hyper expert (far from it, actually), but I believe you're correct - your Service impl's call will be called on the event loop, which is shared with other services and general I/O.

If you have an expensive computation to perform inside call, consider using futures_cpupool to offload the computation to a background thread. That computation will be wrapped in a Future. You would then chain a continuation future onto that one, which will deal with the result of the computation and (presumably) send it back out in the HTTP response. The chained future is what you'd return to Hyper from the call function. Hyper will then register that Future with the event loop.
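Very roughly, something like this (just a sketch against hyper 0.11 / futures 0.1; MyService and expensive_work are made-up names, not anything from Hyper):

```rust
extern crate futures;
extern crate futures_cpupool;
extern crate hyper;

use futures::Future;
use futures_cpupool::CpuPool;
use hyper::server::{Request, Response, Service};

struct MyService {
    // Created once (e.g. CpuPool::new_num_cpus()) and shared across requests.
    pool: CpuPool,
}

impl Service for MyService {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    // Boxed so call() can build the future out of whatever chain it needs.
    type Future = Box<Future<Item = Response, Error = hyper::Error>>;

    fn call(&self, _req: Request) -> Self::Future {
        // Run the expensive part on a background thread owned by the pool...
        let work = self.pool.spawn_fn(|| {
            Ok::<_, hyper::Error>(expensive_work())
        });
        // ...then chain a continuation that turns the result into a Response.
        // This whole chain is what gets handed back to Hyper's event loop.
        Box::new(work.map(|body| Response::new().with_body(body)))
    }
}

// Placeholder for the long-running computation.
fn expensive_work() -> String {
    "done".to_string()
}
```

The Box is there because call has to name a single concrete return type, while the real type of the chain is an unnameable combinator type.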


So to get a very simplified picture:

Req #1 -> Service::call() -> future(data)
            -> hyper_responds_to_client()
Req #2 -> Service::call() -> CpuPool::spawn(long_computation()) -> future(not_ready)
            -> hyper_queues_for_later_polling()
Req #3 -> Service::call() -> future(data)
            -> hyper_responds_to_client()
Req #4 -> Service::call() -> future(data)
            -> hyper_responds_to_client()
| CpuPool somehow registers the task's completion
| Hyper then polls the future inside Req #2's call, now deemed ready -> future(data)
            -> hyper_responds_to_client()
Req #5 -> Service::call() -> future(data)
            -> hyper_responds_to_client()
Req #6 -> ...

Is that roughly the right idea? I've done some basic work with threads and locks in C++ and retained a sense of synchronous progression (managing spawning and joining), but this async API does my head in :sweat:, elegant as it may be.

I think that's roughly the right idea, but someone like @seanmonstar will know for sure :slight_smile:.

Note though that when you say Service::call() -> future(data), it doesn't necessarily mean that Hyper responds to the client right away - the future has to complete before that happens. However, you can return an already complete future, which is done via futures::future::ok(...) (as seen in Hyper's examples) - you can similarly return an immediately resolved future that has an error via futures::future::err(...).
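For instance (a sketch against the hyper 0.11-era API; the function names are just for illustration, and Timeout is an arbitrary error variant picked for the example):

```rust
extern crate futures;
extern crate hyper;

use futures::{future, Future};
use hyper::server::Response;

type ResponseFuture = Box<Future<Item = Response, Error = hyper::Error>>;

// The answer is known inside call() itself, so hand back a future that is
// already resolved with a Response.
fn immediate_ok() -> ResponseFuture {
    Box::new(future::ok(Response::new().with_body("hello")))
}

// Or resolve immediately with an error instead.
fn immediate_err() -> ResponseFuture {
    Box::new(future::err(hyper::Error::Timeout))
}
```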

Also note that the Future you return back to Hyper from call may be a chain of Futures internally, and the futures in the chain run as each preceding future resolves (with an ok or err result, depending on how you construct the chain).
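To make the chaining concrete, here is a rough sketch (futures 0.1; lookup_user and render_page are hypothetical stand-ins for async steps, not Hyper APIs):

```rust
extern crate futures;
extern crate hyper;

use futures::{future, Future};
use hyper::StatusCode;
use hyper::server::Response;

// Stand-in for an asynchronous step, e.g. a database call.
fn lookup_user(id: u64) -> Box<Future<Item = String, Error = hyper::Error>> {
    Box::new(future::ok(format!("user {}", id)))
}

// Stand-in for a second asynchronous step, e.g. templating.
fn render_page(user: String) -> Box<Future<Item = String, Error = hyper::Error>> {
    Box::new(future::ok(format!("<html>{}</html>", user)))
}

fn handle(id: u64) -> Box<Future<Item = Response, Error = hyper::Error>> {
    let chained = lookup_user(id)
        // Runs only once lookup_user has resolved successfully.
        .and_then(render_page)
        // Success path: wrap the rendered page in a Response.
        .map(|html| Response::new().with_body(html))
        // Error path: resolve with a 500 instead of propagating the error.
        .or_else(|_err| future::ok::<Response, hyper::Error>(
            Response::new().with_status(StatusCode::InternalServerError),
        ));
    Box::new(chained)
}
```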


You are correct that the futures are all polled on the same thread, so if the poll function of some future is going to take a long time to return, it will affect all futures (so, responses) on that thread. If you have something that needs a lot of CPU time, you might want to start up a CpuPool to enqueue work on.

The Future you return from your Service does not always have to be the same type. Your service may find that some requests can be answered immediately, while others require more work from the server. You can either return a custom Future that is an enum over the various sub-futures needed to produce a Response, or you can use trait objects, returning a Box<Future<Item=Response, Error=hyper::Error>>.
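A sketch of the enum approach (futures 0.1 plus futures_cpupool; MyResponseFuture is just an illustrative name):

```rust
extern crate futures;
extern crate futures_cpupool;
extern crate hyper;

use futures::{Future, Poll};
use futures::future::FutureResult;
use futures_cpupool::CpuFuture;
use hyper::server::Response;

// One concrete Future type covering both the "answer right away" path and the
// "needs background work" path, as an alternative to boxing a trait object.
enum MyResponseFuture {
    Immediate(FutureResult<Response, hyper::Error>),
    Heavy(CpuFuture<Response, hyper::Error>),
}

impl Future for MyResponseFuture {
    type Item = Response;
    type Error = hyper::Error;

    fn poll(&mut self) -> Poll<Response, hyper::Error> {
        // Delegate to whichever variant this particular request produced.
        match *self {
            MyResponseFuture::Immediate(ref mut f) => f.poll(),
            MyResponseFuture::Heavy(ref mut f) => f.poll(),
        }
    }
}
```

Your Service would then declare type Future = MyResponseFuture and have call construct whichever variant fits the request.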


Thanks, guys. Imagining the futures continuing to run outside Service::call, and not necessarily being resolved at that point, helps me think about this. Slowly making progress.