When using a tokio/futures backed webserver, how to manage threads?


Imagine that I have a webserver that does the following:

  1. Receives a GET Request with an ID.

  2. Based on that ID, the server will query two data sources at the same time:
    a. An external Webserver via REST
    b. A Database

  3. Once those calls are both completed, it will combine the results and do some moderate/light processing (nothing too intensive).

  4. It will return a result consisting of the processed data.

When programming a NodeJs WebServer, I only have access to a single thread of execution, so it is very easy for me to reason that all of the above will occur in one thread. In exchange for this simplicity, I miss out on multicore processing.

However, I am curious how the above workflow would play out in Rust and Tokio.

If I am not mistaken, a Tokio server (possibly Hyper) will use async IO to receive all requests on a single thread. If I were performing the workflow I mentioned above, would it make sense (or is it even possible) to somehow share the same Async IO thread when contacting the webserver and database? Once those calls are made, would it make sense to perform the moderate/light processing on that same thread?

If the above is true, how would I be able to leverage more than one CPU core?

If not, would it make sense to run a separate async IO thread for each possible outgoing call? Even if the last processing step is not heavy, would it make sense to send it to a threadpool?

I know that this is an abstract question, but I would really appreciate it if someone could clarify how to leverage multiprocessing in addition to Tokio.

Thank you very much.


One of the great parts about futures is that this is actually quite easy to model: it’s simply a Future::join away. Basically, you’d model each of the two operations as a future, call join to wait for both futures to complete, and then likely use map or and_then to do your processing on the returned data.

Note that you can get concurrency here, in the sense of executing both requests at the same time, without requiring parallelism (literally happening at the same time vs. just “cooperatively at the same time”). In that sense, whether or not you use multiple cores is up to you. This is likely a task that’s I/O bound (waiting on the upstream server or database), so you wouldn’t need a CPU thread pool, but if one were needed then you could certainly use the futures-cpupool crate, which works the same as all other kinds of futures (you’d just call join).


Thanks for the response. If we were to do everything in a single thread, then on a multicore setup, would it make sense to run a Rust tokio instance per core?


Yeah, that’s currently what some other benchmarks are doing (using SO_REUSEPORT to spin up a listener per core), and if that works for you then I’d recommend that.