Choosing rt-multi-thread versus rt option in tokio

Some of the recent discussion here caused me to think about which version of the tokio runtime is likely better for a typical use of my binary crate, a webserver, something I never really gave much consideration to before.

I suppose to some extent the answer is "it really doesn't matter, either is going to work absolutely fine", and that is what I see in practice. But still... what considerations should go into the choice?

I suspect the single-threaded option may be the better choice, as it is unlikely multi-threading is required to handle the async load (which is expected to be small compared to the sync load), and it is better to leave any extra processors to handle the sync processing in peace. Is this reasoning correct?

[ I suppose I could try and do some testing to see which works better, but maybe this isn't really needed. Has anyone done some measurements? ]

Yep, that would be my response!

The multi-threaded runtime is a good default. In the semi-rare case that you need !Send tasks, or tasks that borrow across await points, those parts of your code can be confined to a single thread with LocalSet, FuturesUnordered, and friends.

I suppose I would summarize this as: it is easy to constrain the multi-threaded runtime to run tasks within a single thread. But comparatively more work to expand the single-threaded runtime to give you work-stealing parallelism when you need it.

Or if you seriously just want a single-threaded runtime for any reason (maybe it's an embedded environment, or you are just dead-set on using !Send types like Rc and RefCell everywhere) then using a single-threaded runtime is also a good choice. "It depends. 🙂"


Well, without thinking about it at all, I originally chose rt-multi-thread, maybe because it seemed cool...! And it has always worked fine. Then today, I was wondering if single-threaded actually made more sense, as (without measuring it) my guess is the CPU required to process a typical request is maybe 10% async load and 90% sync, and it could easily be more lop-sided than that (for a database query that actually has some serious work to do). I mean parsing HTTP headers, doing a tiny bit of processing to prevent denial of service attacks, writing an HTTP response — it doesn't involve any major computation at all.

You could measure the difference, and see if the overhead of being multithreaded is significant: Builder::worker_threads (or the environment variable TOKIO_WORKER_THREADS) allows you to configure a multithreaded runtime to only use a single thread, and you can compare benchmark results between that and a current thread runtime. That way, you can isolate out the difference between the two runtimes, without the single thread runtime being at a disadvantage because it only uses one thread.

A multi-threaded runtime can often achieve lower tail latencies in web server environments, because work stealing avoids tasks being blocked by the occasional task that hogs the CPU.

A current-thread runtime will not spawn any threads, so it will often consume fewer resources. (Especially RAM.)

The current-thread runtime will almost certainly win compared to a multi-thread runtime with a single thread.


When I've benchmarked my projects, this turns out not to be true: the two runtimes are so close that I cannot measure a difference between the multi-threaded runtime with one thread and the current-thread runtime, and the multi-threaded runtime wins with two or more threads.

Microbenchmarks do, of course, show that the current thread runtime is lower overhead, but that's lost in the noise of my real work.


Hmm. I guess the main difference is what happens when the list of ready-to-run tasks grows beyond 256. At that point, the multi-threaded runtime can no longer fit all of the tasks in its fixed-size local run queue and has to spill them to the slower global queue.