Rust threading guidelines

I'm looking for a writeup that provides guidelines for using Rust threads. I'd like to find answers to questions like these that I imagine most people new to Rust would have once they arrive at the need to use threads.

  1. I see that I can use threads in Rust without using async_std or tokio. What are the needs of a program that would cause me to want to use those instead of just using what is built into Rust? Is it mostly the desire to use green threads (M:N) instead of native threads (1:1)?

  2. Is it a problem if I call std::thread::spawn many more times than I have cores? It doesn't seem to be an issue. I tried it with 200 calls and all of them ran fine.

  3. The only runtimes I have encountered so far are async_std and tokio. Perhaps there are other lesser known options. Is it correct to say that currently for all practical purposes those are the only options?

  4. I get the impression that tokio has more features than async_std. What are some guidelines for choosing between them or features that are present in one and not in the other?

Have you seen a writeup that addresses these questions?

The usual pithy answer is that threads are for doing things in parallel, whereas async/await is for waiting in parallel.

The resource usage of threads means that dozens of them are fine, but tens of thousands of them add up unacceptably.

2 Likes

There's also rayon and crossbeam, which are abstractions over traditional threads rather than the green threads you get with async. They're designed for a very different workload so they aren't really competing with async_std or tokio, but they're important parts of the Rust threading ecosystem.
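One piece of crossbeam's design has since landed in the standard library: scoped threads (`std::thread::scope`, stabilized in Rust 1.63, was directly inspired by crossbeam's scoped API). A minimal sketch of the idea, using only std:

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4];
    let mut sums = vec![0; 2];

    // Scoped threads may borrow local data without 'static bounds,
    // because the scope guarantees they are joined before it returns.
    thread::scope(|s| {
        let (left, right) = data.split_at(2);
        let (s0, s1) = sums.split_at_mut(1);
        s.spawn(|| s0[0] = left.iter().sum());
        s.spawn(|| s1[0] = right.iter().sum());
    });

    // Both threads have finished here; the partial sums are ready.
    assert_eq!(sums.iter().sum::<i32>(), 10);
}
```

rayon builds a similar but higher-level model on top of a work-stealing thread pool (e.g. `par_iter()` over a collection), which is why it is the usual recommendation for CPU-bound data parallelism.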

6 Likes

Question 1. Using async Rust can provide better performance for large numbers of tasks, and avoids issues with upper limits on thread counts. Generally, async/await is a good fit for programs that spend most of their time waiting for IO, and not for programs that spend most of their time computing (for those you would use rayon). Another advantage is that each new thread reserves a few megabytes of memory for its stack, whereas an async task usually takes much less.
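The per-thread stack reservation mentioned above is visible in std's API: spawned threads get a fixed stack (2 MiB by default, overridable via `RUST_MIN_STACK`), and `thread::Builder` lets you shrink it. A small sketch:

```rust
use std::thread;

fn main() {
    // Each OS thread reserves its stack up front. An async task, by
    // contrast, is just a state-machine value, typically far smaller.
    let handle = thread::Builder::new()
        .stack_size(32 * 1024) // a deliberately small 32 KiB stack
        .name("small-stack".into())
        .spawn(|| {
            let sum: u64 = (1..=100).sum();
            sum
        })
        .expect("failed to spawn thread");

    assert_eq!(handle.join().unwrap(), 5050);
}
```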

Question 2. If you are doing IO, it's fine. If you are doing computations, it's more performant to match the thread count to the number of cores. The practical upper limit on threads lies somewhere between 1000 and 10000.
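For the "match the number of cores" case, std can tell you how many threads the system supports for parallel work. A minimal sketch:

```rust
use std::thread;

fn main() {
    // Number of threads the OS reports as usable in parallel
    // (falls back to 1 if the query fails).
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    println!("available parallelism: {cores}");

    // For CPU-bound work, spawn roughly one worker per core.
    let handles: Vec<_> = (0..cores).map(|i| thread::spawn(move || i * i)).collect();
    let results: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(results.len(), cores);
}
```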

Question 3. There are other runtimes. The ones usually mentioned besides those two are smol (which I think may be abandoned?) and bastion. In any case, Tokio is by far the most used runtime, and async-std is by far the second most used runtime.

(Sometimes people get confused by the existence of actix_rt, but it is a wrapper around Tokio.)

Question 4. My advice is to just always go for Tokio, but please be aware that I am one of the maintainers of Tokio.

2 Likes

Threads are part of the standard library, they are just not a language feature. Thus, they are just as zero-dependency as async/await. So there's no "builtin vs non-builtin" trade-off here, practically speaking, only technically.

In general, threads are useful in the following situations:

  • you want something quick and dirty and you don't want to dig deep into async. Threads are easier to use in my experience, they do what they promise (they launch actual OS threads and do execute in parallel), but they are somewhat dumb (as in "easy to use suboptimally if you don't know what you are doing").
  • you need immediate control over parallelization, for example, you want to run large-scale numeric computations and you want to schedule threads for optimal distribution of work.
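The "quick and dirty" case really is short: spawn, do the work, join. A minimal sketch with plain OS threads:

```rust
use std::thread;
use std::time::Duration;

fn main() {
    // Spawn a handful of OS threads; each runs truly in parallel.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            thread::spawn(move || {
                // Simulate some independent work.
                thread::sleep(Duration::from_millis(10));
                i * 10
            })
        })
        .collect();

    // join() blocks until the thread finishes and yields its return value.
    let results: Vec<i32> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(results, vec![0, 10, 20, 30]);
}
```

Note the `move` closure: each thread takes ownership of its copy of `i`, since a spawned (non-scoped) thread may outlive the loop that created it.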

In contrast, async/await is useful if what you are doing is I/O bound, i.e. you can benefit from awaiting instead of blocking. However, it's trickier to get started with: async code looks like synchronous code but behaves quite differently from how it reads. Async tasks aren't required to map to OS threads one-to-one, and they usually don't. They are not suited for CPU parallelization per se.

Define "problem". Spawning more threads than there are cores is generally useless for CPU-bound work and can slow down your program if scheduling and locking overhead starts to dominate the gains from parallelism. But it's usually not a semantic error or a source of bugs in itself.