Where do computationally heavy parts go in async model of Rust

From the tokio guides:

Tasks are passed to the runtime, which handle scheduling the task. The runtime is usually scheduling many tasks across a single or small set of threads. Tasks must not perform computation-heavy logic or they will prevent other tasks from executing. So don’t try to compute the fibonacci sequence as a task!

(Emphasis mine)

Given the flow being:

  • Receive a message over TCP socket
  • Spawn an async task
  • Do some heavy calculations based on the message
  • Respond

However, the guide advises against Step 3 above. How do you fit a computationally intensive task in the async model?

Computationally heavy tasks are normally offloaded to a threadpool. You'll normally be given back a Future which will resolve when the task is finished.

You may be looking for the tokio-threadpool crate.
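Independent of tokio's exact API, the underlying pattern can be sketched with just the standard library: hand the heavy closure to another thread and get back a handle that resolves when the result is ready (a stand-in for the Future a real threadpool crate would give you; the `offload` name here is illustrative):

```rust
use std::sync::mpsc;
use std::thread;

/// Run a heavy computation on a dedicated thread and return a
/// receiver that yields the result once it's finished -- a std-only
/// analogue of getting a Future back from a threadpool.
fn offload<T, F>(work: F) -> mpsc::Receiver<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Ignore the error: the caller may have dropped the receiver.
        let _ = tx.send(work());
    });
    rx
}

fn main() {
    // The IO thread stays free while the computation runs elsewhere.
    let pending = offload(|| (1u64..=20).product::<u64>()); // 20!
    println!("result: {}", pending.recv().unwrap()); // 2432902008176640000
}
```

A real threadpool reuses a fixed set of worker threads instead of spawning one per job, but the shape (submit a closure, await a handle) is the same.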


It's a bit confusing, isn't tokio-runtime itself a ThreadPool?

The ThreadPool configuration schedules Futures across a thread pool. This is also the default configuration used by the Tokio runtime.

Would it mean running 2 thread-pools within the same application? That would really complicate the communication topology.


Because even on a threadpooled runtime, without offloading computation to dedicated threads, N heavy tasks can make the server unresponsive for some amount of time. For servers it's preferable to reply to lighter requests quickly even while heavy computation is in progress, even if we need to sacrifice a bit of throughput for it.


Feels like a use case similar to that served by Celery/Sidekiq. I thought one would not need something like that with Rust.

Tools like Celery and Sidekiq are what you might reach for when the task you're doing is larger than one machine can handle. Whereas a threadpool is used when your task fits on a machine, but you don't want to tie up the IO thread with long-running tasks.

It's quite conceivable that one day there'll be a Rust equivalent of Celery, so parts of your system can throw background tasks onto a job queue to be consumed by a bunch of worker machines.

Looking through the tokio repository and some of their issues, it seems like the crate is still in a state of flux after restructuring from the previous "lots of little crates wrapped behind the tokio facade" structure to a more centralised one. This comment mentions that they're still refining how offloading of CPU heavy work will be done, but in the meantime you can use tokio_executor::blocking::run() from the tokio-executor crate and the provided closure will be run on a threadpool dedicated to blocking operations.

I don't see how it would complicate things. Even if it started a second threadpool, you'd normally use a mechanism like channels to synchronise futures, so the futures aren't really concerned about which thread they're running on... Having half a dozen threads sitting around unused feels like a bit of a waste though.


One needs something like async in pretty much all languages. It's either that or use threads.

Consider you want to do something like this (in a sort of JavaScript-like pseudocode):

function readStreamA (a) {
    while true {
        print (a.read);                      // Read waits for a to get data.
    }
}

function readStreamB (b) {
    while true {
        print (b.read);                     // Read waits for b to get data.
    }
}

Clearly this will not work. If readStreamA waits forever for data then, execution stops in the read call. The function never returns. readStreamB never happens. And vice versa and etc.

To get around this one might use threads to run the functions instead.

thread.spawn (readStreamA(a))
thread.spawn (readStreamB(b))

Here the spawn creates a thread for each function, which then runs forever. If one is waiting, the other can proceed when it has data.
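In Rust the thread-per-stream version looks roughly like this (a minimal sketch; channels stand in for the two sockets, since `recv()` blocks just like the pseudocode's `read`):

```rust
use std::sync::mpsc;
use std::thread;

// Each "stream" blocks in recv() until data arrives, exactly like
// the blocking read in the pseudocode above.
fn read_stream(name: &'static str, stream: mpsc::Receiver<String>) {
    for msg in stream {
        println!("{name}: {msg}");
    }
}

fn main() {
    let (tx_a, rx_a) = mpsc::channel();
    let (tx_b, rx_b) = mpsc::channel();

    // One OS thread per stream: if A blocks waiting, B still runs.
    let a = thread::spawn(move || read_stream("a", rx_a));
    let b = thread::spawn(move || read_stream("b", rx_b));

    tx_a.send("hello".to_string()).unwrap();
    tx_b.send("world".to_string()).unwrap();

    // Dropping the senders ends both loops so the threads can exit.
    drop(tx_a);
    drop(tx_b);
    a.join().unwrap();
    b.join().unwrap();
}
```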

Creating threads is a heavyweight, time- and space-consuming mechanism. If you need thousands of threads you will use a lot of memory for their stacks, and you will waste a lot of time swapping the CPU from running one to another (scheduling overhead).

To do this more efficiently one wants to run many of those loops on a single core. After all, they are doing nothing but waiting most of the time. JavaScript does this with its event loop, which runs everything on a single thread and uses callback functions to run bits of code when data arrives:

a.on_data (function (data) {
        print (data)                       // Lambda function called when a has data
});

b.on_data (function (data) {
        print (data)                       // Lambda function called when b has data
});

As JS only has one thread, clearly long computations in any of those event handlers will hang up everything and cause problems.

The whole idea of async is to do what JavaScript is doing, but without all that messy callback handler syntax.

But of course we do actually have many cores nowadays, so why not spread the work around for performance? Enter the thread pools!

I'm not sure any of this async stuff is actually ever necessary unless you really do need thousands of tasks in your program. Async should be avoided otherwise. Regular threads are just fine and simple in many cases, especially if you have a lot of compute work to do.

As the async book says in its introduction:

"It's important to remember that traditional threaded applications can be quite effective, and that Rust's small memory footprint and predictability mean that you can get far without ever using async. The increased complexity of the asynchronous programming model isn't always worth it, and it's important to consider whether your application would be better served by using a simpler threaded model."



Rust already has crates for most common queue implementations, including redis, sqs and amqp.

Yes, but they are at varying levels of feature completeness. Furthermore, Celery/Sidekiq's main offering is the distributed task runner itself (not just the queue connector), and that is currently absent for Rust.

Celery is a convenient wrapper over a message queue client, plus some tooling and a set of conventions; it doesn't do any distribution itself.
It would be nice to have this kind of tool for Rust, but not having it doesn't prevent anyone from distributing tasks.