Should you only use futures for I/O-bound tasks?

So I'm wondering: should futures be used only for I/O-bound tasks and not CPU-bound ones? Does running CPU-bound tasks on futures hurt performance in some way, or is it a bad idea for some other reason?

The design of async is inherently based on a "trick" for running many I/O-bound tasks without using many threads, but there is no such trick for CPU-bound tasks: running many of them concurrently requires threads.

So should I/O-bound tasks have threads only for I/O, and CPU-bound tasks have threads only for CPU? And what happens if a CPU-bound task depends on an I/O-bound task?

That's not quite accurate when you have a mix of the two. Ideally, you should think in terms of a pool of threads that can, in principle, run any task, and of the maximum amount of uninterrupted (blocking) time any one task is allowed to occupy a thread.

This is because there are only so many hardware CPU cores available to run either kind of task, and each thread consumes a base amount of resources, so you should avoid creating an unbounded number of them.

If you have a purely CPU-bound task that could run long enough to affect the rest of the system (occupying a thread so other tasks can't progress), you can break up its work by inserting calls to APIs like tokio::task::yield_now.

Alternatively, you can simply take the hit of spinning up a thread when needed with APIs like tokio::task::spawn_blocking, or use channels to communicate with "the sync world" if you want a library specialized for CPU-bound work, such as rayon.

Though I won't have a lot of I/O-bound tasks. Would it be possible to have just one thread pool whose worker threads are mostly for the CPU-bound tasks, with one or two reserved for I/O-bound tasks, and to skip futures for the I/O-bound tasks, since I won't have many of them and futures carry a bit of overhead? I just don't want a CPU-bound task to block for a long time because of an I/O-bound task it depends on. So maybe everything that depends on the I/O-bound task should be grouped with it, so it doesn't slow the rest of the CPU-bound work down.

Handling the two kinds of resource should be separated (on separate machines ideally, since each calls for different hardware specs; on separate threads otherwise). So on, say, an 8-core machine you might dedicate one or two threads to handling I/O and the rest to CPU-intensive computation (depending on the load balance), because you don't want computations to block I/O.

what if a CPU-bound task depends on an I/O-bound task?

That's why you have queues. A thread running computations will do as much as it can, then do something else while a task is waiting for I/O completion; once the I/O is done, it will continue the computation.
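The queue idea can be sketched with std channels alone: an "I/O" thread pushes completed results into a queue, and a CPU worker consumes whatever is ready instead of blocking on any single operation. The values here are stand-ins for real I/O completions.

```rust
use std::sync::mpsc;
use std::thread;

fn process_completions() -> u64 {
    let (tx, rx) = mpsc::channel();
    let io_thread = thread::spawn(move || {
        for n in 0..4u64 {
            tx.send(n).unwrap(); // pretend this is a finished read
        }
        // tx is dropped here, which ends the receiving loop below
    });
    let mut total = 0;
    for n in rx {
        total += n * n; // CPU work on whatever has arrived so far
    }
    io_thread.join().unwrap();
    total
}
```

An async executor is essentially this pattern generalized: the queue holds ready-to-run tasks rather than raw values.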

There are various guidelines for how often to do these yields, too.

For web browsers and GUI applications, either 50ms (to stay responsive within a 100ms deadline) or 15ms (one frame at 60fps) are good choices.

For latency-sensitive servers, 500 microseconds is a common choice. For batch processing (not latency-sensitive), you can sometimes go all the way up to 1 second, though things like network timeouts may limit you to 360ms, and at that scale you should emit progress information instead of a simple yield.
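A simple way to hit such a budget without yielding on every iteration is to check a clock and yield only once the slice is spent. A std-only sketch, where `yield_fn` stands in for whatever yield the environment needs (an async yield_now, a GUI event pump, a progress report):

```rust
use std::time::{Duration, Instant};

// Time-sliced loop: do work until the budget elapses, then yield and
// start a new slice. Avoids paying for a yield on every iteration.
fn run_with_budget(iterations: u64, budget: Duration, mut yield_fn: impl FnMut()) -> u64 {
    let mut total = 0;
    let mut slice_start = Instant::now();
    for i in 0..iterations {
        total += i; // stand-in for a unit of real work
        if slice_start.elapsed() >= budget {
            yield_fn();
            slice_start = Instant::now();
        }
    }
    total
}
```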

This can actually be a reason to use async for CPU-bound tasks — as long as you keep them on a separate thread pool (perhaps managed by an independent async executor). When the CPU-bound task awaits something completing on the other pool, it will necessarily yield, allowing the executor to schedule a different CPU-bound task. (In contrast, if you run a synchronous-blocking task, you can't tell from the outside whether the task is currently computing or blocking — unless it announces it somehow like rayon does, but rayon's scheduling strategy is not suitable for interacting with non-CPU-bound tasks.)

In general, async is a very flexible mechanism, and the thing to keep in mind is not so much “whether to use async”, but rather, if you are using async, to think about the character of the tasks you're running, and try not to mix tasks with incompatible latency requirements and yield-periods on the same executor.


Let me plug something I wrote which fits in right here: yield-progress defines a common abstraction for "report progress and yield", letting tasks be independent of both the destination of their progress info and the particular flavor of yield() needed (yielding to an executor, or checking a clock before yielding). It's not all that polished yet, but I hope it'll be useful for those using async in domains beyond simple I/O multiplexing.


Some form of spawning tasks and then waiting for their results can be useful for certain kinds of CPU-bound computations. But other approaches, like scoped threads, query systems, or rayon's parallel iterators, can be used too.

I figure async runtimes are mostly geared towards I/O because that's the pattern you get when you try to do several latency-sensitive I/O operations concurrently while maintaining a facade of imperative code.
For batch processing those patterns may not be needed, since you can have a bunch of threads slurp in large amounts of data, process it in memory, and then persist it at the end (or on another thread) with large writes.

You might also want to look into other approaches to structuring CPU tasks, such as scoped threads, rayon, channels, concurrent data structures, query systems, and so on.
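To illustrate the scoped-threads option: std::thread::scope (stable since Rust 1.63) lets worker threads borrow local data directly, with no async executor involved. A minimal sketch:

```rust
use std::thread;

// Split a slice in two and sum the halves on separate scoped threads;
// the scope guarantees both threads finish before `data` goes out of use.
fn parallel_sum(data: &[u64]) -> u64 {
    let (left, right) = data.split_at(data.len() / 2);
    thread::scope(|s| {
        let l = s.spawn(|| left.iter().sum::<u64>());
        let r = s.spawn(|| right.iter().sum::<u64>());
        l.join().unwrap() + r.join().unwrap()
    })
}
```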

Thanks so much, this really helped explain things to me. I was really against async because it seemed to block a lot, and I hadn't thought of stealing work from somewhere and executing it while waiting for the other work to finish, in order to stay as efficient as possible. So I'm going to use sync when not depending on I/O-bound tasks and async when depending on them. Though I may not actually use yielding, as it switches context, which can be expensive. Which is why I will just have a sync thread pool, an async thread pool, and an I/O thread pool.