Tokio context switching for spawn vs spawn_blocking?

Can someone help me understand context switching in Tokio when using spawn_blocking? Spawning a regular task limits context switching to await points, but how does this translate when also using spawn_blocking? The spawn_blocking code runs on another thread. Does this mean that context switching relative to other tasks is unpredictable, as with regular multi-threaded code? Thx


Any code executed with spawn_blocking is ordinary non-async code. Each blocking task occupies a dedicated thread for the duration of the blocking call.


It's probably worth pointing out that there are two kinds of context switching in this story:

  • The first kind of context switch is when the kernel changes which OS thread runs on a CPU core.
  • The second kind of context switch is when Tokio changes which Tokio task runs on an OS thread.

The first kind of context switching always happens, both for async and non-async code. The second kind only makes sense in async code.

The kind of context switch that happens at an await is the second kind. The first kind can happen anywhere in an async task.


Ok great - thank you. Just to confirm: is the kernel context switching for async code a result of the default multi-threaded scheduler? Put another way, if I wanted explicit context switching only at await points, can I achieve this by using the single-threaded scheduler and avoiding spawn_blocking?


Kernel context switching always happens and is unrelated to .await points.

Tokio context switching happens only at .await points, but happens for both types of schedulers.


Kernel context switching would be something you configure in the kernel, not Tokio.

It might be possible to use tokio's single-threaded scheduler, and pin that thread to a specific CPU core (using kernel config) in order to achieve this, but you would only want to do that in very specific circumstances (e.g. certain embedded systems).


Await points don't do "explicit context switching." Even when you use tokio's single-threaded scheduler, it may choose any future to run if more than one is ready/pollable. See Alice's post for details.

If you want JS-like tasks that run on a single thread and never run concurrently, use tokio's LocalSet.

Otherwise, whatever context switching the kernel does underneath is not something you'll notice. For instance, the kernel may time-multiplex your Rust application's threads (including those in the tokio worker pool) with other processes if there are not enough cores on the machine. I get the sense, though, that that's not what you're asking: based on your comment that (execution) is "unpredictable as with regular multi-threaded code," you're probably interested in how the underlying schedulers' behavior affects your code.

