The async-std blog says:
The new runtime detects blocking automatically. We don’t need spawn_blocking anymore and can simply deprecate it. The new runtime makes blocking efficient. Rather than always paying the cost of spawn_blocking, we only offload blocking work to a separate thread if the work really blocks.
They provide `spawn_blocking` as an unstable feature. Do I need to use `spawn_blocking` for blocking tasks and `spawn` for non-blocking ones, or is their runtime efficient enough to detect a blocking task and move it to a separate thread pool?
How do they know whether my code hangs in a loop for a long time?
Reading the blog, it simply times the task, and spawns a new executor thread if the time is too long.
From the top of the post
Note: This blog post describes a proposed scheduler for async-std that did not end up being merged for several reasons. We still believe that the concept holds important insight and may circle back to it in the future, so we keep it online as is.
I suspect that they still aren’t using this new scheduler based on the note
If you know something is going to block, you're always better off explicitly spawning it as blocking, instead of relying on the runtime to observe the blocking and try to fix it after the fact.
A cunning but perhaps obvious solution.
Not sure I like it.
It basically means my blocking call will hang up my whole async system until it is deemed to be taking too long.
And then it adds the overhead of creating a thread to run it on.
Who determines how long is too long to hang up my system?
Kind of smacks of garbage collection pauses.
Is it not better that I spell out what I want so that it gets done at compile time?
This is not actually implemented, and probably won't be, so you must use `spawn_blocking` for blocking tasks.
I don't think that is necessarily true. My mental model (which may be wrong) is that at any time there is a set of async tasks that are ready to run, and a set of threads available to execute an async task. We don't want the set of threads to be too large, but on the other hand we want to keep the available CPU cores busy. It seems like the decision on whether to create a new thread (due to the CPU cores not being busy, together with the thread set being empty and the task set being non-empty) might take up extra processing time, so perhaps it's not worth the effort. I don't know; the above is just my mental model, and I don't know what practical difficulties arise.
`spawn_blocking` is behind the `unstable` feature flag.
I mean, async-std is mostly dead at this point. It's not unstable in Tokio.
That blog post is more than 2 years old, and it seems they haven't implemented it yet.
The actual algorithm is based on Go's scheduler and described here if you are interested: Go's work-stealing scheduler · rakyll.org. Here's the PR that was never merged: New scheduler resilient to blocking · Pull Request #631 · async-rs/async-std · GitHub.
After thinking about the problem a bit more, I think the point is that there is no sensible way for an async runtime to decide when to stop creating more threads (if you have long-running or blocking async tasks). It's a decision that probably cannot reasonably be delegated to the async runtime, therefore it's best to have `spawn_blocking`. A counter-argument might be that you could help out programs that fail to use `spawn_blocking` when it is needed, perhaps with some arbitrary limit on the number of threads created, but if the overhead could impact well-behaved programs it would be a questionable feature (though I guess it could be optional).
The Tokio blog has a section explaining why the strategy has not been implemented in Tokio. You can find it here.