How expensive is tokio::task::yield_now

I have a couple of components that send messages to each other using tokio::sync::broadcast. Since some incoming messages result in multiple outgoing messages, I would like to avoid overflowing the buffers (which results in lost messages) by yielding after each message has been sent. That makes me wonder how expensive calls to tokio::task::yield_now are. I would assume they are pretty cheap?

Looking at the implementation of yield_now, it's just "return Poll::Pending once, then return Poll::Ready when I'm polled again". So its cost is the cost of returning to the runtime and picking another woken task — it should be comparable to but cheaper than sending a message on a channel that's being awaited, since it's doing all the same things but not invoking any synchronization for the channel itself, and waking the same task instead of a distinct receiver task.
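That mechanism can be sketched as a hand-rolled version (the names here are my own, not tokio's actual code):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A minimal sketch of the mechanism: wake ourselves, return Pending
// once, then Ready on the next poll.
struct YieldNow {
    yielded: bool,
}

impl Future for YieldNow {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            return Poll::Ready(());
        }
        self.yielded = true;
        // Schedule this task to be polled again, then hand control
        // back to the runtime by returning Pending.
        cx.waker().wake_by_ref();
        Poll::Pending
    }
}

async fn yield_now() {
    YieldNow { yielded: false }.await
}
```

So the whole cost is one wake of the current task plus one round trip through the scheduler's run queue.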


It's interesting that you can call wake before returning Pending.

Out of curiosity and to better understand the mechanism: is this guaranteed to work with any runtime, or just with the Tokio runtime and maybe some other runtimes?

The docs say:

As long as the runtime keeps running and the task is not finished, it is guaranteed that each invocation of wake (or wake_by_ref) will be followed by at least one poll of the task to which this Waker belongs.

I guess that implies that even if a poll is currently in progress, it will be followed by another one? Though the wording isn't 100% unambiguous.

It has to work with any runtime. Imagine what would happen if the current future sends its waker to another thread to do some background work, and that background work finishes before the future has returned Pending to the runtime. If the runtime didn't honor a wake that arrives before Pending, you would get a deadlock.
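Here is a self-contained sketch of that scenario with a toy block_on built on thread parking (all names made up): the background thread will usually call wake while poll is still running, and the park/unpark token is what lets the executor not lose that wake.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread;
use std::time::Duration;

// A future that hands its waker to a background thread. That thread may
// call `wake` before `poll` has returned Pending to the executor.
struct Background {
    started: bool,
    done: Arc<AtomicBool>,
}

impl Future for Background {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.done.load(Ordering::SeqCst) {
            return Poll::Ready(());
        }
        if !self.started {
            self.started = true;
            let done = self.done.clone();
            let waker = cx.waker().clone();
            thread::spawn(move || {
                done.store(true, Ordering::SeqCst);
                waker.wake(); // likely runs before `poll` returns below
            });
            // Pretend the rest of `poll` takes a while, so the wake
            // almost certainly arrives before we return Pending.
            thread::sleep(Duration::from_millis(50));
        }
        Poll::Pending
    }
}

// Toy executor: a wake unparks the executor thread. Crucially, an
// unpark that happens *before* park is remembered (the park "token"),
// so a wake-before-Pending still leads to another poll, not a deadlock.
struct ThreadWaker(thread::Thread);
impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        thread::park();
    }
}
```

If block_on instead discarded wakes that arrive mid-poll, the final park would never be woken.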


(@jbe) For further illustration: consider that futures can be composed, in particular with operations such as join (a macro in both futures and tokio). That means the future responsible for arranging a wake (for itself) might only be one part of a bigger composite future, whose poll keeps running long after that smaller component future's poll has ended. Even if the component future somehow only called wake after it had returned Pending, that wake could still arrive before the composite future's overall poll finishes, and in a multi-threaded environment there would be no way to prevent this.
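A hand-rolled two-future join (a sketch, not the real join! macro) makes the mechanics visible: after the first child has called wake and returned Pending, the outer poll still goes on to poll the second child, so the wake can land mid-poll.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Sketch of a 2-way join over Unpin futures.
struct Join<A, B> {
    a: Option<A>,
    b: Option<B>,
}

impl<A, B> Future for Join<A, B>
where
    A: Future<Output = ()> + Unpin,
    B: Future<Output = ()> + Unpin,
{
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let this = &mut *self;
        if let Some(a) = this.a.as_mut() {
            // `a` may call wake in here and return Pending...
            if Pin::new(a).poll(cx).is_ready() {
                this.a = None;
            }
        }
        if let Some(b) = this.b.as_mut() {
            // ...but the outer poll keeps going: `b`'s poll can run for
            // a long time, during which `a`'s wake may already arrive.
            if Pin::new(b).poll(cx).is_ready() {
                this.b = None;
            }
        }
        if this.a.is_none() && this.b.is_none() {
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}
```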


@bjorn3 @steffahn Thank you for explaining. Each of your examples makes sense to me. So I can count on a wake leading to another poll, even if a poll is currently running. I still think the documentation isn't 100% explicit on that, but maybe it's not a big issue.

Hmmm :thinking:… Sending a message to a broadcast::Sender doesn't yield. It does lock (see source), but that's a non-async lock in this case. So I would say that sending a message doesn't involve any invocation of the scheduler, am I right? I don't really know how expensive scheduling is, though I guess the point of async programming is to have a lightweight scheduler (as opposed to making context switches and letting the OS scheduler kick in).

All runtimes allow doing this and many futures do it.
