How do I debug a tokio hang? (tokio-0.3.3)

I recently updated my code from tokio-0.2 to tokio-0.3.3. It mostly went fine, but after a few input events the program stops handling input (network events are still handled), and I have no idea how to debug this. Can anyone give me some pointers?

The PR that updates tokio from 0.2 to 0.3.3 is here. The input handling code is ported to tokio-0.3.3 with 24 lines of change, in term_input/src/. If you'd like to see the problem, run cargo run --example chat in the directory libtiny_tui. Make sure you're on branch tokio-0.3.2 (note that the branch name says 0.3.2 but it actually updates to 0.3.3).

Any pointers would be appreciated. Thanks!

I see you're using AsyncFd on stdin. That's always a bit dangerous — there's a reason that Tokio's stdin type uses blocking IO.

Edit: I see you're talking about it on GitHub.

Try wrapping parts of the code in a struct like this:

use std::pin::Pin;
use std::task::{Context, Poll};
use std::future::Future;

struct TimedPoll<F> {
    future: F,
}

impl<F: Future> TimedPoll<F> {
    pub fn new(future: F) -> Self {
        Self { future }
    }
}

impl<F: Future> Future for TimedPoll<F> {
    type Output = F::Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
        println!("Start poll");
        let before = std::time::Instant::now();
        let future = unsafe { Pin::map_unchecked_mut(self, |me| &mut me.future) };
        let res = future.poll(cx);
        println!("Stop poll: {:?}", before.elapsed());
        res
    }
}

You can wrap it like this:

TimedPoll::new(async { ... }).await

If there are any parts of the code that print a start poll without a stop poll, or a stop poll with large durations, that part is blocking the thread, which is one way a hang can happen.

I agree that using AsyncFd (or previously PollEvented) on stdin is not ideal, but I'm OK with that. This is not a library that I publish on, and in my application I know how to use it with all the caveats. The previous PollEvented version worked great so far.

I'll look into your suggestion now.

I tried wrapping the awaited futures like

TimedPoll::new(async { ... }).await

I realized that all polls on input return; there are no hangs. The program also handles other events just fine, like resizing (using tokio's signal handling) and network events. So it seems like there's a problem with updating the readiness of stdin.

Basically, when I press keys I see output printed and keys handled three times; after that it just stops. The rest of the key presses do not trigger a poll.

Ah, you probably need to call poll_read_ready again after the call to clear_ready. The AsyncFd only promises to wake you up if you have gotten a Poll::Pending from it.

Try replacing the Poll::Pending return on line 198 (after diff) with a recursive call to the method, or make the method use a loop around that match and put a continue on that line.

Another option is to manually emit a wake-up at that pending to have the executor come back immediately.

Thanks. I did this change

diff --git a/term_input/src/ b/term_input/src/
index bb23791..c4c99dd 100644
--- a/term_input/src/
+++ b/term_input/src/
@@ -191,12 +191,10 @@ impl Stream for Input {
         // Otherwise read stdin and loop if successful
         match self_.stdin.poll_read_ready(cx) {
             Poll::Ready(Ok(mut ready)) => {
-                if read_stdin(&mut self_.buf) {
-                    Input::poll_next(self, cx)
-                } else {
+                if !read_stdin(&mut self_.buf) {
                     ready.clear_ready();
-                    Poll::Pending
                 }
+                self.poll_next(cx)
             }
             Poll::Ready(Err(err)) => Poll::Ready(Some(Err(err))),
             Poll::Pending => Poll::Pending,

which worked! I'm not sure why though. What's the difference between tokio-0.2 and 0.3 that requires this change?

Another option is to manually emit a wake-up at that pending to have the executor come back immediately.

If I understand this correctly, then I prefer the current solution (above) as it doesn't require back-and-forth between the executor and the current task.

From PollEvented::clear_read_ready.

Clears the I/O resource's read readiness state and registers the current task to be notified once a read readiness event is received.

The clear_ready on AsyncFd has no such clause about registering the current task to be notified.


Ah, that makes sense, thanks.

Currently my poll_next function looks like this:

    fn poll_next(mut self: Pin<&mut Input>, cx: &mut Context) -> Poll<Option<Self::Item>> {
        let self_: &mut Input = &mut *self;

        'main: loop {
            // ... yield existing events in the buffer ...

            // Read stdin and loop if successful
            let mut poll_ret = self_.stdin.poll_read_ready(cx); // stdin is AsyncFd
            loop {
                match poll_ret {
                    Poll::Ready(Ok(mut ready)) => {
                        if read_stdin(&mut self_.buf) {
                            continue 'main; // yield parsed events, then poll again
                        } else {
                            ready.clear_ready();
                            poll_ret = self_.stdin.poll_read_ready(cx); // poll now and check new readiness
                        }
                    }
                    Poll::Ready(Err(err)) => {
                        return Poll::Ready(Some(Err(err)));
                    }
                    Poll::Pending => {
                        return Poll::Pending;
                    }
                }
            }
        }
    }
Any comments on the correctness of this? It works nicely, but I'm not sure whether I'm calling clear_ready in the right place. read_stdin returns false when stdin is empty, so the then branch is taken when we were able to read from stdin.

You should not always call clear_ready. It should only be called if a read syscall has returned WouldBlock.

It is critical that this function not be called unless your code actually observes that the file descriptor is not ready. Do not call it simply because, for example, a read succeeded; it should be called when a read is observed to block.


Such an incorrect call would cause issues: if you call clear_ready while there is still unread data in the pipe, then poll_read_ready will return Pending until even more data arrives, so you never read the remaining data that was already there.

read_stdin reads all available data from stdin, so the assumption is that the next read would return EWOULDBLOCK. I don't actually check that, though. I simply read the number of available bytes using ioctl and then read that amount. One assumption here is that when ioctl says N bytes are available, the next read will return those N bytes. I have a debug assertion that fails when this doesn't hold, and I've never seen it fail.

Perhaps I should refactor this part of the code too and keep calling read until it returns EWOULDBLOCK.

Sounds like there is a race condition here: more bytes can be written to your stdin between your calls to ioctl and read.

Yeah, I think you are right. Better to refactor this to only call read until EWOULDBLOCK is seen. I guess for that I have to first set the fd to non-blocking mode (which is probably why this is implemented using ioctl currently; I probably couldn't figure out how to set it to non-blocking mode years ago when this code was first written).

Thinking about this more, I think this would not be a problem if the old PollEvented::clear_read_ready was edge-triggered. I think that's not the case, though. Quoting the documentation:

Clears the I/O resource's read readiness state and registers the current task to be notified once a read readiness event is received.

I'm guessing this means it's edge-triggered?

How does edge-triggering avoid the race condition?

Sorry for the confusion. It doesn't eliminate the race condition, but the race is simply not a problem. If ioctl returns N and then M more bytes become available before I read the first N bytes, the stream would be polled again, as stdin would still be ready for reading (assuming level-triggered). So I do one more iteration, but read the new bytes without any delay.

With edge-triggered I don't read the new M bytes until there's more data available.

I'm not sure I understand.

Here's a simpler version of my code:

pub struct Input(AsyncFd<RawFd>); // assume RawFd implements AsRawFd

impl Stream for Input {
    type Item = std::io::Result<Event>;

    fn poll_next(mut self: Pin<&mut Input>, cx: &mut Context) -> Poll<Option<Self::Item>> {
        // ...
    }
}
poll_next is called when the fd is ready. Previously, as we discussed above, it would first call ioctl to get the number of bytes available, then call read to read that exact amount.

The problem is that more bytes could become available after the ioctl but before the read. In that case we don't read the full contents of the fd.

Is this a problem?

  • If poll_next will be called again whenever there is data available in the file descriptor after it returns, then no, because it will be called immediately after it returns and will read the new bytes. (level-triggered case)
  • If poll_next will only be called when new data becomes available after it returns, then yes, because the data that became available between ioctl and read will only be read after even more data arrives. (edge-triggered case)

I hope this makes more sense?

I mean, let's say you do the ioctl thing, read some bytes, and return Poll::Ready(Some(...)). Then, since you didn't return Pending, you can expect the caller to call you again soon. However, if you cleared the readiness flag, then your call to poll_read_ready inside that next call to poll_next will return Pending, causing you to return Pending on the second poll_next.

So if poll_read_ready won't return Ready when I've cleared the readiness flag but there is more data available (which is what I call "edge-triggered"; maybe I'm misusing the term), then you are right that this is an issue. My understanding is that this is the case, hence ioctl + read was a problem.

If poll_read_ready returned Ready when I call clear_ready but there is more data available, then ioctl + read would be fine.