Tokio poll_future: panic triple - catch_unwind, try, do_call

Hi, I'd appreciate any insight into the following.

Context:
I have an in-process hyper client and hyper server (serving "hello world!" as static bytes) running on separate threads. The server runs in its own Tokio runtime, and the client requests are made via futures. I managed to get it all running without error. If I remove the single line that calls the hyper client, I record approx 5M "pseudo req"/sec (Celeron(R) G550T @ 2.20GHz, 2 cores). During that time the CPU is saturated, so the client-side setup appears to be OK.
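For context, the setup is roughly along these lines (a simplified sketch, assuming hyper 0.14 and Tokio 1.x; the port, request count, and startup delay are placeholders, not my actual code):

```rust
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Client, Request, Response, Server};
use std::convert::Infallible;
use std::time::Duration;

fn main() {
    // Server thread: its own Tokio runtime serving "hello world!" as static bytes.
    std::thread::spawn(|| {
        let rt = tokio::runtime::Runtime::new().unwrap();
        rt.block_on(async {
            let make_svc = make_service_fn(|_conn| async {
                Ok::<_, Infallible>(service_fn(|_req: Request<Body>| async {
                    Ok::<_, Infallible>(Response::new(Body::from("hello world!")))
                }))
            });
            Server::bind(&([127, 0, 0, 1], 3000).into())
                .serve(make_svc)
                .await
                .unwrap();
        });
    });
    // Crude: give the server a moment to bind before the client connects.
    std::thread::sleep(Duration::from_millis(100));

    // Client side: a second runtime driving the requests as futures.
    let rt = tokio::runtime::Runtime::new().unwrap();
    rt.block_on(async {
        let client = Client::new();
        for _ in 0..10_000 {
            let resp = client
                .get("http://127.0.0.1:3000/".parse().unwrap())
                .await
                .unwrap();
            let _body = hyper::body::to_bytes(resp.into_body()).await.unwrap();
        }
    });
}
```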

When I activated the client I initially observed approx 15K req/sec, with the CPU about 2% idle. That was lower than I expected (based on testing kcup and miniserve locally with wrk).
On investigation (disclosure: this level of performance analysis is relatively new to me) I observed that after a call to tokio::runtime::task::harness::poll_future there was almost always a 'panic triple' (essentially the same as shown in the attached flamegraph).
This, plus the CPU now being 2% idle, made me think there was some issue with the way I'd set up the futures that run the hyper client.

There was.
I was calling the code that sets up the hyper-client futures twice. Fixing that, I observed approx 36K req/sec. Problem solved - I thought. Well, sort of, kind of solved - the 2% idle CPU still puzzles me. However, when I checked the flamegraph (attached), I noticed the 'issue' still appears: at bar #9 (counting down) there is the same 'panic triple' of catch_unwind, try, and do_call frames.
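For illustration, the mistake amounted to something like this (reconstructed, not my real code; the function name is made up):

```rust
// Reconstructed illustration only; the real function and names differed.
fn start_client(rt: &tokio::runtime::Runtime) {
    let _ = rt.spawn(async {
        // ... loop issuing hyper-client requests and awaiting the responses ...
    });
}

fn main() {
    let rt = tokio::runtime::Runtime::new().unwrap();
    start_client(&rt);
    start_client(&rt); // the accidental second call: two copies of the client
                       // loop now compete for the same runtime and connections
}
```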

Now I'm puzzled:

  1. Was my original guess wrong, and can these 'panic' lines safely be ignored?
  2. Or was I correct, and there is still more futures congestion I need to fix?

Tokio always uses catch_unwind when it polls a task, since that's how it avoids having panics take down the whole runtime. Those frames show up on every poll, even when nothing actually panics, so they can safely be ignored.
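You can see the pattern with plain std::panic::catch_unwind; this is just a sketch of the mechanism, not Tokio's actual internals:

```rust
use std::panic::{self, AssertUnwindSafe};

fn main() {
    // AssertUnwindSafe mirrors how a runtime might wrap an arbitrary future.
    // The closure here completes normally, yet any profiler sample taken
    // while it runs would still show catch_unwind frames on the stack.
    let ok = panic::catch_unwind(AssertUnwindSafe(|| 2 + 2));
    assert_eq!(ok.unwrap(), 4);

    // Only an actual panic turns the result into Err (the default panic
    // hook still prints a message to stderr, but the program keeps going).
    let err = panic::catch_unwind(|| panic!("boom"));
    assert!(err.is_err());
}
```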

