Quadratic slowdown with many nested futures


#1

Can the futures crate exhibit quadratic performance in the number of .and_then() calls?
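
For concreteness, here's the kind of chain I mean (a minimal sketch, assuming the futures 0.1 combinator API; the arithmetic is just a placeholder):

```rust
// Each .and_then() wraps the previous future in another combinator,
// so the final type nests one level deeper per call.
extern crate futures; // futures = "0.1"

use futures::future::{self, Future};

fn main() {
    let f = future::ok::<u64, ()>(0)
        .and_then(|x| future::ok(x + 1))
        .and_then(|x| future::ok(x + 1))
        .and_then(|x| future::ok(x + 1));
    // The type of `f` is roughly
    // AndThen<AndThen<AndThen<FutureResult<..>, ..>, ..>, ..>.
    assert_eq!(f.wait(), Ok(3));
}
```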


#2

That has certainly happened before, but that instance appears to have been fixed. What Rust version are you seeing this on?


#3

Note that the issue above is talking about compile times rather than the efficiency of the actual code.


#4

The problem is that the result has O(n) nested enums, so each poll builds and tears down an O(n)-deep call stack, and that happens O(n) times over the life of the chain, which is where the quadratic cost comes from. The version using async/await has no such problem, since it uses an O(1)-deep call stack.
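
For comparison, roughly what the same chain looks like in today's async/await syntax (a sketch; step() is just a placeholder, and block_on comes from futures 0.3):

```rust
// The compiler flattens the whole body into one state machine, so a
// poll resumes directly at the current await point instead of walking
// down through n layers of combinator wrappers.
async fn step(x: u64) -> u64 {
    x + 1
}

async fn chain() -> u64 {
    let x = step(0).await;
    let x = step(x).await;
    let x = step(x).await;
    x
}

fn main() {
    // block_on from futures = "0.3", just to drive the sketch to completion.
    assert_eq!(futures::executor::block_on(chain()), 3);
}
```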


#5

Ah, you’re talking about runtime performance. Sorry, compile time was what sprang to mind, and I assumed you were referring to that.

You’re referring to the Chain enum, right?

I assume that’s because all state is captured in the generator and it yields across successful returns?


#6

Yes. The problem is that deeply nested enums are generated. There would be no problem if they were flattened.
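
Roughly what I mean by flattened (hypothetical types, not the crate's actual definitions):

```rust
// Nested shape produced by chaining combinators: with n steps the value
// is n levels deep, and reaching the live state means matching through
// every layer on every poll.
#[allow(dead_code)]
enum Nested<Prev, Next> {
    First(Prev),
    Second(Next),
}

// Flattened shape: a single enum with one variant per step, so finding
// the current state is one discriminant check, independent of n.
#[allow(dead_code)]
enum Flat {
    Step1(u64),
    Step2(u64),
    Step3(u64),
    Done,
}

fn main() {
    // Three steps in the nested representation...
    let nested: Nested<Nested<Nested<u64, u64>, u64>, u64> =
        Nested::First(Nested::First(Nested::First(0)));
    // ...versus one flat discriminant.
    let flat = Flat::Step1(0);
    let _ = (nested, flat);
}
```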

Another problem: if you need to select between N different futures and run them concurrently, you will spend O(N^2) total time if you build the selection out of combinators.
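
Here's a sketch of the pattern I mean (assuming the futures 0.1 API, boxed for simplicity; select_naive is my own name for it):

```rust
// Folding N futures together with select() builds a tree N layers deep,
// and every wake-up re-polls it from the root, so draining all N
// futures costs O(N^2) polls in total.
extern crate futures; // futures = "0.1"

use futures::future::{self, Future};

type BoxFut = Box<dyn Future<Item = u64, Error = ()>>;

fn select_naive(futs: Vec<BoxFut>) -> BoxFut {
    futs.into_iter()
        .fold(None::<BoxFut>, |acc, f| {
            Some(match acc {
                None => f,
                // Each select() wraps the previous result in one more layer.
                Some(prev) => Box::new(
                    prev.select(f)
                        .map(|(winner, _rest)| winner)
                        .map_err(|(err, _rest)| err),
                ),
            })
        })
        .expect("at least one future")
}

fn main() {
    let futs: Vec<BoxFut> = (0..4)
        .map(|i| Box::new(future::ok::<u64, ()>(i)) as BoxFut)
        .collect();
    println!("first to finish: {:?}", select_naive(futs).wait());
}
```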

The root cause in both cases is that events propagate top-down, when they should propagate bottom-up, as they do in all other languages. Top-down event propagation is linear in the depth of the call stack, whereas bottom-up propagation takes amortized constant time per event.

With bottom-up propagation, the event loop holds, in addition to the future, a token that tells it which future to poll next. It then polls that future directly, without checking its parent. If and only if that future returns Async::Ready does the parent future get polled.
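
A toy sketch of the scheme I have in mind (my own illustration, not an existing API):

```rust
use std::collections::VecDeque;

// Leaves live in a flat table owned by the event loop; an event carries
// the token (index) of the leaf that became ready, and that leaf is
// polled directly. The parent's continuation runs only once the leaf
// has actually finished, so handling an event is O(1), not O(depth).
struct Leaf {
    events_until_ready: u32,
    result: u64,
}

struct EventLoop {
    leaves: Vec<Leaf>,
    ready_tokens: VecDeque<usize>, // filled in by I/O notifications
}

impl EventLoop {
    fn run(&mut self, mut poll_parent: impl FnMut(usize, u64)) {
        while let Some(token) = self.ready_tokens.pop_front() {
            let leaf = &mut self.leaves[token];
            if leaf.events_until_ready > 0 {
                // Still pending: constant work, the parent is never touched.
                leaf.events_until_ready -= 1;
            }
            if leaf.events_until_ready == 0 {
                // Equivalent of the leaf returning Async::Ready: only now
                // does control move up to the parent.
                poll_parent(token, leaf.result);
            }
        }
    }
}

fn main() {
    let mut el = EventLoop {
        leaves: vec![
            Leaf { events_until_ready: 2, result: 10 },
            Leaf { events_until_ready: 1, result: 20 },
        ],
        ready_tokens: VecDeque::from(vec![0, 1, 0]),
    };
    el.run(|token, value| println!("leaf {} ready with {}", token, value));
}
```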

I believe the reason that these problems have not appeared before is that few Rust programs use deep call stacks.