[Windows] Stack Overflow with Larger Buffer in Tokio Example

In this example, when I enlarge the buffer `buf` to 65536 bytes, I get a stack overflow as soon as I connect (on Windows). On Linux everything seems fine.

I mean, 1024 bytes as the read buffer is probably completely fine, but there are two points I am not sure about:

  1. I am worried that I will run into stack overflows everywhere. 64 KiB is not that much, even for the default Windows stack size (1 MB?), which seems to be smaller than the default Linux stack size (8 MB?). I could heap allocate everything just to be safe, but that is not an elegant solution.

  2. Where is this stack variable located anyway? I guess the normal stack frame that `rsp` points to is cleared when awaiting. Is there a heap allocation holding the local variables?

Since this variable is inside an async block, and the async block creates a future, the variable is stored in a field of that future. The `tokio::spawn` function will eventually heap allocate the future, but the future is created on the stack first and then moved to the heap, and if it is moved by value through a few function calls before it reaches the allocation, we get several big stack frames in the process of spawning it.
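You can see the buffer being baked into the future directly with `std::mem::size_of_val`. A minimal sketch using only the standard library (no tokio), assuming the buffer stays live across an `.await` point as a read buffer would:

```rust
use std::mem::size_of_val;

fn main() {
    // An async block that keeps a 64 KiB array alive across an .await point
    // must store that array inline in the generated future, so the future
    // itself is at least 64 KiB large.
    let fut = async {
        let buf = [0u8; 65536];
        std::future::ready(()).await; // `buf` is live across this await
        buf[0]
    };
    println!("future size: {} bytes", size_of_val(&fut));
    assert!(size_of_val(&fut) >= 65536);
}
```

This is exactly the value that gets moved around by value before `tokio::spawn` finally boxes it.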

In this case I would probably recommend heap allocating the buffer using a `Vec`.
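With a `Vec`, only the pointer/length/capacity triple is stored in the future, and the 64 KiB payload goes straight to the heap. A sketch of the same measurement, again std-only:

```rust
use std::mem::size_of_val;

fn main() {
    // The future only has to store the Vec's (ptr, len, capacity) triple,
    // not the 64 KiB of data, so it stays small.
    let fut = async {
        let mut buf = vec![0u8; 65536];
        std::future::ready(()).await; // `buf` is live across this await
        buf[0] = 1;
        buf.len()
    };
    println!("future size: {} bytes", size_of_val(&fut));
    assert!(size_of_val(&fut) < 256);
}
```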


Thanks for the explanation. This might go a bit too much into the implementation details, but why is the future allocated on the stack first? This seems like extra work.

An interesting thing I just found out: in release mode the 64K buffer does not cause a stack overflow. So there seems to be potential for optimization.

It's allocated on the stack because it's the argument to a function call. You can't really avoid that, although you can hope the optimizer takes care of it. The same issue applies to `Box::new`.
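A way to sidestep the stack temporary entirely, as a sketch: rather than `Box::new([0u8; N])`, which constructs the array on the stack before moving it into the box, build the buffer on the heap directly with `vec!`:

```rust
fn main() {
    // Box::new([0u8; 8 * 1024 * 1024]) would construct the 8 MiB array on
    // the stack first, which can overflow in debug builds. vec! writes
    // directly into a heap allocation, so no large stack frame is needed
    // regardless of optimization level.
    let buf: Box<[u8]> = vec![0u8; 8 * 1024 * 1024].into_boxed_slice();
    assert_eq!(buf.len(), 8 * 1024 * 1024);
}
```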
