Idiomatic way of handling stack overflow / OOM?

Context: this article + this "review". In particular:

For memory allocations, you’re mostly out of luck in Go because of its GC. I have more hope for Rust, where the Kernel integration work is slowly forcing it to grow up and build some more reliable interfaces that don’t panic, e.g. Box::try_new(). But this will require refactoring all existing code.

Still, I’m amazed that this is supposed to be the state of the art for systems programming. Heck, even Java handles this much better: Catch an OutOfMemoryError and apologize to one client instead of killing the whole server and interrupting service to thousands of clients. Exceptions can handle all these system errors gracefully. And in your own code, you don’t have to refactor the world every time you discover a new edge-case and add error checking to a corner of your code.


Given the focus on overall safety (memory safety, in particular), having no stable version of the compiler that would allow for fallible allocations - be it stack- or heap-based - does seem somewhat out of place. I see there are only a handful of articles (#1) floating around on the subject. Since @kornel's post back in January of last year (#2), has there been any progress?

In addition to the heap, I couldn't help but wonder about the stack as well. Is there any idiomatic, industry-standard way of handling stack overflow to begin with? Where would Rust fit into it?


To be clear, handling OOM is a matter for libraries, not the compiler. This distinction matters because "Rust" (the language) does support fallible allocations. Indeed, the lower-level APIs allow you to handle allocation errors however you wish. This all means that crates like fallible_vec can and do exist.
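For illustration, here is a minimal sketch of handling allocation failure yourself through the low-level std::alloc API (the 1 GiB size and the alignment are arbitrary choices for the example):

```rust
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    // Try to allocate 1 GiB; size and alignment are arbitrary here.
    let layout = Layout::from_size_align(1 << 30, 8).expect("invalid layout");

    // SAFETY: `layout` has a non-zero size.
    let ptr = unsafe { alloc(layout) };

    if ptr.is_null() {
        // The allocator reported failure; recover however you like
        // instead of aborting the whole process.
        eprintln!("allocation failed, degrading gracefully");
        return;
    }

    // ... use the buffer ...

    // SAFETY: `ptr` was allocated above with this exact `layout`.
    unsafe { dealloc(ptr, layout) };
}
```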

What is true is that the higher-level APIs in the standard library call handle_alloc_error on failure, which may panic or abort but never returns an error value. Having more fallible APIs in the standard library is, I think, desirable, but there has been some debate about whether they should be separate data structures. E.g. having try_ versions of every Vec method bloats Vec's API and doesn't ensure that the non-try variants are never called.
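That said, a few fallible methods are already stable, notably Vec::try_reserve. A rough sketch of surfacing allocation failure as a Result (the function name and the input are made up for the example):

```rust
use std::collections::TryReserveError;

fn copy_fallible(input: &[u8]) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    // Reserve up front and return the error instead of aborting on OOM.
    buf.try_reserve(input.len())?;
    buf.extend_from_slice(input);
    Ok(buf)
}

fn main() {
    match copy_fallible(&[1, 2, 3]) {
        Ok(buf) => println!("copied {} bytes", buf.len()),
        Err(e) => eprintln!("could not allocate: {e}"),
    }
}
```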

OOM and the way it is handled can be a PITA, that's for sure. But like leaking memory (e.g. using Box::leak()), it isn't a safety concern: there is no UB that can be triggered by an OOM.
That's likely why it hasn't been a focus up until now.
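As a trivial sketch of that point, leaking via Box::leak is plain safe code:

```rust
fn main() {
    // Leaking is memory-safe: the allocation is simply never freed,
    // and we get a `&'static mut` reference to it.
    let leaked: &'static mut u32 = Box::leak(Box::new(42));
    *leaked += 1;
    println!("{leaked}");
}
```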

The general solution is based on ulimit, which allows one to set the default stack size for processes.
The caveat there is that ulimit doesn't really exist on Windows, and on macOS it is rather limited in capabilities IIRC. Windows users are SOL with this one.

There is, however, a second option: spawn a separate thread. With the API provided by the std::thread module one can set the stack size for a new thread. One limitation of this approach is that it cannot be applied to the main thread.
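As a sketch, spawning a worker thread with an explicit stack size via std::thread::Builder (the 32 MiB figure and the recursive workload are arbitrary examples):

```rust
use std::thread;

fn deep_recursion(n: u64) -> u64 {
    // Stand-in for work that actually needs a deep stack.
    if n == 0 { 0 } else { 1 + deep_recursion(n - 1) }
}

fn main() {
    // Run the stack-hungry work on a thread with a 32 MiB stack;
    // the main thread's stack cannot be resized this way.
    let handle = thread::Builder::new()
        .stack_size(32 * 1024 * 1024)
        .spawn(|| deep_recursion(100_000))
        .expect("failed to spawn thread");

    println!("depth = {}", handle.join().unwrap());
}
```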


Huh, the blog post this is quoted from is so terrible that I kinda glossed over that point. That's actually a good question.


In cases where you may need lots of stack (typically recursion of unknown depth, such as in a parser), one solution is stacker, which adds segmented stacks to Rust: additional stack memory is allocated (from the heap) when needed. This way you no longer have the problem of pre-allocating a stack of sufficient size for everything the program will ever do; stacks become just more heap usage.

stacker is even used by rustc. The big caveat is that it is not portable to everywhere Rust programs can run (though where it isn't supported it disables itself and falls back to the ordinary fixed stack).
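A rough sketch of how stacker is typically used inside a recursive function; the 64 KiB red zone and 1 MiB growth size are just illustrative values:

```rust
// Cargo.toml (version is illustrative):
// [dependencies]
// stacker = "0.1"

fn visit(depth: u64) -> u64 {
    // If less than ~64 KiB of stack remains, allocate a new 1 MiB
    // segment on the heap and continue the recursion there.
    stacker::maybe_grow(64 * 1024, 1024 * 1024, || {
        if depth == 0 { 0 } else { 1 + visit(depth - 1) }
    })
}

fn main() {
    println!("depth = {}", visit(1_000_000));
}
```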

