How to cleanly handle panics and OOM in a GUI application

This snippet shows how to install a custom panic hook and allocation error hook in your application.
It's useful for games and GUIs, as it lets you show a user-friendly error message no matter what happens to your program.

// Note: alloc_error_hook is unstable, so this requires a nightly compiler
#![feature(alloc_error_hook)]

fn set_hooks() {
    // Make the allocation error hook panic instead.
    // By default it aborts the program, which means no unwinding or panic handling can be done
    std::alloc::set_alloc_error_hook(|layout: std::alloc::Layout| {
        panic!("memory allocation of {} bytes failed", layout.size());
    });

    // Take the current (default) panic hook so it can still run after
    // the custom one, preserving the usual backtrace output
    let ph = std::panic::take_hook();
    std::panic::set_hook(Box::new(move |info: &std::panic::PanicInfo| {
        // Put custom panic handling here
        // e.g. popup window
        println!("{:?}", info);
        ph(info);
    }));
}

/// Reset the allocation error hook and panic hook to their defaults
fn unset_hooks() {
    std::alloc::take_alloc_error_hook();
    let _ = std::panic::take_hook();
}

fn main() {
    set_hooks();
    let _: Vec<u8> = vec![0; 1_000_000_000_000];
    // This line is never reached in this example, because the line above panics.
    // Call unset_hooks() in your own code when you want to restore the defaults.
    unset_hooks();
}



Cool. :slight_smile:

I knew how to catch panics, but I always wondered what my program would do if it ran out of memory. :smile:

I thought the main reason the standard library chose to translate an OOM into an abort instead of a panic is that the unwinding process (walking the stack, calling destructors, etc.) may in turn try to allocate memory and OOM. That can lead to a double panic (which aborts) or break the rest of your application anyway.

Doesn't your approach directly subvert this?

When an allocation fails, the allocation is never made.
That means that in most cases, there will still be enough memory to unwind the stack, and as it goes along more and more is freed up.

A double panic is indeed possible, but not very likely in practical applications.
Besides, a double panic would end up being no different from an OOM abort, so the worst case is a program that works most of the time and aborts in rare circumstances.


This has been discussed in the Fallible allocation RFC.

  • An OOM during OOM unwinding causing an abort is acceptable. It's a lesser evil than an unconditional immediate abort.

  • Programs don't usually run out of memory literally to the last byte. It's usually a larger allocation or memory fragmentation that triggers it, so there's plenty of smaller chunks of memory available for cleanup and OOM unwinding.


I think the oom=panic option isn't implemented yet, so officially you can't handle out-of-memory errors.

If your application is likely to OOM because of some large allocations you know about, you can use fallible versions of Box and Vec for them. It won't prevent all OOM aborts, but it gives you a chance to handle the worst offenders.

There's also an unsupported, not-recommended option that is Undefined Behavior but kinda happens to work in practice. If you implement your own global allocator and panic! instead of returning null on OOM, it will panic. Some allocator functions are marked as non-unwinding, so Rust has a right to crash your program if you do that, but hey, it was going to crash anyway.

I've previously hacked that together by modifying this little allocator.


I'm wondering if the statement above is usually true. I haven't experimented much, but from what I've seen before (in C++, with a lower ulimit), when I ran out of memory it was always because of Linux overcommit: allocation didn't fail, but accessing the allocated memory did. In that case I wasn't able to allocate anything else, and the "panic handler" could only use its own statically allocated buffer, use fwrite to stream error/log messages, and then abort. There was no opportunity for any unwinding.

In any case, thanks for the links!

Linux doesn't run on 100% of computers yet. Even on Linux, overcommit can be disabled. Even on Linux with overcommit enabled, some obviously-large allocations fail. And even on Linux with overcommit, even for small allocations, a custom Rust allocator (like the aforementioned cap) can make them fail on purpose to enforce a voluntary limit.

When you mentioned that "programs don't usually run out of memory", I thought you had more specific cases in mind. I understand Linux isn't everything, I only shared my experience and was wondering what is yours.

Are you sure that it will not simply optimize out the "unreachable" unwinding path (and so make the program misbehave unpredictably, not just crash)?