Panicking: Should you avoid it?

I agree.

Thing is, if I choose to use [ ] because I "know" the index will never go out of range, then if it does go out of range that is a bug, and a panic! is appropriate. Maybe I "knew" it was OK because of the way I calculate the index, but it turns out I was wrong.

I'm most likely to use [ ] because get() is long-winded and clumsy, and generally I "know" what I want is correct! And I don't want to mess with looking into the returned Option to see what happened.
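For concreteness, a small sketch of the trade-off being described (values invented):

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Indexing: concise, but panics if the index is out of range.
    let x = v[1];
    assert_eq!(x, 20);

    // get(): returns an Option, making the caller look at what happened.
    match v.get(5) {
        Some(n) => println!("got {n}"),
        None => println!("index 5 is out of bounds"),
    }
}
```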


This is really bad and the opposite of Rust philosophy. And very, very bad practice. Because you are practically saying you are too lazy to check whether your application works well, and don't care if it crashes.
If you were developing an autopilot for a car, and because of this philosophy of yours your autopilot panicked and stopped working while driving the car at 100 km/h and it crashed into a bus, you would need to take responsibility for it. If not real responsibility, then at least the mental burden of knowing that your laziness killed many people.

If you are developing a notepad or something similar, there will not be much responsibility if your application crashes, but you risk going bankrupt if someone else takes their job seriously and makes an alternative application which doesn't crash for no reason, and your users move to it.

I personally would use [ ] only in simple cases like the example I have given, and for most other cases would use get(), because I really don't want my application to panic. Especially when it doesn't even need to, because I can use get().

Attitudes need to change a bit when dealing with life safety, sure. But even then, panicking is often the better choice: if your program encounters an unidentified system fault, there's no guarantee that attempting to continue operation won't make the underlying problem worse. Crashing cleanly and letting a completely separate monitor/failsafe system decide how to continue is one of the more robust ways to handle these sorts of unexpected errors.

It's unclear which of these two hypothetical companies will be able to produce the most reliable software over time. The one that panics at the first detection of an abnormal condition will annoy users more in the short term, but may be in a better position to diagnose and fix the problems that arise: If you try to handle an unexpected error, there's a significant chance you'll do it incorrectly¹ and have some kind of corrupted state going forward, which can produce subtler, harder to debug faults.

¹ Because the error is unexpected, any handling of it must necessarily involve an educated guess about the root cause. We can't expect these guesses to be correct 100% of the time.


True, but I don't mean to continue; I mean to not panic.
If there is a problem, you return an error to the caller, which decides how to handle it. If it can't fix it, it returns the error further up the chain.
At some point it becomes possible to decide whether the application needs to abandon the command and return to the state before the command was called, whether it needs to save data and close the application, or some other solution.
And this decision should be taken by code that knows more about the operation, not by the vector.

So, I think panic should be used as little as possible.

This thread's question is: Panicking: Should you avoid it?
And the answer is a big YES. As much as possible. If possible you should not use panic at all. But this depends very much on your application's architecture and hardware.

In 2010 I had a horrible memory leak in AutoCAD. It would even crash Windows after using up all the RAM.

This is absolutely the case, in my experience. I used to work at a place that had previously just HRESULTed most everything, and just propagated the errors up to the caller. That meant things were often broken, but with no way to track down where anything went bad, and there was a ton of never-exercised error handling code that probably didn't work.

It changed to using a CRASH_UNLESS macro, and reliability and productivity increased. Less pointless error-handling code was written for things that never happened. When something did go wrong, it couldn't just be a suppressed assert; it had to be fixed. And that fixing was easier because the crash happened right at the problem, so one could get memory dumps and such showing program state.

If you, as the programmer, think something can't happen, assert it with a panic. Then if you're wrong, go fix the code to handle that situation.
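As a sketch of that advice (the helper and its contract are invented for illustration), one way to encode a "this can't happen" assumption is a panic carrying enough context to fix the bug:

```rust
/// Hypothetical helper: looks up a user's display name from (id, name)
/// pairs. The caller is assumed to only pass ids that exist.
fn display_name(users: &[(u32, &str)], id: u32) -> String {
    users
        .iter()
        .find(|(uid, _)| *uid == id)
        .map(|(_, name)| name.to_string())
        // If this fires, it's a bug in the caller: go fix the code.
        .unwrap_or_else(|| panic!("no user with id {id}; caller violated its contract"))
}

fn main() {
    let users = [(1, "ada"), (2, "grace")];
    assert_eq!(display_name(&users, 2), "grace");
}
```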


As with all things: it depends. Anyone who gives an answer without qualification of in what cases they're considering is at best overly confident at extrapolating their own experience.

BurntSushi's guideline is what I'll typically and happily point people at. Below is generally how I think about panicking. Warning: contains strained analogy of program tasks as ships.

To reiterate what's been said a couple times, the first guideline would be: if it's relatively simple, generally prefer returning a meaningful Option or Result to panicking. This passes the question of how to handle the problem to the caller, who likely has more context than you do about what that failure means. Or at least, they have a bigger-picture view along with whatever context you gave them as the Result::Err case. What this should best look like of course depends heavily on the exact API you're implementing, but -> Result works fairly well for any high-level atomic operation, where either it happened successfully or it didn't happen and you can report what prevented it from happening.
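A minimal sketch of such a high-level atomic operation; the operation and error type here are invented for illustration:

```rust
use std::fmt;

// Hypothetical error type for the example.
#[derive(Debug, PartialEq)]
enum WithdrawError {
    InsufficientFunds { requested: u64, available: u64 },
}

impl fmt::Display for WithdrawError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            WithdrawError::InsufficientFunds { requested, available } => {
                write!(f, "requested {requested} but only {available} available")
            }
        }
    }
}

// Either the whole operation succeeds, or nothing happens and the caller
// learns why; there is no half-applied state.
fn withdraw(balance: &mut u64, amount: u64) -> Result<(), WithdrawError> {
    if amount > *balance {
        return Err(WithdrawError::InsufficientFunds {
            requested: amount,
            available: *balance,
        });
    }
    *balance -= amount;
    Ok(())
}

fn main() {
    let mut balance = 100;
    assert!(withdraw(&mut balance, 30).is_ok());
    assert_eq!(balance, 70);
    assert!(withdraw(&mut balance, 1000).is_err());
    assert_eq!(balance, 70); // untouched on failure
}
```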

The counterpoint to the above (as a category, generally "application error") is programmer error (also "logic error"), where the program has entered an unexpected state. Knowing nothing else, generally the best thing to do in this case is to panic!. If you've been asked to do something which doesn't make sense (e.g. index an object where there is no object at that index), a panic! is in essence a "controlled crash;" depending on application configuration, this should safely take down at least the task which panicked, along with perhaps the thread or entire application. If there's no way to accomplish what's been asked, panicking is the correct outcome.

The alternative which this controlled crash exists to prevent is the result of just attempting to do the thing which doesn't make sense anyway, producing at best unpredictable behavior as the program explores unintended code paths, and at worst the UB of doing things you've promised the compiler, optimizer, and virtual machine you'll never do, and at which point even the debugger may lie to you, because literally all bets are off. You don't want to visit the realm of UB, because nothing makes sense there.

What a panic! means depends on the application. A panic will typically display some sort of debugging message (this behavior is controlled by the panic hook and typically prints a message and optional stack trace to stderr), and the application gets to choose between -Cpanic=abort, in which case a panic aborts the program, and -Cpanic=unwind (the default), where the stack is unwound similar to how exceptions work in other languages, running destructors along the way to perform cleanup. When using -Cpanic=unwind, the application can use catch_unwind to terminate the unwinding process and do some sort of high task-level handling of the panic, such as logging the failure of that task and moving on to the next. If the application doesn't do this, an unwind takes down the thread, and the application as well if (and only if) that was the main thread.
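A small sketch of that task-level handling under -Cpanic=unwind, using std::panic::catch_unwind (the tasks are invented):

```rust
use std::panic;

fn run_task(n: i32) -> i32 {
    if n == 0 {
        panic!("task got an invalid input: {n}");
    }
    100 / n
}

fn main() {
    // Run each task; a panicking task is logged and skipped, the rest proceed.
    let mut results = Vec::new();
    for n in [4, 0, 5] {
        match panic::catch_unwind(move || run_task(n)) {
            Ok(v) => results.push(v),
            Err(_) => eprintln!("task for input {n} panicked; moving on"),
        }
    }
    assert_eq!(results, vec![25, 20]);
}
```

Note that the default panic hook still prints the panic message to stderr before catch_unwind gets control; a real task executor would usually install its own hook or log more quietly.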

For a library, then, you should always strive to provide an API that can be used without risking panics. Panicking APIs are often an ergonomic desire — for relatively simple preconditions like "index is in bounds," it is much simpler[1] not to have to continuously say "no, it's in bounds, I promise, I checked." But as a library, you should always offer at least some way to run a pre-check (where that's reasonable and doesn't suffer from TOCTOU (time-of-check/time-of-use) issues), and ideally a way to get back a Result or Option where the checks along the way are more complicated.
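One common shape for this (the type and method names here are hypothetical) is to pair the ergonomic panicking entry point with a checked one, implementing the former in terms of the latter so the two can't disagree:

```rust
/// Hypothetical ring buffer exposing both checked and panicking reads.
pub struct Ring {
    items: Vec<u8>,
}

impl Ring {
    pub fn new(items: Vec<u8>) -> Self {
        Ring { items }
    }

    /// Checked variant: the caller handles the empty-buffer case.
    pub fn get(&self, index: usize) -> Option<u8> {
        self.items.get(index % self.items.len().max(1)).copied()
    }

    /// Ergonomic variant: panics if the buffer is empty.
    /// Built on top of the checked variant, so they share one definition.
    pub fn at(&self, index: usize) -> u8 {
        self.get(index)
            .expect("Ring::at called on an empty buffer")
    }
}

fn main() {
    let ring = Ring::new(vec![1, 2, 3]);
    assert_eq!(ring.at(4), 2); // wraps around: 4 % 3 == 1
    assert_eq!(Ring::new(vec![]).get(0), None);
}
```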

On the other hand, you should always strive to avoid returning a default value indistinguishable from a successful result. Program design isn't a game of social standing; you don't need to pretend nothing went wrong when you know it did. Nothing is worse than a program saying it's succeeded and hiding the fact that it didn't. At a high application-level loop, it makes sense to take failed tasks, discard them, and move on to the next thing. At any level other than that executor, minimally give your caller the chance to react to the fact you weren't able to accomplish the task you were told to do.

The thing about a panic! is that you're saying that the best response to what's gone wrong is to, well, panic, and just give up on whatever was currently being done. This is absolutely fine in many cases; programs are giant balls of messy state, and it's frankly a miracle that they stay in a reasonably functional one most of the time. If there's no good way to continue doing what you're supposed to be doing, panic!king is the correct course of action, because it would be worse to continue on a sinking ship than to admit it's going down and save what you can. In this strained analogy, that would be by panicking/unwinding, running Drop handlers to clean up state, and allowing other tasks (ships?) to continue without corrupting their state as well. Of course, the other tasks (ships?) need to be prepared for you to panic and not panic themselves when they see your SOS and whatever state shared resources have been left in — lock poisoning exists to protect the other tasks (ships?) from seeing potentially corrupted state caused by you panicking, typically by causing them to panic as well (via the usual .lock().unwrap()).
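A small sketch of the lock poisoning behavior described above:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let shared = Arc::new(Mutex::new(vec![1, 2, 3]));

    // A thread panics while holding the lock, mid-update.
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            let mut guard = shared.lock().unwrap();
            guard.push(4);
            panic!("something went wrong while the data was half-updated");
        })
    };
    let _ = handle.join(); // the thread panicked; ignore the join error

    // The mutex is now poisoned: lock() returns Err, so the other
    // "ships" notice the SOS instead of silently using the data.
    assert!(shared.lock().is_err());

    // A caller with reason to believe the data is still usable can
    // recover it explicitly from the poison error.
    let data = shared.lock().unwrap_or_else(|poisoned| poisoned.into_inner());
    assert_eq!(*data, vec![1, 2, 3, 4]);
}
```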

But in other cases, an error isn't a panicking matter; it's within expected operating procedure, and you should record it and move on. In these cases, panicking is often an overreaction. But also, if what you're writing is just a one-shot CLI application, then a quick panic and exit can be a functional way to handle most errors, because the user at the CLI is better equipped to handle whatever it is (so long as you give them sufficient context).

What matters as a takeaway is that it always depends. All else being equal, it's generally preferable to give your caller more options by giving them a Result, but the situations where panicking is appropriate abound; all else is rarely equal.

  1. With my paranoid curmudgeon hat on, if you provide the panicking option, that also lessens the likelihood some over-confident idiot will take the Option version and use unsafe to unwrap_unchecked it, upgrading a logic error into a safety issue in the name of performance because "I checked that, I promise, no need to check my work." I've been that idiot before and will be again — it's for this reason I highly recommend that any O(1)-checkable unsafe precondition be checked under cfg(debug_assertions) internally to any _unchecked APIs. Ideally with a noisy ⚠️💥 emoji-laden panic screaming that this would've been UB in a release build. ↩︎
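A sketch of that recommendation (the function name is hypothetical): an _unchecked API that still verifies its precondition in debug builds:

```rust
/// Hypothetical unchecked accessor over a lookup table.
///
/// # Safety
/// `index` must be less than `table.len()`.
unsafe fn table_get_unchecked(table: &[u32], index: usize) -> u32 {
    // In debug builds, loudly catch what would be UB in release builds.
    #[cfg(debug_assertions)]
    if index >= table.len() {
        panic!(
            "⚠️💥 table_get_unchecked: index {index} >= len {}; \
             this would be UB in a release build",
            table.len()
        );
    }
    // SAFETY: the caller promised index < table.len().
    unsafe { *table.get_unchecked(index) }
}

fn main() {
    let table = [7, 8, 9];
    // SAFETY: 1 < 3.
    let x = unsafe { table_get_unchecked(&table, 1) };
    assert_eq!(x, 8);
}
```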


You say

But this is literally what all the "do panic" people are saying too: the case where you're "110% guaranteed" that something won't happen, but it happens anyway, is the only situation we care about; returning an error when there's an error is hardly controversial!

So the question is more about what "guarantee" means here: internally validated state that means you've made a mistake? The OS or hardware returning "impossible" results? Misuse of your library API that means your caller has made a mistake? All of these are arguably situations that should never happen, and mean there's little point in trying to continue, as clearly someone has screwed up and all bets are off as to what's going to happen now. A panic carrying the exact unexpected situation at the exact point of the problem is likely to be more helpful in fixing it than returning a possibly generic description of the problem to someone who has no idea what your code is even doing (again, arguably).

The trivial answer to the title of the thread is "yes, of course you should avoid having bugs", because panics exist to help you avoid having bugs. I haven't yet seen an anti-panic argument as to what you should do when you clearly do have a bug somewhere, only assumptions that you're panicking in a non-bug situation.


There are two schools I've seen.

The unfortunately larger and louder group carry the old C ethos of "just don't have bugs." I think it's fair to say that this is at the very best impractical.

But there does exist a significantly smaller group, who aren't concerned with "no panics" as much as they are with program correctness in general. This is the group of people perhaps most willing to utilize proof assistants to eliminate classes of bugs from software systems.

Notably, this group is generally fine with panicking paths existing, so long as they have some way of verifying those paths aren't taken given some assumed operating conditions.

The case I'm most aware of at the moment, just due to exposure, is the kernel. The OS kernel has some unique constraints that nobody else has, because the kernel is the last point of contact that exists. A serial debugging port? The kernel is what's in charge of actually writing to that port and not bricking your silicon if the connection is intermittent.

The kernel is in the unique position where it just does not want to ever stop stumbling forward (so long as it can prevent permanent physical damage, at least). A heavily corrupted diagnostic on serial output is miles better than a device completely unresponsive to any input.

The proper "never panic" position (that isn't just unreasonable ego bait) says there's always some safe default behavior. Null propagation or what have you; if the input state is garbage the output garbage will be, but it will be exactly as garbage as the input but no more.

Rust is, perhaps oddly, too high level for this to always be a functional operating mode. With more C language style designs, there's always some "null object" which can be used as a default. Sometimes it's a literal nullptr for pointer structures; typically it's the zero initialized form of a structure for simplicity. But there's always something you can return (and hope the caller handles you returning). The robust C software manner is to handle getting arbitrary garbage, and produce something perhaps similarly arbitrary but hopefully not more garbage.

Rust's type system doesn't want to work in that way, though. The type system is designed to provide guarantees that data is in a specific shape, using Option or Result or other enums to describe cases where the "null/error value" is needed, but exclude it where it isn't.

If people are familiar with monadic thinking, it could be useful to think of Rust operating under a global MayPanic monad; the normal happy path operates normally, but any code has the ability to panic! and exit that monad (e.g. via unwinding) because normal execution failed.

Under a more C-like model, that concern is spread all throughout the code — everywhere needs to consider the possibility of an unexpected execution, and contain code to handle and continue on with a lack of meaningful information.

Rust's model is that doing so is impractical and a lot of busywork for little benefit. In the odd unforeseen[1] case where there isn't meaningful information to fill the type system we've built for ourselves, it's acceptable to panic! and fall back onto some orchestrator to handle the mess.

With normal -Cpanic=unwind Rust execution, that's an unwind back to the application loop, or even out of main. But it's also Erlang-style watchdog programs restarting the main server, or even more granularly just retry loops on some potentially flaky IO operation.

The difference is utilizing the correct error recovery mechanism for the problem at hand. Most of the time it's most appropriate to report to your superior that "shit's fucked, yo." Sometimes it's panicking and letting the orchestrator deal with the mess. And sometimes, though relatively rarely, it's legitimately to "be the better actor," as it were, and try to make sense of the garbage you've potentially been handed. The most important thing, though, above all else, is that your strategy is deliberate, and that the people/systems who deal with you know which strategy you're going to take. Nothing is worse than mismatched expectations.[1:1]

This is a fun and surprisingly deeply accurate analogy — consider a manager giving an office worker a task, then that worker unexpectedly just panicking halfway through, or silently continuing working with information they've determined is bad because they don't want to cause a fuss; both cases illustrate well the failure point of mismatched expectations, though it's a bit harder to construct an analogous real-world example where expecting panicking[2] or silent acceptance[3] of bad data is an appropriate expectation — but I've made the point I wanted to make, so I'll leave it off there for y'all to ponder.

  1. This paragraph is pretty shameless QotW bait. At least it's potentially helpful QotW bait. ↩︎ ↩︎

  2. Perhaps, in a gamedev environment (since that's what I'm most intimately familiar with), it's (sometime) appropriate for me to go straight to the leads (catch_unwind) to say some task I've been told to work on doesn't make sense, or isn't scoped in a proper way, etc., and move to the next task, rather than try to directly correspond with the task requestor. Good production/management is trying to minimize where that might be necessary/desired. ↩︎

  3. All of the examples I can think of here boil down to "that's what the client requested and they refuse to reconsider, so just build what they've asked for." ↩︎


Just curious. Do you make sure to reserve enough space before you push() to a Vec, to prevent OOM? (try_reserve() or something)
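For reference, a sketch of what that could look like with Vec::try_reserve, which returns a Result instead of aborting the process on allocation failure:

```rust
use std::collections::TryReserveError;

// Push a batch of items, reporting allocation failure instead of aborting.
fn push_all(dest: &mut Vec<u64>, src: &[u64]) -> Result<(), TryReserveError> {
    dest.try_reserve(src.len())?; // fallible allocation up front
    dest.extend_from_slice(src); // capacity is reserved, so this can't allocate
    Ok(())
}

fn main() {
    let mut v = vec![1, 2];
    push_all(&mut v, &[3, 4, 5]).expect("allocation failed");
    assert_eq!(v, vec![1, 2, 3, 4, 5]);
}
```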

I sense self contradiction in what you are saying about panicking vs handling the error:

Previously you stated "it mostly depends if your application can fix problem or can ignore it" and "Everything comes to architecture of your application."

Now you categorically state "[panicking] is really bad and opposite Rust philosophy" and "very, very bad practice".

I certainly did not say that I don't care if my application crashes. On the contrary. What I was getting at is that sometimes immediately dying with a panic! is the only way out when something unexpected happens, there is no way to handle the error in code that can make the situation better.

You presented the example of an automotive application; we are talking about safety-critical software here. Often in safety-critical systems, if an unexpected error is detected, the software cannot produce the correct result: the system has failed. There is no alternative thing to do. Even if you do try to handle a Result, what would such handling do?

Having worked on safety-critical systems for a couple of decades, from process control systems to the Boeing 777 Primary Flight Controls, this is the way I have seen it done. Of course safety-critical systems also have other measures in place, from hardware watchdogs to multiple redundant processors.


This is a good read about panic To panic! or Not to panic!

This is a bad place to use panic, because you don't want to crash your app just because the user entered 200:

pub struct Guess {
    value: i32,
}

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 || value > 100 {
            panic!("Guess value must be between 1 and 100, got {}.", value);
        }

        Guess { value }
    }

    pub fn value(&self) -> i32 {
        self.value
    }
}
You want to inform the user about the bad entry and let them enter something else. Also, the user entering 200 is not a bug; it is wrong use of the app.
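A sketch of the non-panicking alternative being suggested: have Guess::new return a Result so the caller can report the bad entry and re-prompt instead of crashing:

```rust
pub struct Guess {
    value: i32,
}

impl Guess {
    /// Returns Err for out-of-range input so the caller can re-prompt
    /// the user instead of crashing the app.
    pub fn new(value: i32) -> Result<Guess, String> {
        if value < 1 || value > 100 {
            return Err(format!(
                "Guess value must be between 1 and 100, got {value}."
            ));
        }
        Ok(Guess { value })
    }

    pub fn value(&self) -> i32 {
        self.value
    }
}

fn main() {
    assert!(Guess::new(200).is_err()); // user mistake: reported, not panicked
    assert_eq!(Guess::new(42).unwrap().value(), 42);
}
```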

In the end you need to choose the best option for your application, and applications written by new programmers are simple and have few places where they should panic.
Boeing flight controls are not simple applications; they are complex systems, and as ZiCog said, a panic doesn't shut down the whole system, it only crashes one process, and other processes or hardware recover from it and continue working. In this case a panic is more like a Result, where part of the calculation is discarded and recovered from. It is not the same as a panic in a simple application, where the application closes and nothing is left.
If you are working on a multi-threaded application, and a panic in one thread will not kill the application itself but only that thread, and you are sure it is impossible to recover the data from that thread, it is OK to panic.

As I have mentioned many times, it all depends on your application and on the architecture of the application or system. Let's take a server as an example. If a server like Facebook crashed and stopped working just because an index from one user was out of bounds, and the administrators needed some time to start the server again before users could use Facebook, that would be really bad. Even if Facebook has code that can panic, only one process would panic, and it would not crash the whole server.

panic has different meanings for a new programmer and a professional programmer. The professional knows ways to recover from a panic and where to panic; the new programmer doesn't yet.

I imagine Facebook is more like a Boeing 777 PFC. They have hundreds, thousands of servers. If one panics and goes down the user still sees another. I have read that at Google they don't even bother to get a dead server up in a hurry, it's hardly noticeable that it has dropped out.

I observe that the Linux kernel has a panic. If it gets into a situation it cannot handle it just stops and spits out a stack trace or whatever. I'm no Linux kernel author but I guess my view of things comes from working on many systems where, like the kernel, there is no place to go in face of many errors, the only thing to do is stop before you make things worse!

This was certainly the case when I worked there many years ago; I can only imagine that it's gotten more true since I left. In order to test all these recovery systems, they'll unplug an entire datacenter several times a year just to make sure that everything keeps working. From the users' perspective, there's nothing to see during these events.


With the error handling library I use, snafu, one can have it take a backtrace when a Result::Err is constructed (and include it in the Err). Do the more common anyhow/thiserror not allow that?

I acknowledge that that still wouldn't directly provide

(I realize that the first quotation is not about Rust, but I assume it's intended to apply to Rust as well, given that this thread is about error handling in Rust.)

Note that when I said "HRESULT" I meant specifically HRESULT - Wikipedia. It's an old C thing that doesn't usually carry useful context. Certainly in Rust it's possible to do more by using a more complex error type.

But remember I'm also talking about the place where you, as the programmer, suspect it can't actually fail. If you know for sure it can fail, then absolutely return an error with nice context, and have a test for the error handling.

Whereas if you've never seen it fail -- despite some basic testing attempting to make it fail -- don't bother trying to think about what you might need if it ever fails. Just let it panic, then if it ever does you'll know how it can fail and have a much better idea of what context would be useful to add. And if it never panics, then great, you didn't write a bunch of useless code to add context that'll never be seen.

Said otherwise, if you'll need to make a new build because of it, just let it panic, and get the associated bug report.


I was not saying anything about when and whether one should or should not panic. I was questioning only the implication in the two posts I quoted that a particular advantage of panicking over returning Err is that panicking provides a backtrace to point to the spot at which the panic started, seemingly implying that returning an Err can't do the same. Although your reply quotes my actual question, I don't think it addresses that question.

This is a very interesting discussion! Perhaps there is a consolidated "rule of thumb" of sorts, that I could remember when programming?

Yes, as has been said many times: correct Rust programs shouldn't panic.


But just to be clear: "not panicking" is not the same thing as "doesn't include any panics"


I guess it's too late to change the design of Rust, but I think panic!() should always report diagnostics (if at all possible) and then loop { std::thread::sleep(Duration::from_secs(1)) } forever. That way you could extract any information you need from the process (because it's not dead yet and the stack contains everything there was at the moment of the panic), but it definitely will not do anything else. And in the case of multithreaded code, the current thread wouldn't be overwriting any memory. An alternative solution would be to send signal 9 to self (that's the POSIX way of killing the current process without any cleanup) and loop forever while waiting for the OS to complete the removal of the process.
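Something close to this behavior can actually be approximated today with a custom panic hook; a sketch (the freeze-forever policy is this proposal, not standard practice):

```rust
use std::panic;
use std::thread;
use std::time::Duration;

fn install_freeze_on_panic_hook() {
    let default_hook = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
        // First report diagnostics using the default hook...
        default_hook(info);
        // ...then freeze this thread forever, so the process stays
        // inspectable (stack, memory) instead of unwinding or exiting.
        loop {
            thread::sleep(Duration::from_secs(1));
        }
    }));
}

fn main() {
    install_freeze_on_panic_hook();
    // panic!("demo"); // uncommented, this would print and then hang forever
    println!("hook installed");
}
```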

And if you ever write panic!(...) in your code, that's the behavior you explicitly want. It would also match the logical behavior of panic() in Linux kernel.

If you want to save user data, you have to do snapshots while the system is still in known good state. It's not safe to wait for panic and then try to supposedly / hopefully avoid losing data.

Had panic!() been defined this way, nobody would ever try to use it for real "error handling". Instead, everybody would be returning Option or Result and using the "?" shorthand everywhere unless you truly want to panic for real.

The explicit "?" in many places would add some extra clutter to source code but it would add an explicit mark to every possible place that can fail.

For example, I'd prefer that instead of a = b * c; I would need to write a = b * c?; (or maybe the syntax should be a = b *? c;?) to be explicit about the fact that the multiplication may overflow and the caller must handle that case. Of course, if I want to handle that case, I cannot use the shorthand. I think the C++ mechanism where literally anything can throw an exception is really powerful when used correctly (basically RAII everywhere) but without explicit markup in the code, you can never have any idea how many things can throw if you're reading the code written by somebody else. With explicit markup, seeing e.g. 13 "?" shorthands in a couple of lines of code, you'd be instantly aware that the code is probably very error prone.
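In today's Rust, that explicit-overflow style is available opt-in through the checked_* arithmetic methods, which is roughly the a = b * c? idea spelled out:

```rust
// Multiply two values, propagating overflow to the caller instead of
// panicking (debug builds) or wrapping (release builds).
fn area(width: i32, height: i32) -> Option<i32> {
    width.checked_mul(height) // None on overflow: the "b * c?" idea
}

fn main() {
    assert_eq!(area(6, 7), Some(42));
    assert_eq!(area(i32::MAX, 2), None); // overflow is an explicit case
}
```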

Of course, with the type system one could have been used to add a lot of optimizations. For example, the system could have had SmallI32 type which would have been stored as i32 but it's known to be small enough to not overflow when multiplied with another SmallI32. One could write a = b * c; where b and c are of type SmallI32 and a would be i32 without any runtime checking. I think this idea would be very similar to NonZeroU32.
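A sketch of that hypothetical SmallI32 as a newtype (no such type exists in the standard library; the bound is chosen so products of two values cannot overflow i32):

```rust
/// Hypothetical: an i32 known to be in 0..=46340, so that the product of
/// any two SmallI32 values fits in i32 (46340 * 46340 < i32::MAX).
#[derive(Clone, Copy, Debug, PartialEq)]
struct SmallI32(i32);

impl SmallI32 {
    const MAX: i32 = 46_340;

    /// The only fallible step: proving the value is small at construction.
    fn new(v: i32) -> Option<SmallI32> {
        (0..=Self::MAX).contains(&v).then_some(SmallI32(v))
    }

    /// Infallible by construction: no runtime overflow check needed here.
    fn mul(self, other: SmallI32) -> i32 {
        self.0 * other.0
    }
}

fn main() {
    let b = SmallI32::new(1000).unwrap();
    let c = SmallI32::new(2000).unwrap();
    assert_eq!(b.mul(c), 2_000_000); // checked once at the boundary, not here
    assert_eq!(SmallI32::new(100_000), None); // too big to guarantee safety
}
```

This mirrors the NonZeroU32 idea mentioned above: the invariant is established once at construction, and the type system carries it to every use site.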

With such a design, everything that even potentially needs to allocate RAM or do any other stuff that might potentially fail would need to return Result instead of returning something else and more or less randomly panic!()ing.

However, I believe this could have been implemented without a huge overhead because the compiler would be aware of this pattern and basically do the same thing as corresponding well written C code would do: run the code and check the return value. Then unwrap() could have been unsafe and it would have always returned the Ok value (or the memory range that would have contained the Ok value if everything actually was successful) so it could have been used in performance critical code where it's somehow already known that the call cannot possibly fail. Of course, if compiler could compute at compile time that the function call cannot possibly fail, the error handling and error checking would naturally be totally optimized away without using any unsafe code so using Result instead of pure data as return value would have caused zero overhead whenever possible.

And it would have been great if Err could contain full stack trace of the code location that emitted it but that might make sense for debug builds only to avoid having too much overhead for all use of Result type.