Panicking: Should you avoid it?

I agree with others that your question is a little under-specified. But this question, and many similar ones, is why I wrote this blog post: Using unwrap() in Rust is Okay - Andrew Gallant's Blog

(From what I can tell, it seems pretty consistent with the advice you've gotten so far. But it spells it out in more detail and with real world examples.)


Heyo everyone, thank you for the replies! I will have to think on this.

In response to the "what do you mean by avoiding?" questions: I apologize. I am a beginner and will be prone to these mistakes, as I only barely know what I'm talking about lol.

To answer: in my initial question I meant "avoiding programmer errors that cause panicking", to quote jbe.

As others have said, panics should be reserved for unrecoverable bugs. So your question becomes "how do I avoid bugs". Most commonly you write tests. Ordinary unit and integration tests are the most common ones.

Fuzzing can help to automatically generate random inputs, which are quite likely to hit various unlikely edge cases. Different fuzzing techniques typically hit different edge cases, and not every problem is amenable to fuzzing (e.g. if you have some internal consistency checks on the data, like a checksum, then you won't normally generate the interesting cases).

Property testing is easier to use and to understand than general fuzzing. I commonly write proptests instead of fixed unit tests. Generating the data may require implementing the Arbitrary trait, or composing the proper data pipeline.
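The property-testing idea can be sketched without any external crates. In real code you would likely reach for the proptest or quickcheck crates (whose generators replace the tiny hand-rolled one below); the encode/decode pair here is a hypothetical stand-in for the code under test:

```rust
// Minimal property-test sketch without external crates. In practice
// use `proptest` or `quickcheck`; a small LCG stands in for their
// input generators here.

/// Hypothetical function under test: it should never panic and
/// should round-trip for any input.
fn encode(n: u32) -> String {
    format!("{n:08x}")
}

fn decode(s: &str) -> Option<u32> {
    u32::from_str_radix(s, 16).ok()
}

/// Simple linear congruential generator for pseudo-random inputs.
fn lcg(state: &mut u64) -> u32 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    (*state >> 32) as u32
}

fn main() {
    let mut state = 42;
    for _ in 0..10_000 {
        let n = lcg(&mut state);
        // Property: decode(encode(n)) == n for every n.
        assert_eq!(decode(&encode(n)), Some(n));
    }
    println!("property held for 10000 random inputs");
}
```

The point of a property test over a fixed unit test is that you state an invariant once and let generated inputs probe the edge cases for you.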

Formal verification is harder to use, but can give you stronger guarantees. I have some mildly positive experience eliminating runtime panics with the Kani model checker. Prusti is another popular alternative. Generally, different tools have different limitations, so you should try to find the one that works best for you.


I'm wondering what you mean there? Surely if the code is "thoroughly reviewed" we would expect it not to have UB; that is the point of the review, right? And if it still does, a panic would be nice, rather than random behaviour. I can't imagine accepting UB as being OK in any circumstance.


It is ok to panic on unhandleable errors, but usually not ok when you can actually handle your error.

P.S. Looks like my reply is pretty useless…

My impression is that if you have an unhandleable error, for example an out-of-memory error, then it's best to quit immediately. Things are out of your control. That is a case for a panic.

Many other panics show you have a bug in your code, for example a divide-by-zero error or an out-of-bounds array access, in which case it's better to quit immediately with a panic. Then you know you have a bug to fix and where it is happening.

When thinking about errors it's useful to forget about the error-free "happy path": think about all the error/failure paths and design your program to respond to them as you want. Those error paths deserve as much attention, perhaps more, in the design of your program as the job you actually want to do.

To your actual question: consider opening a file for reading that does not exist. Something like fs::read_to_string() returns a Result which may contain an Error. That error may be fatal, say when it occurs while reading a required configuration file, so perhaps it's best to check for that error, print a suitable message, and exit the program, though not with a panic. Or it might be recoverable, say the user typed a file name incorrectly, so you may want to prompt for a new file name and try again.
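A minimal sketch of that approach (the file name and messages are illustrative):

```rust
use std::fs;
use std::io::ErrorKind;

/// Try to load a config file; distinguish "missing" from other I/O
/// errors instead of panicking, so the caller can react appropriately.
fn load_config(path: &str) -> Result<String, String> {
    match fs::read_to_string(path) {
        Ok(contents) => Ok(contents),
        Err(e) if e.kind() == ErrorKind::NotFound => {
            Err(format!("required config file '{path}' not found"))
        }
        Err(e) => Err(format!("could not read '{path}': {e}")),
    }
}

fn main() {
    // Exit with a message rather than a panic when a required file
    // is missing.
    match load_config("definitely-missing.toml") {
        Ok(cfg) => println!("loaded {} bytes", cfg.len()),
        Err(msg) => {
            eprintln!("error: {msg}");
            // A real program would call std::process::exit(1) here;
            // omitted so the example runs to completion.
        }
    }
}
```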


Note the "in case of a programming error", i.e. the review should avoid a programming error, but by writing unsafe, we tell the compiler that in case of a programming error, we will cause UB.

That's what I meant. Not that UB is good or should happen. But if we use unsafe, it may happen in case of programming errors (e.g. instead of panics).

Isn't that just the definition of unsafe? Code should be marked as unsafe when mistakes in it lead to UB (which is by definition worse than a panic because it's unpredictable), but as compensation you get “superpowers”.


To be more precise, this only applies to unsafe { … } blocks as well as unsafe impl, not the uses of the same token “unsafe” with the dual meaning, i.e. unsafe fn[1] and unsafe trait.

Also, it’s usually not true that the code where mistakes can lead to UB is limited to the code inside of unsafe blocks (or unsafe impls); commonly the unsafe block is only used directly where the “superpowers” are needed, whereas non-unsafe code around it – or in other places – can be relevant for safety of those unsafe blocks, too. This can even apply e.g. to (non-unsafe) standard library code… one might write an unsafe block whose soundness might rely on the well-behavedness (i.e. bug-freeness, even if those bugs were just “logic errors” and not direct UB in and by themselves) of standard library collection types… by using a Vec or VecDeque or HashMap and expecting those to be well-behaved.

  1. well… those do both, wrapping their bodies into an implicit unsafe { … } block, too ↩︎


I am also new to Rust and programming is just a hobby :slight_smile:
So my answer may not be correct from a practical programming point of view, but my 2 cents would be:
Don't panic if you can return error.

Most of the time everything depends on your situation and application.

A few answers here said you should panic if you try to index out of bounds. But I would say you don't need to panic; you need to return Option::None. Rust has a very good data type, std::option, so if an out-of-bounds situation is possible you just wrap everything in an Option and return None on error.
Or you can return a std::result if there can be several different errors, not just out-of-bounds indexing.

In this case the user of the code will need to make a decision: panic, fix the problem, or return a std::result with the error to a higher caller.
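A minimal sketch of that idea, using a hypothetical Ring wrapper around a Vec (names are illustrative):

```rust
/// Toy container illustrating the point: indexing mistakes yield
/// `None` instead of a panic, so the caller decides what to do.
struct Ring {
    items: Vec<i32>,
}

impl Ring {
    /// Non-panicking accessor; out-of-range access returns None.
    fn get(&self, index: usize) -> Option<&i32> {
        self.items.get(index)
    }
}

fn main() {
    let r = Ring { items: vec![10, 20, 30] };
    assert_eq!(r.get(1), Some(&20));
    assert_eq!(r.get(9), None); // caller handles this; no panic
    println!("out-of-bounds access handled without panicking");
}
```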

A panic also closes your application instantly, but your application may be able to save user data before closing; in that case panicking makes the user lose data. It is very annoying, I know. I use AutoCAD and it often closes for no reason and I lose half a day's work. They have changed it a little recently, and it more often lets you save the drawing before closing the application.

Even out-of-memory problems can sometimes be solved, depending on the application. For example, say you have a multi-document application like AutoCAD. One document takes 1 GB of RAM and the PC has 8 GB. If your application opens 8 documents and then tries to open a 9th, you will get an out-of-memory error, but in this situation it would be foolish to panic and close the application, losing all the unsaved data of the other 8 documents. It would be much better to just abort opening or creating the new document, because there is nothing wrong with the other documents; you simply don't have memory for a new one. If the user saves and closes one or more documents, you can open a new document.

So, it mostly depends on whether your application can fix the problem or ignore it. If your application is a multi-document application: is the problem related to a specific document or to the whole application? And many more questions.
It all comes down to the architecture of your application.


Wise words. Worth far more than 2 cents.


I have read "the Book", "Rust By Example" and many other books on the internet, and in one of them the author wrote:
Use panic only in tests; your release code should have zero panics. Your test code can have as many panics as you want/need.

How I view panic in release code:
You work with a knife. A client comes to you and accidentally cuts his finger off. You don't know how to reattach his finger or stop the bleeding. So you panic, take your gun, and shoot your client in the head. But just outside your shop there were medics who could not only have stopped the bleeding but even reattached his finger. You decided that if you can't fix the problem, no one can, so you killed him.

Don't take the responsibility; return it to whoever placed the order. He may be able to fix it. You are only responsible for your own job: if you can't do the job with the data the client gave you, just tell the client; don't kill the client because of bad data.

I mostly agree.

To my mind, if I do something like index off the end of an array, that is a bug in my program. Surely I did not intend or expect to do that. That bug needs fixing. My program is now in an indeterminate state. It seems better to have a panic! kill the thing instantly. Likewise for other bugs that cause a panic!.

I will claim that every run of my program is a test. Even if it is by an unknown user, in unknown circumstances, years later. It's another test case. And so if you say "Use panic only in tests", that supports my above argument.

No doubt one can contrive circumstances where that is not true. But as you say, "everything depends on your situation and application".

For indexing, std::vec::Vec is a very good example.

You provide two or more indexing methods: one fast one that panics, and one or more that return an Option.

In this case the user uses the method that can panic, but he is 100% guaranteed his code will not panic:

let v = vec![0, 2, 4, 6];
let mut index = 0;

while index < v.len() {
    println!("{}", v[index]);
    index = index + 1;
}

If the user can't guarantee his code won't go out of bounds, he will use the get() method, receive an Option, and be able to check whether he has data or None. In both cases there will be no panic, even though the first method can panic. So you don't kill someone who can be saved.

If you only use the get() method and never use [] indexing, it will be impossible for your code to panic even if you try to get an out-of-bounds value. So there really is no reason to panic.

If you for some reason use [] indexing believing you will never go out of bounds, but you do, then that panic is not an intentional panic.

So, if you are using Vec you have a choice: you can use [] indexing, which can panic, but if you only use it in places where you are 110% guaranteed not to go out of bounds, it will never panic. If you don't have that guarantee, you should use the get() method and check the result; then you are again guaranteed not to panic.
If you are making your own struct with indexing, you implement both methods, and the user can decide to use the one that can panic or the one that doesn't. In the end it is the user's choice, not yours.
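Vec's two-API convention can be mirrored on your own type. A minimal sketch, with a hypothetical Samples wrapper (names are illustrative):

```rust
use std::ops::Index;

/// Hypothetical wrapper showing the two-API convention: `Index`
/// panics on a bad index, `get` returns an Option, mirroring slices.
struct Samples(Vec<f64>);

impl Samples {
    /// Non-panicking accessor, like slice::get.
    fn get(&self, i: usize) -> Option<f64> {
        self.0.get(i).copied()
    }
}

impl Index<usize> for Samples {
    type Output = f64;
    /// Panicking accessor, like slice indexing.
    fn index(&self, i: usize) -> &f64 {
        &self.0[i]
    }
}

fn main() {
    let s = Samples(vec![1.5, 2.5]);
    assert_eq!(s[0], 1.5);      // panics if out of range
    assert_eq!(s.get(5), None); // never panics
    println!("both accessors behave as expected");
}
```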

I agree.

Thing is, if I choose to use [ ] because I "know" the index will never go out of range, then if it does, that is a bug and a panic! is appropriate. Maybe I "knew" it was OK because of the way I calculate the index, but it turns out I was wrong.

I'm most likely to use [ ] because get() is long-winded and clumsy, and generally I "know" what I want is correct! And I don't want to mess with looking into the returned Option to see what happened.


This is really bad, the opposite of Rust philosophy, and very, very bad practice, because you are practically saying you are too lazy to check whether your application works well and don't care if it crashes.
If you were developing an autopilot for an automobile, and because of this philosophy your autopilot panicked and stopped working while driving the car at 100 km/h and it crashed into a bus, you would need to take responsibility for it. If not real responsibility, then at least the mental responsibility of knowing that your laziness killed many people.

If you are developing some notepad or something similar, there will not be much responsibility if your application crashes, but you risk going bankrupt if someone else takes his job seriously and makes an alternative application which doesn't crash for no reason, and your users move to his application.

I personally would use [ ] only in simple cases like the example I have given, and for most other cases would use get(), because I really don't want my application to panic. Especially when it doesn't even need to panic, because I can use get().

Attitudes need to change a bit when dealing with life safety, sure. But even then, panicking is often the better choice: if your program encounters an unidentified system fault, there's no guarantee that attempting to continue operation won't make the underlying problem worse. Crashing cleanly and letting a completely separate monitor/failsafe system decide how to continue is one of the more robust ways to handle these sorts of unexpected errors.

It's unclear which of these two hypothetical companies will be able to produce the most reliable software over time. The one that panics at the first detection of an abnormal condition will annoy users more in the short term, but may be in a better position to diagnose and fix the problems that arise: If you try to handle an unexpected error, there's a significant chance you'll do it incorrectly¹ and have some kind of corrupted state going forward, which can produce subtler, harder to debug faults.

¹ Because the error is unexpected, any handling of it must necessarily involve an educated guess about the root cause. We can't expect these guesses to be correct 100% of the time.


True, but I don't mean to continue; I mean to not panic.
If there is a problem, you return an error to the caller and he decides how to fix it. If he can't fix it, he returns the error further up the chain.
At some point it will be possible to decide whether the application needs to drop the command and return to the state before the command was called, whether it needs to save data and close, or some other solution.
And this decision should be made by code that knows more about the operation, not by the vector.

So, I think panic should be used as little as possible.

This thread's question is: Panicking: Should you avoid it?
And the answer is a big YES. As much as possible. If possible you should not use panic at all. But this depends very much on your application architecture and hardware.

In 2010, AutoCAD gave me a horrible memory leak. It would even crash Windows after using all the RAM.

This is absolutely the case, in my experience. I used to work at a place that had previously just HRESULTed almost everything and propagated the errors up to the caller. That meant things were often broken, but with no way to track down where anything went bad, and there was a ton of never-exercised error-handling code that probably didn't work.

It changed to using a CRASH_UNLESS macro, and reliability and productivity increased. Less pointless error-handling code was being written for stuff that never happened. When something did go wrong, it couldn't just be a suppressed assert; it had to be fixed. And that fixing was easier because it crashed right at the problem, so one could get memory dumps and such showing program state.

If you, as the programmer, think something can't happen, assert it with a panic. Then if you're wrong, go fix the code to handle that situation.
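A minimal sketch of that advice, with a hypothetical state machine standing in for the "can't happen" case:

```rust
/// Hypothetical state machine step: the caller guarantees `state` is
/// one of 0, 1, or 2, so any other value is a bug, and we assert
/// that loudly instead of silently returning garbage.
fn next_state(state: u8) -> u8 {
    match state {
        0 => 1,
        1 => 2,
        2 => 0,
        // "Can't happen": make the assumption explicit with a panic.
        other => unreachable!("invalid state {other}: caller broke the invariant"),
    }
}

fn main() {
    assert_eq!(next_state(0), 1);
    assert_eq!(next_state(2), 0);
    println!("state machine cycles as expected");
}
```

If the unreachable! ever fires, you know exactly where the invariant was broken, and you go fix the caller.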


As with all things: it depends. Anyone who gives an answer without qualification of in what cases they're considering is at best overly confident at extrapolating their own experience.

BurntSushi's guideline is what I'll typically and happily point people at. Below is generally how I think about panicking. Warning: contains strained analogy of program tasks as ships.

To reiterate what's been said a couple of times, the first guideline would be: where it's relatively simple, generally prefer returning a meaningful Option or Result to panicking. This passes the question of how to handle the problem to the caller, who likely has more context than you do about what that failure means. Or at least, they have a bigger-picture view along with whatever context you gave them as the Result::Err case. What this should best look like of course depends highly on the exact API you're implementing, but -> Result works fairly well for any high-level atomic operation, where either it happened successfully or it didn't happen and you can report what prevented it from happening.

The counterpoint to the above (as a category, generally "application error") is programmer error (also "logic error"), where the program has entered an unexpected state. Knowing nothing else, generally the best thing to do in this case is to panic!. If you've been asked to do something which doesn't make sense (e.g. index an object where there is no object at that index), a panic! is in essence a "controlled crash;" depending on application configuration, this should safely take down at least the task which panicked, along with perhaps the thread or entire application. If there's no way to accomplish what's been asked, panicking is the correct outcome.

The alternative which this controlled crash exists to prevent is the result of just attempting to do the thing which doesn't make sense anyway, producing at best unpredictable behavior as the program explores unintended code paths, and at worst the UB of doing things you've promised the compiler, optimizer, and virtual machine you'll never do, and at which point even the debugger may lie to you, because literally all bets are off. You don't want to visit the realm of UB, because nothing makes sense there.

What a panic! means depends on the application. A panic will typically display some sort of debugging message (this behavior is controlled by the panic hook and typically prints a message and optional stack trace to stderr), and the application gets to choose between -Cpanic=abort, in which case a panic exits/aborts the program, and -Cpanic=unwind (the default), where the stack is unwound similar to how exceptions work in other languages, running destructors along the way to perform cleanup. When using -Cpanic=unwind, the application can use catch_unwind to terminate the unwinding process and do some sort of high task-level handling of the panic, such as logging the failure of that task and moving on to the next. If the application doesn't do this, an unwind takes down the thread, and the application as well if (and only if) that was the main thread.
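A sketch of that task-level handling with catch_unwind (the tasks are hypothetical, and this relies on the default -Cpanic=unwind):

```rust
use std::panic;

/// Run each task, catching panics so one buggy task doesn't take
/// down the whole loop (requires -Cpanic=unwind, the default).
fn run_all(tasks: &[fn() -> i32]) -> Vec<i32> {
    let mut results = Vec::new();
    for (i, task) in tasks.iter().enumerate() {
        match panic::catch_unwind(*task) {
            Ok(v) => results.push(v),
            Err(_) => eprintln!("task {i} panicked; moving on"),
        }
    }
    results
}

fn main() {
    // Silence the default panic message so the demo output stays tidy.
    panic::set_hook(Box::new(|_| {}));

    // Hypothetical tasks; the second one has a bug and panics.
    let tasks: Vec<fn() -> i32> = vec![|| 1, || panic!("task hit a bug"), || 3];
    let results = run_all(&tasks);
    assert_eq!(results, vec![1, 3]);
    println!("completed {} of {} tasks", results.len(), tasks.len());
}
```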

For a library, then, you should always strive to provide an API that can be used without risking panics. Panicking APIs are often an ergonomic desire — for relatively simple preconditions like "index is in bounds," it is much simpler[1] not to have to continuously say "no, it's in bounds, I promise, I checked." But as a library, you should always offer some way at least to run a pre-check (where that's reasonable and doesn't suffer from <abbr title=time of check, time of use">TOCTOU issues) and ideally to get back Result or Option where more it's more complicated checks along the way.

On the other hand, you should always strive to avoid returning a default value indeterminable from a successful result. Program design isn't a game of social standing; you don't need to pretend nothing went wrong when you know it did. Nothing is worse than a program saying it's succeeded and hiding the fact that it didn't. At a high application-level loop, it makes sense to take failed tasks, discard them, and move on to the next thing. At any level other than that executor, minimally give your caller the chance to react to the fact you weren't able to accomplish the task you were told to do.

The thing about a panic! is that you're saying that the best response to what's gone wrong is to, well, panic, and just give up on whatever was currently being done. This is absolutely fine in many cases; programs are giant balls of messy state, and it's frankly a miracle that they stay in a reasonably functional one most of the time. If there's no good way to continue doing what you're supposed to be doing, panic!king is the correct course of action, because it would be worse to continue on a sinking ship than to admit it's going down and save what you can. In this strained analogy, that means panicking/unwinding, running Drop handlers to clean up state, and allowing the other tasks (ships?) to continue without corrupting their state as well. Of course, the other tasks need to be prepared for you to panic, and to not panic themselves when they see your SOS and whatever state shared resources have been left in. Lock poisoning exists to protect the other tasks from seeing potentially corrupted state caused by you panicking, typically by causing them to panic as well (via the usual .lock().unwrap()).
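Lock poisoning can be seen in a few lines (a toy example; the data is arbitrary):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Panic on another thread while it holds the lock, which poisons
/// the mutex for everyone else.
fn poison(data: &Arc<Mutex<Vec<i32>>>) {
    let d = Arc::clone(data);
    let handle = thread::spawn(move || {
        let _guard = d.lock().unwrap();
        panic!("crashed mid-update");
    });
    // The panic is confined to that thread; join reports it as Err.
    assert!(handle.join().is_err());
}

fn main() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));
    poison(&data);
    // The mutex is now poisoned: lock() returns Err, so the common
    // `.lock().unwrap()` pattern would propagate the panic here too.
    assert!(data.lock().is_err());
    println!("mutex was poisoned by the panicking thread");
}
```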

But in other cases, an error isn't a panicking matter; it's within expected operating procedure, and you should record it and move on. In these cases, panicking is often an overreaction. But also, if yours is just a oneshot CLI application, then a quick panic and exit can be a functional way to handle most errors, because the user at the CLI is better equipped to handle whatever it is (so long as you give them sufficient context).

What matters as a takeaway is that it always depends. All else being equal, it's generally preferable to give your caller more options by giving them a Result, but the situations where panicking is appropriate abound; all else is rarely equal.

  1. With my paranoid curmudgeon hat on, if you provide the panicking option, that also lessens the likelihood that some over-confident idiot will use the Option version and reach for unsafe unwrap_unchecked, upgrading a logic error into a safety issue in the name of performance because "I checked that, I promise, no need to check my work." I've been that idiot before and will be again; it's for this reason I highly recommend that any O(1)-checkable unsafe precondition be checked under cfg(debug_assertions) internally to any _unchecked APIs. Ideally with a noisy :warning::boom: emoji-laden panic screaming that this would've been UB in a release build. ↩︎
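A sketch of that recommendation (the function name and precondition are hypothetical):

```rust
/// Hypothetical `_unchecked` accessor following the footnote's
/// advice: in debug builds the precondition is still checked, so a
/// broken "I promise it's in bounds" becomes a loud panic instead
/// of silent UB.
///
/// # Safety
/// `i` must be less than `xs.len()`.
unsafe fn get_unchecked_checked(xs: &[u32], i: usize) -> u32 {
    debug_assert!(
        i < xs.len(),
        "⚠️💥 index {i} out of bounds (len {}): this would be UB in a release build",
        xs.len()
    );
    // SAFETY: caller guarantees i < xs.len() (verified above in
    // debug builds).
    unsafe { *xs.get_unchecked(i) }
}

fn main() {
    let xs = [7, 8, 9];
    // Upholding the contract is fine in any build profile.
    let v = unsafe { get_unchecked_checked(&xs, 2) };
    assert_eq!(v, 9);
    println!("contract upheld, value = {v}");
}
```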