Measuring the cost of non-debug asserts in production code


I am all for keeping assertions on in production code.

However there do exist some runtime checks that, while potentially helpful, are simply too expensive for non-critical code that needs to run fast. So it's good that Rust provides debug_assert next to assert. However, it is in general difficult to guess the performance cost of something. The reliable way is to measure.

Applied to asserts, what is the recommended way to measure how much time is taken up by non-debug asserts in production code? If there were a way to disable all asserts (but that does not seem to be possible), it would be easy to run a benchmark. But perhaps there is some other way (a profiler, say) that allows one to obtain that information? Or are people who are concerned about the runtime cost of assertions best served by using their own custom assert macros?

The assertion macros just expand to a conditional that panics if the condition is false; there is no special overhead on the happy path beyond evaluating the expression you're asserting.

Generally, unless you're hunting down a performance problem in a very hot loop or something, assertions are not going to be a noticeable performance loss[1].

You can use cargo-expand or the expand tool on the playground to see what code the macro expands to

fn main() {
    assert!(1 == 2);
}

expands to:

use std::prelude::rust_2021::*;
extern crate std;
fn main() {
    if !(1 == 2) { ::core::panicking::panic("assertion failed: 1 == 2") };
}

  1. unless the expression you're checking is expensive to evaluate of course ↩︎

Run a sampling profiler and see what it points at?

But remember that assert! can actually make code faster, too, which is another reason to keep them on in production. Standard example:
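One common illustration (a sketch of my own, not necessarily the snippet originally linked): an up-front assert on the slice length lets the compiler prove the later indexing is in bounds, so the per-element bounds checks and their panic paths can be removed.

```rust
// The assert lets the compiler collapse the three bounds checks of the
// indexing below into the single length check up front.
fn sum_first_three(xs: &[u64]) -> u64 {
    assert!(xs.len() >= 3); // one check here...
    xs[0] + xs[1] + xs[2]   // ...so these three indexings need no checks
}

fn main() {
    let data = vec![1, 2, 3, 4];
    println!("{}", sum_first_three(&data)); // prints 6
}
```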


unless the expression you're checking is expensive to evaluate of course

Well, yes, that's what I had in mind...

Thanks for your replies. I know that asserts themselves are quite cheap and that they can help compilers rule out certain cases.

But, as I tried to express, the cases that I am concerned about here are those that are expensive, typically because the checks involve some complicated computation or occur in an inner loop:
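For concreteness, a sketch (my own construction) of the kind of assert meant here: the check itself is more expensive than the code it guards, O(n) next to an O(log n) search.

```rust
// An "expensive" assertion: verifying sortedness is O(n), while the
// binary search it guards is O(log n), so the assert dominates.
fn binary_search_sorted(xs: &[i32], target: i32) -> Option<usize> {
    // This consistency check costs more than the search itself.
    assert!(xs.windows(2).all(|w| w[0] <= w[1]), "input must be sorted");
    xs.binary_search(&target).ok()
}

fn main() {
    let xs = vec![1, 3, 5, 7];
    println!("{:?}", binary_search_sorted(&xs, 5)); // Some(2)
}
```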

Just like with other performance questions, the extent of the slowdown by asserts is not easy to guess without measuring. Consequently, C++, for example, offers several "levels" of asserts that can be disabled selectively (see the linked Reddit discussion).

Since Rust does not offer something similar, I thought that maybe some other mechanism exists to measure the performance impact of asserts.

You probably want to be analyzing the effect of particular asserts, rather than the blunt instrument of switching them all off together. That’s not really any different than any other benchmarking problem, so you should be able to use tools like criterion to investigate.
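Criterion needs a bench harness and a dev-dependency; as a dependency-free sketch of the same A/B idea using only std::time::Instant (the function names are mine, and the timings are only indicative — criterion's statistical analysis is far more trustworthy):

```rust
use std::time::Instant;

// Variant with the assert under test inside the loop.
fn with_assert(xs: &[u64]) -> u64 {
    let mut acc = 0;
    for i in 0..xs.len() {
        assert!(xs[i] < 1_000_000); // the check being measured
        acc += xs[i];
    }
    acc
}

// Identical work without the assert, for comparison.
fn without_assert(xs: &[u64]) -> u64 {
    xs.iter().sum()
}

fn time<F: Fn() -> u64>(label: &str, f: F) {
    let t = Instant::now();
    let mut sink = 0u64;
    for _ in 0..1_000 {
        sink = sink.wrapping_add(f()); // keep the result live
    }
    println!("{label}: {:?} (result {sink})", t.elapsed());
}

fn main() {
    let xs: Vec<u64> = (0..10_000).collect();
    time("with assert   ", || with_assert(&xs));
    time("without assert", || without_assert(&xs));
}
```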


You can trivially disable asserts by defining macro(s) with the same name before any mod items in your lib.rs and/or main.rs, though you can't do it for all crates at once.

for example

macro_rules! assert {
    ($($other:tt)*) => {};
}

at the top of a lib.rs completely disables any asserts in the library crate.


That will never be provided, because lots of code depends on assert! for safety checks. So turning off asserts globally across a program would be incredibly unsound.

As an obvious example,
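A minimal sketch of such an example, assuming the common pattern of an assert! upholding the safety contract of an unsafe block (names are mine):

```rust
// A function whose soundness relies on the assert: if all asserts were
// globally compiled out, the get_unchecked below would be UB for
// out-of-range indices.
fn get_fast(xs: &[u32], i: usize) -> u32 {
    assert!(i < xs.len(), "index out of range");
    // SAFETY: the assert above guarantees `i` is in bounds.
    unsafe { *xs.get_unchecked(i) }
}

fn main() {
    let xs = [10, 20, 30];
    println!("{}", get_fast(&xs, 1)); // prints 20
}
```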


Thanks, that's a great idea. If I understand correctly, it's even possible to create a crate for this purpose from which the no-op assert macros could be imported with use noop_asserts::*;.
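A single-file sketch of the shadowing idea (noop_asserts is hypothetical, and a real crate would export the macros with #[macro_export]; here the macro is simply defined at the top of the file, which has the same effect within one crate):

```rust
// A no-op `assert!` defined before any use shadows the prelude's
// `assert!` for the rest of the file (macro_rules! is textually scoped).
macro_rules! assert {
    ($($tt:tt)*) => {}; // expand every assert to nothing
}

fn survives() -> bool {
    assert!(1 == 2); // would panic with the real assert!; now a no-op
    true
}

fn main() {
    println!("survived: {}", survives()); // prints "survived: true"
}
```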

It does not solve the problem that @scottmcm raises (that some asserts are essential for safety), but for the purpose of benchmarking I guess that's acceptable.

Thanks for the pointer to criterion! I think that both approaches are complementary. If one can measure that in typical use the performance impact of all asserts is less than 5%, say, in many cases one will not bother to look into more detail.

That's a fair point. Some asserts in Rust are necessary to enforce the safety of unsafe blocks. So they are equivalent to bounds checks that also cannot be disabled.

I would therefore argue that there are naturally three classes of asserts:

  • Essential checks necessary to enforce code safety (equivalent to bounds checks).

  • Useful consistency checks that are good to have in production builds but not necessary for what Rust defines as safety. These should be enabled unless their performance cost is too high (that depends on the application).

  • Optional debug checks that are useful but clearly too expensive to remain enabled all the time.

The last class is covered by debug_assert and the decision has been made in Rust for assert to correspond to the first class.

So it looks like there is, in principle, room for a third kind of assert macro in Rust (covering the middle class above), but since I seem to be the first to bring up this issue, it doesn't seem to be perceived as a pressing problem.

Some of the assert costs aren't the asserted predicate itself but the panic message. E.g. in bounds-checks the invalid index is part of the message which means it must be able to calculate the index before jumping to the panic handler and that calculation in turn can inhibit some optimizations because the error paths can't be merged.

Compiling with panic=abort and -Z build-std -Zbuild-std-features=panic_immediate_abort could provide a comparison point. But that also disables unwinding, so it's not apples-to-apples.
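As a sketch, the flags above assembled into a build (nightly-only; the target triple is just an example):

```
# Cargo.toml — abort on panic in the release profile
[profile.release]
panic = "abort"
```

```
# nightly-only; pick your own target triple
cargo +nightly build --release \
    -Z build-std=std,panic_abort \
    -Z build-std-features=panic_immediate_abort \
    --target x86_64-unknown-linux-gnu
```

With panic_immediate_abort, the panic machinery (formatting, unwinding) is compiled down to an immediate abort, which removes the message-formatting cost described above but, as noted, is not an apples-to-apples comparison.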


Disabling assertions in a specific crate could somewhat sidestep the safety issue. Also, if the goal is to see the effects of assertions on performance, then safety critical assertions need to be examined just as thoroughly - if they slow the code down, perhaps the safety invariant they uphold could be enforced in another way.

Just out of curiosity -- when is this optimization performed? Is this a Rust or LLVM thing?

Remember that benchmarking UB is pointless, though. The reason that memory safety is so important is that any execution path that's going to hit UB in the future is allowed to disappear, and thus it's impossible to measure or debug reliably.

This is why the right thing to do is just use a profiler. That's what they're for. Asserts are just a cost like any other, and if they don't show up in the profiler then you don't worry about them. Turning off all the asserts is premature optimization.

Like how checked indexing is essentially free at runtime (thank you, branch predictors), most asserts will disappear entirely, perf-wise. So concentrate on the one or two in the tight loops that actually matter.

And as mentioned, seemingly extra checks can actually make stuff faster, like in

It's LLVM. It's easier to do optimizations that know what values are in particular ranges in LLVM's SSA form.

Rust can do it for arrays, though, since then the length is a constant too:

So there are other ways to write the same kind of thing that let Rust do more of it, like

Since that way the indexing is happening on a long-enough array, rather than a slice, and the optimization is simpler.

(Of course, there's also just x[..3].iter().sum() to avoid the per-element indexing altogether, by using something where the library does the indexing for you.)
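The patterns being contrasted can be sketched like this (my reconstruction, assuming the usual idioms):

```rust
// Slice indexing: each xs[i] needs its own bounds check unless the
// optimizer can prove the length (e.g. via a prior assert).
fn sum3_slice(xs: &[u64]) -> u64 {
    xs[0] + xs[1] + xs[2]
}

// Converting the head to a fixed-size array makes the length a
// compile-time constant, so rustc itself sees the indexing is in bounds.
fn sum3_array(xs: &[u64]) -> u64 {
    let head: &[u64; 3] = xs[..3].try_into().expect("need at least 3 elements");
    head[0] + head[1] + head[2]
}

// Or let the library do the indexing, avoiding per-element checks.
fn sum3_iter(xs: &[u64]) -> u64 {
    xs[..3].iter().sum()
}

fn main() {
    let xs = vec![1, 2, 3, 4];
    println!("{} {} {}", sum3_slice(&xs), sum3_array(&xs), sum3_iter(&xs));
}
```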


Benchmarking UB is pointless, but if the program, given a certain input, doesn't hit an assert, then that same program won't hit a UB code path if the asserts are turned off (given the same input).

In general it is impossible to guarantee that disabling asserts is sound, but benchmarks, which run on fixed, known-good inputs, are an exception to this rule.

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.