However, there do exist some runtime checks that, while potentially helpful, are simply too expensive to leave in code that needs to run fast. So it's good that Rust provides debug_assert next to assert. Still, it is in general difficult to guess the performance cost of something; the reliable way is to measure.
Applied to asserts, what is the recommended way to measure how much time is taken up by non-debug asserts in production code? If there were a way to disable all asserts (but that does not seem to be possible), it would be easy to run a benchmark. But perhaps there is some other way (a profiler, say) that allows one to obtain that information? Or are people who are concerned about the runtime cost of assertions best served by writing their own custom assert macros?
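For concreteness, here is a minimal sketch of what such a custom assert macro could look like. The macro name `heavy_assert!` and the `heavy-asserts` feature flag are made up for illustration; the point is that `cfg!` lets the check be compiled out while the predicate is still type-checked:

```rust
// Sketch of a user-defined "middle tier" assert. The macro name and
// the `heavy-asserts` feature flag are hypothetical.
macro_rules! heavy_assert {
    ($($arg:tt)*) => {
        // cfg! evaluates to false when the feature is not enabled,
        // so the whole check is dead code and gets optimized away.
        if cfg!(feature = "heavy-asserts") {
            assert!($($arg)*);
        }
    };
}

fn checked_sum(v: &[i32]) -> i32 {
    // An O(n) consistency check that only runs when opted in.
    heavy_assert!(v.iter().all(|&x| x >= 0), "negative element");
    v.iter().sum()
}

fn main() {
    println!("{}", checked_sum(&[1, 2, 3]));
}
```

Unlike `debug_assert!`, which is tied to the fixed `debug_assertions` cfg, a scheme like this lets each crate choose its own opt-in knob.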
unless the expression you're checking is expensive to evaluate of course
Well, yes, that's what I had in mind...
Thanks for your replies. I know that asserts themselves are quite cheap and that they can help compilers rule out certain cases.
But, as I tried to express, the cases that I am concerned about here are those that are expensive, typically because the checks involve some complicated computation or occur in an inner loop:
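To make the concern concrete, here is a hypothetical example of the kind of check I mean: an O(n) invariant test performed on every call to an otherwise O(log n) function, which can easily dominate the total runtime:

```rust
// Hypothetical example: a binary search that asserts, on every call,
// that its input is sorted. The is-sorted check walks the whole
// slice (O(n)), while the search itself is only O(log n).
fn find(sorted: &[i32], needle: i32) -> Option<usize> {
    debug_assert!(sorted.windows(2).all(|w| w[0] <= w[1]));
    sorted.binary_search(&needle).ok()
}

fn main() {
    let v: Vec<i32> = (0..1000).collect();
    assert_eq!(find(&v, 42), Some(42));
    assert_eq!(find(&v, -1), None);
}
```

Here `debug_assert!` is clearly appropriate; the question is what to do with checks that are cheaper than this but still not free.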
Just like with other performance questions, the extent of the slowdown caused by asserts is not easy to guess without measuring. Consequently, C++, for example, offers several "levels" of asserts that can be disabled selectively (see, e.g., this Reddit thread on the topic).
Since Rust does not offer something similar, I thought that maybe some other mechanism exists to measure the performance impact of asserts.
You probably want to be analyzing the effect of particular asserts, rather than the blunt instrument of switching them all off together. That’s not really any different than any other benchmarking problem, so you should be able to use tools like criterion to investigate.
Thanks for the pointer to criterion! I think that both approaches are complementary. If one can measure that in typical use the performance impact of all asserts is less than, say, 5%, in many cases one will not bother to look into more detail.
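As a crude first measurement without pulling in criterion, one can time the same function with and without the check (a sketch using only std::time::Instant and std::hint::black_box; for anything serious, criterion handles noise and dead-code elimination much better):

```rust
use std::hint::black_box;
use std::time::Instant;

// Same computation, with and without an O(n) assert, so the two
// timings can be compared directly.
fn sum_checked(v: &[u64]) -> u64 {
    assert!(v.iter().all(|&x| x < u64::MAX / 2)); // expensive check
    v.iter().sum()
}

fn sum_unchecked(v: &[u64]) -> u64 {
    v.iter().sum()
}

fn main() {
    let v: Vec<u64> = (0..100_000).collect();

    let t = Instant::now();
    for _ in 0..100 {
        black_box(sum_checked(black_box(&v)));
    }
    let with_assert = t.elapsed();

    let t = Instant::now();
    for _ in 0..100 {
        black_box(sum_unchecked(black_box(&v)));
    }
    let without_assert = t.elapsed();

    println!("with assert: {:?}, without: {:?}", with_assert, without_assert);
}
```

The `black_box` calls keep the optimizer from deleting the loops entirely; numbers from such a micro-benchmark should still be treated as rough estimates only.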
That's a fair point. Some asserts in Rust are necessary to enforce the safety of unsafe blocks. So they are equivalent to bounds checks that also cannot be disabled.
I would therefore argue that there are naturally three classes of asserts:

1. Essential checks necessary to enforce code safety (equivalent to bounds checks).
2. Useful consistency checks that are good to have in production builds but not necessary for what Rust defines as safety. These should be enabled unless their performance cost is too high (which depends on the application).
3. Optional debug checks that are useful but clearly too expensive to remain enabled all the time.
The last class is covered by debug_assert and the decision has been made in Rust for assert to correspond to the first class.
So it looks like there is in principle room for that middle class of asserts in Rust, but since I seem to be the first to bring up this issue, it doesn't seem to be perceived as a pressing problem.
Some of the assert cost isn't the asserted predicate itself but the panic message. E.g., in bounds checks the invalid index is part of the message, which means the index must still be available before jumping to the panic handler, and that in turn can inhibit some optimizations because the error paths can't be merged.
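One way to sidestep the formatted panic path from the user side is to use `get` and handle the `None` case yourself (a sketch; whether this actually changes the generated code in a given situation has to be checked in the assembly):

```rust
// Indexing with [] panics with a message that embeds the index, so
// the index value must be kept alive for the error path.
// slice.get(i) returns an Option and carries no formatted message.
fn first_or_zero(v: &[i32]) -> i32 {
    match v.get(0) {
        Some(&x) => x,
        None => 0,
    }
}

fn main() {
    assert_eq!(first_or_zero(&[7, 8]), 7);
    assert_eq!(first_or_zero(&[]), 0);
}
```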
Compiling with panic=abort and -Zbuild-std -Zbuild-std-features=panic_immediate_abort could provide a comparison point. But that also disables unwinding, so it's not apples-to-apples.
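For reference, the nightly invocation looks roughly like this (the target triple is just an example; adjust for your platform):

```shell
# Rebuild std with the panic formatting/unwinding machinery stripped
# out (nightly only). panic_immediate_abort turns every panic into
# an immediate abort with no message.
RUSTFLAGS="-C panic=abort" \
cargo +nightly build --release \
    -Z build-std=std \
    -Z build-std-features=panic_immediate_abort \
    --target x86_64-unknown-linux-gnu
```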
Disabling assertions in a specific crate could somewhat sidestep the safety issue. Also, if the goal is to see the effects of assertions on performance, then safety critical assertions need to be examined just as thoroughly - if they slow the code down, perhaps the safety invariant they uphold could be enforced in another way.
Remember that benchmarking UB is pointless, though. The reason that memory safety is so important is that any execution path that's going to hit UB in the future is allowed to disappear, and thus it's impossible to measure or debug reliably.
This is why the right thing to do is just use a profiler. That's what they're for. Asserts are just a cost like any other, and if they don't show up in the profiler then you don't worry about them. Turning off all the asserts is premature optimization.
Like how checked indexing is essentially free at runtime (thank you, branch predictors), most asserts will disappear entirely, perf-wise. So concentrate on the one or two in the tight loops that actually matter.
And as mentioned, seemingly extra checks can actually make stuff faster, like in
It's LLVM. Its SSA form makes it easier to perform optimizations that track which ranges particular values can be in.
Rust can do it for arrays, though, since then the length is a constant too:
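A small illustrative example of that (not the one from the original link): with a fixed-size array, constant indices that are in bounds need no runtime check at all, because the length is part of the type:

```rust
// For a fixed-size array the length is a compile-time constant, so
// the constant indices 0..=2 are provably in bounds and the bounds
// checks disappear entirely.
fn sum3(a: &[i32; 4]) -> i32 {
    a[0] + a[1] + a[2]
}

fn main() {
    assert_eq!(sum3(&[1, 2, 3, 4]), 6);
}
```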
So there are other ways to write the same kind of thing that let Rust do more of it, like
That way the indexing happens on a long-enough array rather than a slice, and the optimization is simpler.
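One such rewrite (a common pattern, not necessarily the exact one from the link above) is to convert the slice prefix into a fixed-size array reference first, so that a single up-front length check replaces three per-element bounds checks:

```rust
// One try_into performs a single length check; after that, the
// three constant-index accesses on the &[i32; 3] are statically
// known to be in bounds.
fn sum_first_three(x: &[i32]) -> i32 {
    let a: &[i32; 3] = (&x[..3]).try_into().expect("slice too short");
    a[0] + a[1] + a[2]
}

fn main() {
    assert_eq!(sum_first_three(&[1, 2, 3, 4]), 6);
}
```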
(Of course, there's also just x[..3].iter().sum() to avoid the per-element indexing altogether, by using something where the library does the indexing for you.)