`debug_assert` and trivial conditions: `#[cfg(...)]` vs `cfg!(...)`

If you look at the implementation of the `debug_assert!` macro, you find the following:

```rust
macro_rules! debug_assert {
    ($($arg:tt)*) => (if cfg!(debug_assertions) { assert!($($arg)*); })
}
```

instead of:

```rust
#[cfg(debug_assertions)]
macro_rules! debug_assert {
    ($($arg:tt)*) => ({ assert!($($arg)*); })
}

#[cfg(not(debug_assertions))]
macro_rules! debug_assert {
    ($($arg:tt)*) => ( {} )
}
```

Why is that?

Obviously, I expect the compiler to optimize any `if true { /* then_expr */ } else { /* else_expr */ }` down to `/* then_expr */`, and any `if false { /* then_expr */ } else { /* else_expr */ }` down to `/* else_expr */`.
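Concretely, here is roughly what the `cfg!`-based macro leaves for the compiler in a release build (a minimal sketch):

```rust
fn main() {
    let x = 1;
    // In release, `cfg!(debug_assertions)` evaluates to `false`, so
    // `debug_assert!(x > 0)` expands to roughly this dead branch:
    if false {
        assert!(x > 0);
    }
    // The branch is trivially dead, but `x > 0` is still parsed,
    // name-resolved, and type-checked before the optimizer ever sees it.
}
```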

However, to handle the case where optimizations are turned off entirely, or even just to lighten the compiler's workload, the definition using the two `#[cfg(...)]` attributes seems strictly better than the current `if cfg!(...)` definition.

Thoughts? Comments?

The args in your second version never get expanded, so you could write anything (anything that parses as `tt`s) and not see an error in a release build.
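To make that concrete (a sketch assuming the double-`#[cfg]` definition above shadows the standard macro; `no_such_fn` is made up):

```rust
fn main() {
    // In a release build the tokens below are matched as `tt`s and then
    // discarded, so this compiles even though `no_such_fn` is defined
    // nowhere. A debug build rejects it.
    debug_assert!(no_such_fn() == 42);
}
```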


I don't see that as a problem.

The point of `debug_assert!` is to write conditionally compiled code; and the point of conditionally compiled code is that when the condition is not met, the code is not compiled, and thus never needs to pass name resolution or type checking.

There may even be a problem if, for instance, one defines a conditionally compiled function, method, or property used only in `debug_assert!`s, and then finds that their code does not compile, because the current macro always type-checks the (unreachable) assertion expression.
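For instance (a minimal sketch; `expensive_invariant_holds` is a made-up helper), the following compiles fine with the double-`#[cfg]` definition but fails to build in release with the current `cfg!`-based one:

```rust
// A debug-only helper, compiled out of release builds entirely:
#[cfg(debug_assertions)]
fn expensive_invariant_holds(v: &[i32]) -> bool {
    v.windows(2).all(|w| w[0] <= w[1])
}

fn push_sorted(v: &mut Vec<i32>, x: i32) {
    v.push(x);
    v.sort();
    // With the current `cfg!`-based macro this line still type-checks in
    // release, where `expensive_invariant_holds` does not exist, so the
    // build fails. With the double-#[cfg] macro the tokens are discarded
    // and release builds succeed.
    debug_assert!(expensive_invariant_holds(v));
}

fn main() {
    let mut v = vec![3, 1];
    push_sorted(&mut v, 2);
}
```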

Playground example (you can comment/uncomment my own `debug_assert!` overrides and switch between debug and release to test both `cfg` cases).

As a practical example, some monomorphized functions/methods called only in `debug_assert!` expressions could be stripped out of the resulting binary, since they are unreachable and thus dead code.
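A minimal sketch of that situation (the `is_sorted` helper and `merge` function are made up for illustration):

```rust
// Used only inside debug_assert! expressions.
fn is_sorted<T: Ord>(slice: &[T]) -> bool {
    slice.windows(2).all(|w| w[0] <= w[1])
}

fn merge(a: &[u32], b: &[u32]) -> Vec<u32> {
    // With the cfg!-based macro, `is_sorted::<u32>` is still monomorphized
    // in release builds and only dropped later as unreachable code; with
    // the double-#[cfg] macro it would never be instantiated at all.
    debug_assert!(is_sorted(a) && is_sorted(b));
    let mut out: Vec<u32> = a.iter().chain(b).copied().collect();
    out.sort();
    out
}

fn main() {
    println!("{:?}", merge(&[1, 3], &[2, 4]));
}
```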

I agree with your rewrite, but I also think that you've got an uphill battle getting that into libcore.

Turning them off at compile time is potentially good for correctness, but you can also rely on the optimizer to strip them out automatically. (If you're really concerned about binary size, you should be running `strip` on the produced binary anyway, to catch things LLVM didn't.)