I have a lot of functions like this and also use them when I am 'sure' there are no bounds errors.
However I would like to have some assertions in "paranoid" mode.
Something like a debug_assert, but not always active in debug mode, because that would slow down debugging too much. What is the simplest (or best) way to achieve that?
It could be done with a global const PARANOID = true and trusting the compiler to eliminate the dead code, but I suspect that is not the way to go.
To reassure you: for all dangerous functions I also have a safe version, which does the bounds checking.
Quinedot's reply is about how to have debug assertions enabled while also keeping optimizations on. But that's not how I interpreted your question. Define a paranoid feature in your crate, and mark your assertions as:
#[cfg(feature = "paranoid")]
assert!(foo);
Run your code with cargo run --features paranoid.
Using this technique, you can even add whole extra fields to your structs that are only used for paranoid assertions.
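For example, a minimal sketch of mine (not code from this thread), assuming a feature named paranoid declared under [features] in Cargo.toml as paranoid = []; the Grid type and its fields are made up:

struct Grid {
    cells: Vec<u8>,
    width: usize,
    // Extra bookkeeping that only exists in paranoid builds.
    #[cfg(feature = "paranoid")]
    expected_len: usize,
}

impl Grid {
    fn set(&mut self, x: usize, y: usize, value: u8) {
        // These checks are only compiled in with `--features paranoid`.
        #[cfg(feature = "paranoid")]
        {
            assert!(x < self.width, "x out of range");
            assert_eq!(self.cells.len(), self.expected_len, "cell buffer was resized");
        }
        self.cells[y * self.width + x] = value;
    }
}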
As a different direction, rather than have "dangerous" functions that you only use when you're sure it's safe to do so, it can be more effective to use expect and look to see if the compiler can optimize out the bounds checks etc.
So, instead of your get_slice_mut_unchecked, you'd have:
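A minimal sketch of what that could look like (the signature here is my assumption, not the exact code; the important part is the fixed string passed to expect):

fn get_slice_mut_expect(data: &mut [u8], start: usize, len: usize) -> &mut [u8] {
    // If the optimizer cannot remove this check, the message string below
    // ends up in the release binary, which is exactly what we grep for.
    data.get_mut(start..start + len)
        .expect("get_slice_mut_expect failed to elide bounds check")
}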
Then, you can use strings and grep (or your favourite equivalent tools) to check the release-build binary for "get_slice_mut_expect failed to elide bounds check"; if the compiler can prove that the bounds check is unnecessary in all cases, it'll optimize it out, and neither the call to expect nor the string given to expect as a message will appear in your output binary.
If the compiler can't prove that it's not needed, then it'll appear, and you now know that you need to track down exactly when the compiler can't prove that the bounds check (or whatever) is not needed; when I've used this technique, I've tended to use a "divide-and-conquer" method, where I have multiple get_slice_mut_expect_n functions, and I try to identify which callers are problematic.
The check with strings and/or grep is something that you can integrate with CI easily enough, so it's possible to stop yourself introducing cases where the compiler used to be able to elide all bounds checks, but can't any more. And because it's mechanically checked for safety, if you ever make a mistake, the worst that happens is a nice clean panic at a point you never thought one could happen, instead of a "miscompilation" and a misbehaving program.
The "paranoid feature" I get. Thanks. I'll play around with it.
Some paranoia I will probably build in.
The "expect" technique I do not understand entirely, but I get the idea. As I understand we analyze the compiled situation after a release build, which I am not capable to do (yet).
Thank you, this is a technique I didn't know about! It will allow me to remove unsafe in several places.
Example?
In general I don't like bounds checking when I do not ask for it (a leftover trauma from C#, where it is nearly impossible to avoid).
So I chose the most unsafe way, expecting it to be the fastest, but creating a risk of undefined behaviour, which is much worse than a panic.
When working with these 2D maps (and images) for my little game, I always use an internal clipping routine to ensure everything stays in bounds, and go insanely unsafe after that.
if let Some(clip) = perform_clip_before_operation(...) {
    // go completely unsafe, trusting the clip routine and my hopefully bug-free algorithm
}
The clip operation is in fact a programmed bounds check, which is needed anyway.
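To illustrate that idea (hypothetical code of mine, not the actual routine): the clip step clamps the requested rectangle to the map and bails out if nothing is left.

struct Clip { x: usize, y: usize, w: usize, h: usize }

// Hypothetical clip step: returns the in-bounds part of the requested
// rectangle, or None if it lies entirely outside the map.
fn perform_clip_before_operation(x: i32, y: i32, w: u32, h: u32, map_w: usize, map_h: usize) -> Option<Clip> {
    let x0 = x.max(0) as usize;
    let y0 = y.max(0) as usize;
    let x1 = ((x + w as i32).max(0) as usize).min(map_w);
    let y1 = ((y + h as i32).max(0) as usize).min(map_h);
    if x0 >= x1 || y0 >= y1 {
        return None; // fully outside the map
    }
    Some(Clip { x: x0, y: y0, w: x1 - x0, h: y1 - y0 })
}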
I guess the compiler could never prove enough here to eliminate the bounds checks.
This is something you should definitely benchmark. Bounds-checking in Rust is cheap, and the compiler will eliminate bounds checks if it can show that you're staying in bounds; the chances are very high that if you're seeing significant cost from bounds checks, you can fix it with a well-placed assert!(something < slice.len()); type of statement (to get the system to do one bounds check instead of many).
Chances are very good that you're simply making your life harder by manually eliding bounds checks, to no real gain in performance.
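A minimal sketch of that pattern (my example, not from the thread): one assert up front lets the optimizer prove that every index in the loop is in bounds, so the per-iteration checks can usually be removed.

fn sum_first_n(data: &[u32], n: usize) -> u32 {
    // One explicit check here...
    assert!(n <= data.len());
    let mut total = 0u32;
    for i in 0..n {
        // ...so the indexing below usually needs no further bounds checks.
        total = total.wrapping_add(data[i]);
    }
    total
}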
This is just coercing a reference to an array to a reference to a part of that array. Since the offsets and lengths are constants, and there is a compile-time assertion checking for containment (in the const block), no bounds checking is needed at runtime.
But because I'm paranoid about what the compiler optimizes (now and in the future) I use unsafe code: get_unchecked and unwrap_unchecked.
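A sketch of roughly what that looks like (my reconstruction from the description; the element type, names, and parameter order are assumptions, and the const block needs a toolchain with inline const support):

#[inline(always)]
fn sub_array<const OFFSET: usize, const LEN: usize, const N: usize>(input: &[u8; N]) -> &[u8; LEN] {
    // Compile-time containment check: compilation fails if the sub-array
    // would not fit inside the input array.
    const { assert!(OFFSET + LEN <= N) };
    // Paranoid/unsafe variant: skip the runtime bounds check and the
    // slice-to-array length check, relying on the const assertion above.
    unsafe {
        input
            .get_unchecked(OFFSET..OFFSET + LEN)
            .try_into()
            .unwrap_unchecked()
    }
}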
When I look at the code generated using godbolt, I see that this code is compiled down to a single lea instruction, which is pretty good:
example::sub_array::h05716a71fb74b112:
lea rax, [rdi + 3]
ret
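A safe variant along the same lines (again my sketch, not the exact code from the thread) might be:

fn sub_array_safe<const OFFSET: usize, const LEN: usize, const N: usize>(input: &[u8; N]) -> &[u8; LEN] {
    // Same compile-time containment check as before.
    const { assert!(OFFSET + LEN <= N) };
    // Safe version: the slice indexing and try_into both check lengths at
    // runtime, but the optimizer can remove those checks because the const
    // assertion already proves they cannot fail.
    input[OFFSET..OFFSET + LEN]
        .try_into()
        .expect("sub_array_safe failed to elide bounds check")
}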
With the safe version it still compiles to a single instruction, so I don't need the unsafe code; and with the technique that @farnz suggested, I can have a build that detects when a compiler change causes the bounds checks to no longer be optimized away.
Also note that, due to recent compiler changes that inline small functions, I really don't need the #[inline(always)]. I think I can count on a single instruction being classified as small.
In addition to profiling, or in some other way making sure that the optimization is worthwhile, it's not a good idea to use unsafe without "proof" that it is actually safe. This is the Rust rule/convention.
The proof is in the form of comments explaining why the bounds check, etc, is unnecessary, such that a reviewer can logically verify it is correct in the same way one would check math for correctness. And since we are fallible, a test that exercises the unsafe code and runs under miri is also needed.
Without this, the biggest benefit (or one of them) of using Rust is lost.
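For illustration (my example, not from the thread), that usually takes the form of a SAFETY comment a reviewer can check, plus a test that exercises the unsafe path and can be run under Miri with cargo miri test:

fn first_byte(data: &[u8]) -> u8 {
    assert!(!data.is_empty());
    // SAFETY: the assert above guarantees that index 0 is in bounds.
    unsafe { *data.get_unchecked(0) }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn reads_first_byte() {
        assert_eq!(first_byte(&[7, 8, 9]), 7);
    }
}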
More importantly, you have a build that also detects changes in your code that mean that the bounds checks are necessary for soundness. If you accidentally get the conditions in the assert wrong, for example, instead of executing UB with all the nastiness that entails, you have a panic for cases that are actually out of bounds.
Thanks for the example. I understand. That one is a completely 'const' situation, which I would expect to be optimized that well. I also understand your remark now. Thanks.
Without this, the biggest benefit (or one of them) of using Rust is lost.
I think so too. I think this fact is what triggered my thread in the first place: "How to do it the best way."
I also feel like I'm running around a bit in premature-optimization land.
One side question about your code: how can I use/compile this const feature? And this try_into()?
Do you mean the const block or the const generics (params)?
The const block is a form of const expression I'm just starting to use, as it's pretty new and I only know a little about it. In this case it causes the expression to be evaluated at compile time, but I'm not the best person to describe the rules for this clearly.
Const generics are a way to pass a constant primitive (boolean, char, or integer; maybe more kinds in the future) as a generic parameter, in the same way you pass a lifetime or type. The function is monomorphized for each combination of const values that are actually passed, as well as types passed (but not lifetimes, which don't impact the compiled code).
try_into is a method of the TryInto trait. In sub_array I'm using it to convert a slice to an array ref, which is implemented for arrays. There are other conversions if you want to convert to an array value rather than an array ref. These conversions are implemented using TryFrom/TryInto rather than From/Into because they can fail if the slice length doesn't match the length of the array you specify. This is a runtime check, but as shown in godbolt, the compiler is sometimes able to determine that the runtime check isn't necessary.
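A minimal example of those conversions (mine, for illustration):

use std::convert::TryInto; // explicit import needed on the 2018 edition; in the prelude since the 2021 edition

fn main() {
    let bytes: &[u8] = &[1, 2, 3, 4, 5];
    // Slice to array reference: fails at runtime if the lengths don't match.
    let head: &[u8; 4] = bytes[..4].try_into().expect("length mismatch");
    // Slice to array by value (elements must be Copy).
    let owned: [u8; 4] = bytes[..4].try_into().expect("length mismatch");
    assert_eq!(head, &owned);
}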
I already know quite a bit about const generics.
I just cannot get your function to compile.
error: inline-const is experimental.
I tried to add #![feature(inline_const)] to my main.rs, but it does not compile.
Maybe I am missing one of those magic rustc/rustup/nightly commands?
error: no method named try_into found for reference &[u8] in the current scope
OK, I needed to add use std::convert::TryInto;.