My understanding is that UB is bad because literally anything could happen according to the specification, which results in severe problems in the following cases:
You are trying to track down a bug and the program is doing something that's seemingly not even written in the source code. You are now debugging the compiler, not your program.
You are trying to release secure software that will do exactly what your code says it does, eliminating a large class of security vulnerabilities.
You have strict data integrity requirements and must reduce the possibility of data being corrupted.
Your software is in any way a safeguard against something physically unsafe happening.
But what if you are running in an environment where you don't care about these things? I.e., you are running a release build and can switch back to a debug build to get predictable behavior while diagnosing an issue, you are not receiving any input from untrusted sources, and the data you are operating on can be scrubbed and reset from a backup at any moment with no consequences. Would significant problems still arise from allowing UB in this context, and this context only? The motivation is that there may be some small performance gain from eliminating things like bounds checks on random array accesses and the possible panics from unwrap and friends.
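For concreteness, here is a rough sketch of the kind of difference I mean, using the standard library's unchecked counterparts (the function names here are just illustrative; the safe versions panic when their precondition fails, while the unchecked ones are UB):

fn sum_checked(values: &[u32], index: usize) -> u32 {
    // Safe: the indexing keeps a bounds check and a panic path, and
    // unwrap panics if the slice is empty.
    values[index] + values.first().copied().unwrap()
}

fn sum_unchecked(values: &[u32], index: usize) -> u32 {
    // Unchecked: no bounds check and no panic path, but undefined behavior
    // if `index` is out of range or `values` is empty.
    unsafe { *values.get_unchecked(index) + values.first().copied().unwrap_unchecked() }
}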
I'm going to be talking about Rust in front of a bunch of C++ programmers next week, and our job is to make a product that works exactly in one of these contexts. More specifically, we make something which receives data from a trusted server (which itself does not communicate with any untrusted sources) and renders a visualization of it onto a screen. It would be cool if I could tell them that you can have simple expectations in debug mode while paying no costs compared to the equivalent C++ code in a release build. Naturally, I wouldn't make that recommendation universal, as there are obvious problems outside this context. But am I missing a significant problem that would still arise in this particular context? I am struggling to think what might become worse if UB was allowed in this context.
The thing is, that isn't a given in the face of UB. For some things where the UB "disappears" during lowering, not doing optimizations will make the behavior somewhat predictable if you know how the lowering is done and how the lower level behaves, because nothing "exploits" the UB. But for something like use-after-free, UB could mean seeing arbitrary, inconsistent data where you expect meaningful data. You can't always trust your debugger in the face of UB, because what you've told the code to do is a contradiction that can't be true, even if you aren't doing any optimizations and are just naively blitting out bad, redundant machine code from the high-level code.
This is why I specify that this would only be considered for the released product. As far as I am aware, if allowing UB leads to something unexpected happening, it will by definition be caught in stable Rust where all checks are enabled, because the alternative is that stable Rust would also allow the UB to happen.
To say it in another way, I am considering the possibility of using stable Rust as normal in debug builds, but then turning off its runtime checks in release builds. When debugging, we would always use the debug build, and can thus trust what we're seeing in that context. Issues that crop up due to UB in release should always trigger some defined assertion in a debug build where the checks are reinstated.
The runtime bounds checking and assertions are part of what prevent UB -- they cause a panic rather than UB. If you disable them in release mode you will simply allow UB to occur rather than getting a panic. And you can't rely on testing in debug mode to uncover this beforehand, because testing never covers all possibilities, plus not every known panic is fixed before releasing software.
I am saying this can be caught the other way around - an issue is found in release, and we replicate it in a debug build to see what is causing it. I am not under the impression that all bugs will be caught during testing.
That is one approach, but it is a very dangerous one. If you allow UB to occur in production, you're risking that anything might happen. All data could be lost, for example, with a very high cost (maybe you're now out of business). The risk is not just that you can't diagnose the problem without using a debug build to reproduce it.
But even in debug mode, diagnosing the problem is extremely difficult in some situations, because the UB that occurs in production may give you no hints at all about how to reproduce it. This is not uncommon in my experience.
This is true of bugs in general. Crashing with a backtrace can also give no clues as to how to reproduce an issue, e.g. triggering a panic which was thought to be unreachable. The only way to figure out how to reproduce a bug is to try doing things with the software until you find a minimal set of actions which causes something unpredictable to happen.
In that situation there is nothing wrong with doing anything you'd like to improve performance, since there couldn't possibly be anything you could do that would have a negative impact.
Are you really asking whether there is a way to turn off the runtime checks in Rust? I'm pretty certain the answer is no. Zig allows that, or it does for almost all runtime checks.
Here's the way to disable runtime checks in Zig if you're interested.
I was initially asking if there were consequences I was missing, but given that there would be no consequences, my immediate next question would indeed be if there would be a way to turn off Rust's runtime checks.
What confused me is that you defined the situation such that no bug or UB could have a negative impact. So there is nothing you're missing, by definition.
The initial question as posed is a bit circular (or tautological). Does UB matter when the stakes are so low that it doesn't matter? No. I think your second post maybe gets closer to the heart of what you are asking, namely whether there is a practical way to keep Rust's runtime checks in debug builds while turning them off in release builds.
I am not aware of an easy way to toggle bounds checks on or off, certainly not by switching between release and debug builds. You'd have to manually write unsafe code everywhere to opt out of the built-in bounds checks, and then, to have bounds checking in debug builds, you'd have to write the checks back in yourself, probably with debug_assert!s everywhere, and make sure not to miss any spots. It's not nearly the panacea that approach makes it sound.
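To make that concrete, every such call site would end up looking roughly like this (a sketch; the function and names are purely illustrative):

fn read_sample(buffer: &[u8], i: usize) -> u8 {
    // Hand-written check that only exists in debug builds.
    debug_assert!(i < buffer.len(), "index {i} out of bounds");
    // In release builds there is no check left at all: a bad `i` is UB
    // rather than a panic.
    unsafe { *buffer.get_unchecked(i) }
}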
UB really means "undefined as far as Rust is concerned". It may be that your specific environment is some kind of sandbox that limits the effects of UB, rendering it no longer "undefined" in a non-Rust sense. Most operating systems and operating environments will seek to contain or restrict what a program can do, but it is still undefined as far as Rust is concerned.
As for disabling index checks, that can generally be done using unsafe assertions. Here is an example:
You can gate this with a feature (which may or may not be enabled by default). The macro unsafe_assert! is defined here as:
/// In debug mode, or when the unsafe-optim feature is not enabled, same as debug_assert!; otherwise an unsafe compiler hint.
#[cfg(any(debug_assertions, not(feature = "unsafe-optim")))]
macro_rules! unsafe_assert {
    ( $cond: expr ) => {
        debug_assert!($cond)
    };
}

/// In debug mode, or when the unsafe-optim feature is not enabled, same as debug_assert!; otherwise an unsafe compiler hint.
#[cfg(all(not(debug_assertions), feature = "unsafe-optim"))]
macro_rules! unsafe_assert {
    ( $cond: expr ) => {
        if !$cond {
            unsafe { std::hint::unreachable_unchecked() }
        }
    };
}
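Hypothetical usage might look like the following, assuming the crate declares the feature in Cargo.toml (e.g. unsafe-optim = [] under [features]). In debug builds the assertion is checked; in a release build with the feature enabled, the hint tells the optimizer that n <= values.len(), which typically lets it elide the bounds checks inside the loop:

fn sum(values: &[u64], n: usize) -> u64 {
    // Checked in debug builds (or with the feature off); a pure optimizer
    // hint in release builds with `unsafe-optim` enabled.
    unsafe_assert!(n <= values.len());
    let mut total = 0;
    for i in 0..n {
        // Given the hint above, the optimizer can prove `i < values.len()`
        // and may remove the bounds check on this indexing.
        total += values[i];
    }
    total
}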
That's unfortunate, I was hoping there would be some switch in the compiler.
I originally felt that my requirements would definitely make UB a non-issue, but after doing some Googling, I found that no one seems to be bringing this concept up. This made me wonder if I was missing something.
Because you can't trust the signals you get from something hitting UB. The UB is allowed to send false signals of a problem where none exists, or to not tell you about a problem that you think you instrumented.
The problem with UB is that it prevents you from reasoning about the behaviour of the program. And I don't think there's a single case of a program you care about where you don't do that. If you're trying to visualize something, UB means it can show the wrong thing with no way to know that's happening.
No, you can't just have UB and claim it's fine. It's not.
I think in this case it would still be termed undefined (we might not know what will happen), it's just that in this context, the fact that we don't know what might happen is not as big an issue as it usually is.
I believe the situation I have outlined to be one in which there are no serious consequences to UB in release. I am also suggesting that the checks all remain enabled in debug mode, as we still have the serious consequence there of UB being hard to debug regardless of the context.
My question is whether or not I have missed a serious consequence of UB which may still appear if the other consequences I have mentioned are a non-issue. If you are aware of one, I would be interested in hearing it.
The only way to know whether UB could have an impact for your situation is to know everything about it, not just the things you mentioned. Could an infinite loop prevent something from displaying on the screen, causing the person watching to make an incorrect decision that has repercussions in the real world? The question you're asking seems unanswerable.
To provide some additional justification for this line of reasoning, I believe this set of requirements to actually be applicable to a fair number of real life situations. For example:
All modern operating systems have ways of restricting access to parts of the file system. If an application can request to the operating system that it be put in a sandbox where it is only allowed to touch non-critical files, then this is a situation where data loss is a non-issue. For example, this could be used in a game - a place where performance is often critical. The game can be reinstalled at any moment, with the only valuable data being the save data from the user. Simply wrapping the game in a shell that backs up the save data after the game exits would be enough to eliminate this concern entirely.
Similarly, this could be applied to robotics. Performance is again critical when working with low-powered embedded systems. If the control software is only allowed to communicate through another process which implements the safeguards against physically unsafe behavior (naturally a place where disabling the checks would not be allowed), then there is no way for undefined behavior to cause serious consequences. This is a setup you would want anyway, even without the possibility of undefined behavior, since a good old classic bug could still cause a robotic welder to smack someone in the face.