Subtraction on unsigned numbers

Hi y'all, I just wanted to ask around if anyone knows why std::intrinsics::saturating_sub is not the default behaviour for all subtractions of unsigned numbers. It feels far more natural to me, because I normally don't want my program to panic, and saturating seems like reasonable behaviour. Are there performance benefits to normal subtraction, or what is the reason behind that decision?
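
To be clear, this is the behaviour I mean, sketched with the stable u8::saturating_sub wrapper rather than the raw intrinsic:

fn main() {
    let x: u8 = 3;
    assert_eq!(x.saturating_sub(5), 0); // clamps at 0 instead of panicking/wrapping
    assert_eq!(x.saturating_sub(2), 1); // ordinary result when nothing underflows
}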

Saturating is not usually the answer you want. Much more often, overflow is a bug. Panic is better because it allows early detection of bugs.

Plus yes, there are performance benefits to not saturating, because CPUs don't usually have built-in saturating arithmetic (SSE is an exception, but that's a special case).
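
To illustrate the cost, here's roughly what a scalar saturating subtract has to be built from; this is only a sketch of one possible lowering, not necessarily what the compiler emits:

fn saturating_sub(x: u32, y: u32) -> u32 {
    // An ordinary subtraction is a single instruction; the saturating
    // version additionally needs a compare and a select (or branch).
    if x >= y { x - y } else { 0 }
}

fn main() {
    assert_eq!(saturating_sub(3, 5), 0);
    assert_eq!(saturating_sub(5, 3), 2);
}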

4 Likes

That's interesting. I assumed intrinsics always map to CPU instructions (at least where available)...

Integer overflow is considered an error in Rust, though unlike in C it is never undefined behavior: it may panic, or it may simply return a wrong result. Which of the two happens depends on whether you compile with the debug or the release profile. In both cases, the following program is run:

fn main() {
    let mut x: u8 = 0;
    x -= 1; // panics in debug mode; wraps around to 255 in release mode
    assert_eq!(x, 255);
}

With the release profile, it runs without errors and the subtraction simply wraps around to 255 (though you should not rely on that!), but in debug mode it panics. This is intentional: it allows the compiler not to care about overflow in the release profile (as long as that doesn't cause undefined behavior).

But that also means you cannot rely on panics in case of overflow! If you need your code to panic reliably, you must use the checked_sub method, for example:

fn main() {
    let mut x: u8 = 0;
    // expect() panics in both debug and release when the subtraction underflows
    x = x.checked_sub(1).expect("underflow");
    println!("{x}");
}

(Playground with Release profile)
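
More generally, if you want a particular overflow behaviour regardless of profile, the integer types offer an explicit method for each choice:

fn main() {
    let x: u8 = 0;
    assert_eq!(x.wrapping_sub(1), 255);            // wrap around explicitly
    assert_eq!(x.saturating_sub(1), 0);            // clamp at the minimum
    assert_eq!(x.overflowing_sub(1), (255, true)); // result plus an overflow flag
    assert_eq!(x.checked_sub(1), None);            // report failure as an Option
}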

I see this differently. I want my programs to work, and to scream loudly if for some reason they don't. Getting a panic is far better than silently doing something I might have forgotten could happen.

For example, I really don't want a[i - j] to just always return a[0] if I actually meant to do a[j - i]. I'd much rather get the panic -- which I will, either from the overflow in debug or the out-of-bounds index in release -- so I find out that I got it wrong.
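
To make that concrete, here's a small sketch of the scenario:

fn main() {
    let a = [1, 2, 3];
    let (i, j) = (1_usize, 2_usize); // oops, should have been (2, 1)
    // Debug: the subtraction underflows and panics.
    // Release: i - j wraps to usize::MAX, so indexing panics as out of bounds.
    // With saturating subtraction it would silently evaluate to a[0] instead.
    println!("{}", a[i - j]);
}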

As for saturating specifically, it's a worse general "don't panic" behaviour because it's not associative. At least with wrapping you have (x + y) - z == x + (y - z). It still might not do what you wanted, but at least you don't need to worry about useless-in-ℤ parentheses for it.
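
Concretely, a small sketch with u8:

fn main() {
    let (x, y, z) = (10u8, 0u8, 5u8);
    // Wrapping: both groupings agree.
    assert_eq!(x.wrapping_add(y).wrapping_sub(z), 5);
    assert_eq!(x.wrapping_add(y.wrapping_sub(z)), 5); // 0 - 5 wraps to 251; 10 + 251 wraps to 5
    // Saturating: the inner clamp loses information, so the groupings disagree.
    assert_eq!(x.saturating_add(y).saturating_sub(z), 5);
    assert_eq!(x.saturating_add(y.saturating_sub(z)), 10); // 0 - 5 clamps to 0
}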

4 Likes

Saturating subtract feels like the integer version of NaN... The underlying assumption you made when writing your arithmetic code is wrong (e.g. we always expect i to be greater than j), but instead of blowing up loudly to let the developer know they've got a bug, it silently "poisons" all future calculations made using that value and gives you the wrong results.

2 Likes

The difference here is that most (all?) operations on NaN will produce NaN again, so it acts more like a deferred error check than silently giving wrong answers. Saturating unsigned math, on the other hand, doesn’t trap the ultimate result of a series of operations in the same way; it instead gives you a wrong answer that looks legitimate.
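
For instance, a small sketch of the difference:

fn main() {
    // A NaN produced mid-computation survives to the end, so a final
    // check still catches the error:
    let with_nan = (0.0_f64 / 0.0) * 100.0 + 5.0;
    assert!(with_nan.is_nan());

    // A saturated intermediate just feeds a plausible-looking number
    // into the rest of the computation:
    let with_saturation = 3_u8.saturating_sub(7) * 100 + 5;
    assert_eq!(with_saturation, 5); // looks legitimate, but the premise (3 >= 7) was wrong
}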

4 Likes

One exception I know about is the x86 min/max instructions. And even their official documentation recommends using something else if there is a chance of getting a NaN!

It's just following the crazy IEEE-754 standard. The same is true for f64::max in Rust.

...and is why there's now f64::maximum in nightly, which doesn't swallow NaNs.
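
A small sketch of the difference, assuming the nightly float_minimum_maximum feature gate is what's needed at the time of writing:

#![feature(float_minimum_maximum)] // nightly-only feature gate (assumed name)

fn main() {
    let x = f64::NAN;
    assert_eq!(x.max(1.0), 1.0);      // IEEE-754 maxNum: the NaN is quietly ignored
    assert!(x.maximum(1.0).is_nan()); // maximum propagates the NaN instead
}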

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.