A question/remark on arithmetic operations

Hi.

One of the things that drew me to Rust is that it prevents crashes from NULL pointers, errors in allocation, etc. So I was a bit surprised when I discovered that something like

for i in 0 .. nr_items - 4 { ... }

panic!s when nr_items, an unsigned value, is less than 4.

So first a question: is there any way to remedy this? Casting to a signed type doesn't seem very desirable, but perhaps it's the only way.

Second: a slightly wild thought. Would it be possible to guarantee that this error wouldn't occur, and call it an error when the guarantee cannot be given? A simple option would be to automatically add a test around the for statement, skipping it when nr_items < 4 (if there is no information about the bounds on nr_items), but that would only help loops.
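
Written out by hand, such a guard might look like this minimal sketch (the value of nr_items is just an example):

fn main() {
    let nr_items: usize = 2; // any value below 4 would make the unguarded loop panic
    // The test described above, added manually: skip the whole loop when
    // the subtraction would underflow.
    if nr_items >= 4 {
        for i in 0..nr_items - 4 {
            println!("{}", i);
        }
    }
}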

Another option would be a way to state that nr_items is at least 4 through the type system. If that is not guaranteed, the above construction fails at compile time instead of at run time. This

let mut nr_items: 4..100 = f(something);

would do the trick, or, if nr_items were declared as a usize,

match nr_items {
    4..100 => ... nr_items - 4 ...,
    _ => (),
}

Then the panic! would only have to occur when a cast is made that violates the type's bounds. I know that inferring such bounds across computations is error prone, but it's possible in some cases; in others, the programmer would need to help the compiler. Just a thought.

Remember, a panic is very different from the kinds of crashes you're talking about: a panic still ensures memory safety.

Casting to a signed type doesn't seem very desirable, but perhaps it's the only way.

Well, your other suggestions, which are good ones, sort of boil down to "casting to another type"; the question is just what type, and how.

Another option would be a way to state that nr_items is at least 4 through the type system

Yeah, this is really cool, but dependent types are a bit out of range for Rust. Maybe someday...

There are things far simpler than dependent types that can still solve most practical problems... See F*.

Right, they work for the simplest cases, but for more complex ones, full dependent types end up being necessary.

The panic is from arithmetic overflow checking, which is enabled by default only in debug builds. This means that release builds pay no checking overhead, but also that this bug passes silently in release mode.

Arithmetic overflow is always considered a bug, which Rust tries to catch by these pragmatic means. If you want "regular" wrapping unsigned subtraction, just use nr_items.wrapping_sub(4) (it has no checking overhead).
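
A small illustration of the difference; checked_sub is the variant to reach for when the underflow case actually needs handling:

fn main() {
    let nr_items: u32 = 3;
    // wrapping_sub never panics; the result wraps around modulo 2^32.
    assert_eq!(nr_items.wrapping_sub(4), u32::MAX);
    // checked_sub makes the underflow case explicit instead of wrapping.
    assert_eq!(nr_items.checked_sub(4), None);
    assert_eq!(10u32.checked_sub(4), Some(6));
}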

I'm not at all as good at avoiding off-by-one errors as I'd like to think, so arithmetic overflow checking has saved me countless times. Makes it easier to find those bugs quickly.

The Rust panic from arithmetic overflow is one of the several design decisions Rust has got right, or right enough. Arithmetic overflow testing is like array bounds testing: it has to be active by default in debug builds. It makes Rust a more civilized language.

Yes, I'm surprised how well it works.

I want to underline here for new rusties reading the thread that array bounds checks are enabled in all compilation modes, unlike arithmetic overflow checking.
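
A quick way to see the distinction (the indexing line is commented out because it would panic in every build profile):

fn main() {
    let v = [1, 2, 3];
    // Indexing is bounds-checked in debug and release builds alike;
    // uncommenting the next line panics in both:
    // let _x = v[10];
    // The non-panicking alternative returns an Option instead:
    assert_eq!(v.get(10), None);
}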

Sure, but if certain guarantees could be made at compile time, that would be even better, IMO.

Ranged integral types have been proposed but considered brain-melting.

Ranged integral types are the only integral types available in the language Ada. The standard type Integer is itself just a range definition, on a typical platform something like type Integer is range -2**31 .. 2**31 - 1;. But in Ada the idea wasn't carried through to the end. How great would it be if adding a 5..10 type and a 15..17 type yielded a 20..27 type at compile time (and all these operations typechecked). Even cooler: x/y would only typecheck if y's type does not contain 0, and a multiplication would only typecheck if multiplying the upper bounds of the operand types doesn't overflow.

Giving a compile-time range to all Rust integrals, and managing such intervals in a smart way, is something that needs to happen in Rust. I hope not having them since Rust 1.0 will not cause trouble.

We can probably implement them in a crate once we get Constants that depend on type parameters in generic code.
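
As a rough sketch of where such a crate could start with the const generics available today: the bound arithmetic itself (e.g. an addition yielding a widened range type) is exactly the part that needs constants computed from type parameters, so this version can only check the range at construction time.

#[derive(Debug, Clone, Copy)]
struct Ranged<const LO: i64, const HI: i64>(i64);

impl<const LO: i64, const HI: i64> Ranged<LO, HI> {
    // Construction is the single checked entry point; every value of this
    // type is known to lie in LO..HI (half-open, like Rust ranges).
    fn new(value: i64) -> Option<Self> {
        if value >= LO && value < HI { Some(Self(value)) } else { None }
    }

    fn get(self) -> i64 {
        self.0
    }
}

fn main() {
    // The 4..100 example from earlier in the thread: construction fails
    // outside the range, so the subtraction below can never underflow.
    assert!(Ranged::<4, 100>::new(2).is_none());
    let nr_items = Ranged::<4, 100>::new(10).unwrap();
    println!("{}", nr_items.get() - 4);
}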

Constants as type values is a nice feature, but you don't create a very good language like Rust by aggregating more and more features (that's how you create C++/D). You first need a more global vision. Implementing "visions" is harder and requires more work, but the end result is something that opens up better ways to write code, to avoid bugs, and to optimize.

Use saturating_sub:

fn main() {
    let num: u8 = 3;
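    // 3 - 4 would underflow; saturating_sub clamps the result to 0 instead of panicking.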
    println!("{}", num.saturating_sub(4));
}

Perhaps that proposal was a bit too complex. I'd say the programmer has to put in some effort too, and that such compile-time checks could be limited to a special integral range type, r(a..b).

The type of an integral constant c would be r(c..c+1), and the type of a range a..b would be r(a..b). Simple comparisons (x > a => x has type r(a+1..) intersected with its own type), arithmetic operations (r(a..b) + r(c..d) => r(a+c..b+d)), min, max, etc. can be inferred easily. Some things cannot. E.g., if i has an explicit range, i = i + 1 could (or should) generate a warning or error, which can be overcome by a guarantee from the programmer that i is at most its range's upper bound minus 2; explicitly, e.g. like this

let mut i: 0 .. 256 = 0;
...
i = (i as 0 .. 254) + 1;

or implicitly

let mut i: 0 .. 256 = 0;
loop {
    ...
    if i == 255 { break; }
    i = i + 1;
}

If the inference mechanism isn't clever enough, or the constructions are too complex to analyze, the programmer needs to add casts or another syntactic form of a guarantee, which then must be checked at run time. That could keep certain panic!s away, and possibly open up new optimization possibilities.
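
In today's Rust, the closest analogue of such a run-time-checked guarantee is probably a plain assertion before the arithmetic; a minimal sketch:

fn main() {
    let nr_items: usize = 10; // example value
    // The programmer-supplied guarantee, checked at run time; everything
    // after this line may assume nr_items >= 4.
    assert!(nr_items >= 4);
    println!("{}", nr_items - 4);
}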

Anyway, it was just a thought. I'm going to try the other options mentioned in this thread for my learning projects. Thanks!