Why is numerical computation casting hell?

I've modified it a bit, no casts:

enum Direction { Left, Still, Right }

fn go_left_or_right() -> Direction {
    // Decide based on player input
    use Direction::*;
    if true {
        Left
    } else if false {
        Right
    } else {
        Still
    }
}

fn main() {
    use Direction::*;

    let mut pixels = [false; 10];
    let mut position: usize = 5;

    for _ in 1 .. 10 {
        pixels[position] = false;
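        // Adding the length up front means the Left branch can never
        // underflow below zero; the modulo below folds the offset back out.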
        position += pixels.len();
        position = match go_left_or_right() {
            Left => position - 1,
            Still => position,
            Right => position + 1,
        };
        position %= pixels.len();

        pixels[position] = true;

        // Draw pixels!
        println!("Position is: {}", position);
    }
}

It's still worse than when I can manage to keep everything as isize instead of usize.

Compare:

    // ...
    x += keys.tri_horz();
    y += keys.tri_vert();

    x = (x + width) % width;
    y = (y + height) % height;
    // ...
    m.dot(x, y, colors[color_ix]);

It's much more readable than placing a match in the code.
I'm not worried about needing the range from 2^31 to 2^32 - 1 (this is the GBA; we don't have that much RAM to put anything in), and I'd be happy to have i32s for everything, but I would still need to cast whenever I interact with arrays at all, since everything is usize.
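
Concretely, if position were an i32 (a sketch, not the code above as written), every array access needs a cast back to usize:

    pixels[position as usize] = false;
    // ... update position with plain i32 arithmetic ...
    pixels[position as usize] = true;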

I'd be okay with implicit casts from usize to isize, and explicit casts the other way, given that underflow is much worse.

Would it be any better in C? Assuming all values are the same size (and at least int-sized), (position + go_left_or_right() + pixels.len()) % pixels.len() would silently convert the -1 or 1 to unsigned, and end up doing an unsigned modulo with a possibly underflowed left-hand side. So you need a cast anyway; Rust requires more casts, but at least it ensures you don't accidentally perform the wrong operation.
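
For comparison, here's a sketch of the fully cast-based version of the same update in Rust; the step function and its delta parameter are illustrative, standing in for the -1/0/1 result of the input check:

    // Sketch: the cast-based wrap-around step, where `delta` is -1, 0, or 1.
    fn step(position: usize, delta: isize, len: usize) -> usize {
        // Adding `len` keeps the intermediate sum non-negative before the modulo.
        ((position as isize + delta + len as isize) % len as isize) as usize
    }

    fn main() {
        assert_eq!(step(0, -1, 10), 9);
        assert_eq!(step(9, 1, 10), 0);
        assert_eq!(step(5, 0, 10), 5);
    }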

That said, as I've mentioned before, I don't think Rust should have implicit casting, but rather an idiom of supporting operations on arbitrarily sized integers in a mathematically correct way wherever possible. For example, vec.get(-1i32) and vec.get(1u64 << 32) would compile, but they wouldn't be equivalent to vec.get(0xffffffff) and vec.get(0) respectively as in C; rather, Vec itself would implement Index for all integer types, and in this case check whether the mathematical integers -1 and 2^32 are within [0, length), returning None if not, just like any other invalid index. In practice this would be done with a checked cast.
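
A minimal sketch of how that behaviour can be expressed as an extension trait today (the GetInt trait and get_int method are made-up names; the checked cast is done via TryInto):

    use std::convert::TryInto;

    // Hypothetical extension trait: look up a slice element with any
    // integer type, treating unrepresentable values as out of range.
    trait GetInt {
        type Item;
        fn get_int<I: TryInto<usize>>(&self, index: I) -> Option<&Self::Item>;
    }

    impl<T> GetInt for [T] {
        type Item = T;
        fn get_int<I: TryInto<usize>>(&self, index: I) -> Option<&T> {
            // A negative or too-large index fails the checked cast and is
            // reported as None, just like an ordinary invalid index.
            index.try_into().ok().and_then(|i| self.get(i))
        }
    }

    fn main() {
        let v = vec![10, 20, 30];
        assert_eq!(v.get_int(2i64), Some(&30));
        assert_eq!(v.get_int(-1i32), None);        // not element 0xffffffff
        assert_eq!(v.get_int(1u64 << 32), None);   // not element 0
    }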

The usize + isize -> usize case is interesting because casting either number to the other's type before the computation will not produce the right result with overflow checking. If the former is cast to isize, 0x7fffffff + 1 will report overflow on a 32-bit target despite the mathematical result being representable in a usize, while if the latter is cast to usize, 1 + -1 will overflow while 1 + -2 will not. It can be done by using wrapping math plus a manual overflow check, but the combination of .wrapping_add() and the slightly complex check required clutters the code quite a bit for a relatively simple desired operation, and it's easy to get wrong. It would be both nicer and, I claim, better for correctness if Rust just allowed a usize to be added directly to an isize, with a polymorphic return type which could be either usize or isize, and automatically performed the correct check depending on the result type.
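
Here's a sketch of that manual version: wrapping_add plus a direction check against the sign of the delta (recent Rust also provides usize::checked_add_signed, which does the same thing):

    // A sketch of `usize + isize` with a correct overflow check, built from
    // wrapping arithmetic.
    fn add_signed(base: usize, delta: isize) -> Option<usize> {
        // Reinterpret delta's two's-complement bits as usize and add with wrap.
        let result = base.wrapping_add(delta as usize);
        // The sum is in range exactly when the result moved in the same
        // direction as delta's sign.
        if (delta >= 0) == (result >= base) {
            Some(result)
        } else {
            None
        }
    }

    fn main() {
        assert_eq!(add_signed(1, -1), Some(0));
        assert_eq!(add_signed(1, -2), None);              // would go negative
        assert_eq!(add_signed(0, -1), None);              // would underflow
        assert_eq!(add_signed(usize::MAX, 1), None);      // would overflow
        assert_eq!(add_signed(0x7fff_ffff, 1), Some(0x8000_0000));
    }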

(I think this goes for operations on differently sized integers too, but I'm not sure what I think the exact rules should be.)

I argued back then that we should have an Index trait and parameterize on that, so that all types of indices work (including not just primitive integers but also user-defined integer types).

The main argument against it was something like "that is going to generate a ton of code for different types of indices".
