Missed optimization after unrelated read

In this example (Godbolt):

pub fn reverse_xor(input: &[u8]) -> u8 {
    if input.is_empty() {
        return 0;
    }
    let mut i = input.len() - 1;
    let mut result = 0u8;
    loop {
        result ^= input.get(i).expect("foo"); // comment me
        if i == 0 {
            break;
        }
        i -= 1;
        result ^= input.get(i).expect("bar"); // comment me
    }
    result
}

(This function doesn't make much sense on its own; it's just a minimal example based on the actual code.)

When compiling with -C opt-level=3, keeping only one of the result ^= ... lines (and commenting out the other) results in a fully autovectorized, non-panicking function. However, when both of those lines are enabled, rustc seems unable to optimize away the panic in expect("foo").
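
For concreteness, this is the variant I mean with the second XOR line commented out (the function name below is just a label for this sketch; keeping only the "bar" line instead behaves the same way):

pub fn reverse_xor_foo_only(input: &[u8]) -> u8 {
    if input.is_empty() {
        return 0;
    }
    let mut i = input.len() - 1;
    let mut result = 0u8;
    loop {
        result ^= input.get(i).expect("foo");
        if i == 0 {
            break;
        }
        i -= 1;
        // result ^= input.get(i).expect("bar"); // commented out: this version vectorizes
    }
    result
}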

This seems odd; rustc is able to optimize out each of the panics individually, but not both of them together. Any ideas why this is happening?
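
(For reproducing this outside Godbolt, something along these lines should work; the file name and edition are my assumptions:)

rustc --edition 2021 --crate-type lib -C opt-level=3 --emit asm example.rs

which writes the generated assembly to example.s.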

I thought I'd take a look at this behavior, and to prepare for tinkering with the expects I introduced a wrapper function:

pub fn reverse_xor(input: &[u8]) -> u8 {
    reverse_xor_impl(input)
}
pub fn reverse_xor_impl(input: &[u8]) -> u8 {
    // ... original body from the first post ...
}

and the compiler immediately replaced reverse_xor with:

example::reverse_xor:
        test    rsi, rsi
        je      .LBB0_1
        movzx   eax, byte ptr [rsi + rdi - 1]
        ret
.LBB0_1:
        xor     eax, eax
        ret

that is, it returns the last element (the only one that isn't XORed twice) if there is one, which is a valid implementation of the function. So the missed optimization goes further than vectorization: the whole loop can be folded away, and it's weirdly sensitive to how the function is wrapped.
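
In Rust terms, that assembly behaves like the following (my own equivalent sketch, not anything the compiler produced as source):

pub fn reverse_xor_equivalent(input: &[u8]) -> u8 {
    // Every element except the last gets XORed twice and cancels out,
    // so the whole computation reduces to "last byte, or 0 if the slice is empty".
    input.last().copied().unwrap_or(0)
}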


Wow, that's odd. I realize the function doesn't actually make sense as written (it's just my attempt at a minimal example of the issue), but this is even weirder: why does the compiler only pick up that optimization when the body is behind a wrapper? I'm just trying to figure out what I can do to avoid situations like this that seem to block the optimizer.
