The impact is usually second-order effects on things like optimizations. If the compiler can't prove that a read is in bounds, and thus that the loop might exit early via a bounds-check panic, it's much harder for it to do optimizations like unrolling and vectorization. (LLVM, especially, is generally unwilling to optimize loops with more than one exit condition; IIRC GCC is a bit better about that.) And there are the general "more code is bad" (all else being equal) effects, like instruction cache pressure (which is hard to benchmark, because a microbenchmark will often fit entirely in that cache).
Of course, whether this matters depends greatly on just how much code there is inside the loop. If you're JSON-parsing a file on every iteration, any difference will be completely invisible.
```rust
const VEC_SIZE: usize = 10000000;

fn use_for(vec: &mut Vec<usize>) {
    for i in 0..vec.len() {
        vec[i] = i;
    }
}

pub fn main() {
    let mut vec = vec![0; VEC_SIZE];
    use_for(&mut vec);
}
```
while: https://godbolt.org/z/o5v1Mf
```rust
const VEC_SIZE: usize = 10000000;

fn use_while(vec: &mut Vec<usize>) {
    let mut i = 0;
    let len = vec.len();
    while i < len {
        vec[i] = i;
        i += 1;
    }
}

pub fn main() {
    let mut vec = vec![0; VEC_SIZE];
    use_while(&mut vec);
}
```
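For completeness, the usual way to sidestep the whole question is to avoid indexing entirely and let an iterator drive the loop, since `iter_mut()` can't go out of bounds and so introduces no panic path. A sketch (the name `use_iter` and the small vector size are mine, not from the examples above):

```rust
// Iterator-based variant: no indexing, so no bounds checks and only one
// exit condition, which is the shape LLVM's vectorizer handles best.
fn use_iter(vec: &mut Vec<usize>) {
    for (i, slot) in vec.iter_mut().enumerate() {
        *slot = i;
    }
}

pub fn main() {
    let mut vec = vec![0; 8];
    use_iter(&mut vec);
    assert_eq!(vec, (0..8).collect::<Vec<usize>>());
    println!("{:?}", vec); // prints "[0, 1, 2, 3, 4, 5, 6, 7]"
}
```

Whether this actually codegens better than the indexed `for` depends on whether LLVM managed to elide the bounds checks in the first place; checking the assembly on godbolt, as above, is the only way to be sure.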