The answer seems to be that it generally does not get rid of the unused parts of the 102400-byte array, but instead keeps the whole thing, unless it can optimize out the entire thing.
```rust
const LONG_ARRAY: [u8; 102400] = [0; 102400];
static SMALL_SLICE: &[u8] = LONG_ARRAY.split_at(5).0;

// using this doesn’t create the whole data: .zero 102400
#[cfg(version1)]
#[inline(never)]
fn print_slice(x: &[u8]) {
    for i in x {
        println!("{}", {*i});
    }
}

// using this creates the whole data: .zero 102400
#[cfg(version2)]
#[inline(never)]
fn print_slice(x: &[u8]) {
    for i in x {
        println!("{}", *i);
    }
}

#[unsafe(no_mangle)]
pub fn do_printing() {
    print_slice(SMALL_SLICE);
}
```
Basically, as soon as anything can reveal the memory address of (an instantiation of) the original `[0; 102400]` constant value (static-promoted as part of the `SMALL_SLICE` initializer) to arbitrary/unknown code, the optimization goes away.
In the code example above, something like
```rust
for i in x {
    println!("{}", *i);
}
```
makes the `println!` macro produce a `&*i` reference pointing to the `u8` still in its original place as part of the long array, and then pass it to the `dyn Trait`-based formatting infrastructure. The optimizer therefore doesn’t know whether the address is observed, and conservatively refrains from optimizing the array out.
Adding `{ }` forces a copy of the `u8` into a temporary, and with the value in a new location, all accesses to the original `[0; 102400]` are visible to the LLVM optimizer.