are equivalent, and the only difference is in warnings about unused values. However, I've now noticed that
```rust
fn main() {
    let x = std::ptr::null::<u8>();
    let _ = *x;
}
```
compiles, while
```rust
fn main() {
    let x = std::ptr::null::<u8>();
    *x;
}
```
doesn’t (`dereference of raw pointer is unsafe and requires unsafe function or block`). Does anyone know if that’s intentional? Is it because there is a difference between the two versions in terms of run-time behavior?
Compiler folks are working on moving the unsafety checking to THIR, rather than MIR, so it'll be more consistent about stuff like this.
(MIR is better for flow-sensitive stuff -- that's why NLL happens there -- but for things that are based on syntactic blocks it's better in something that still has that structure.)
Yeah, I’ve seen that… however I’m still wondering why `let _ = EXPR;` and `EXPR;` generate different MIR in the first place. (They do, right? Otherwise an unsafety check on MIR would return consistent results.) If `let _ = EXPR;` and `EXPR;` really are semantically equivalent, then generating less MIR for one of them seems like a potential missed optimization on the other. Sure, LLVM can probably handle it most of the time, but still…
`let _ = EXPR;` is matching `EXPR` as a place, but then doesn't actually need to read anything from it to match the pattern, so there's no read in the MIR for unsafetyck to notice.
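For instance, here's a minimal sketch of that "no read" behavior using a moved value instead of a raw pointer:

```rust
fn main() {
    let s = String::from("hello");
    drop(s); // `s` is moved into `drop` here

    // The `_` pattern binds nothing and never reads the place,
    // so this still compiles even though `s` has been moved:
    let _ = s;

    // A binding pattern would have to move out of `s` and is rejected:
    // let y = s; // error[E0382]: use of moved value: `s`
}
```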
Whereas `EXPR;` is an expression statement, so it evaluates the expression -- which requires the read, and thus unsafetyck sees it. It's like `drop(EXPR);`.
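Putting both pieces together, a minimal sketch (using a valid pointer so the reads themselves are well-defined at run time):

```rust
fn main() {
    let v: u8 = 7;
    let p: *const u8 = &v;

    // An expression statement evaluates `*p`, which reads through the
    // raw pointer, so it needs `unsafe` -- exactly like `drop(*p)`:
    unsafe { *p };
    drop(unsafe { *p });

    // `let _ = *p;` only matches the place and never reads it,
    // so no `unsafe` block is required:
    let _ = *p;
}
```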