I'm doing some evil pointer math, and the following is stumping me:
```rust
use std::ptr;

fn main() {
    let x: Vec<u8> = vec![1, 2, 3, 4, 5, 6];
    let m = &x[2..4];
    let n = &x[2..5];

    let mp: *const [u8] = m;
    let np: *const [u8] = &n[0..2];
    assert!(ptr::eq(m, &n[0..2]));
    assert!(ptr::eq(mp, np));

    let np2: *const [u8] = n;
    assert!(!ptr::eq(m, n));
    assert!(!ptr::eq(mp, np2));
    assert!(format!("{:?}", mp) == format!("{:?}", np2));
}
```
`m` and `n` are both views on `[1, 2, 3, 4, 5, 6]`; `m` is `[3, 4]` and `n` is `[3, 4, 5]`.
When I take the raw pointers of `m` and `&n[0..2]`, they're considered equal. This makes sense, as they both index the underlying vector at the same place.
But when I take the raw pointer of just `n`, which starts at the same place as `&n[0..2]`, it's considered unequal! This is despite the fact that, as the final `assert!` shows, the `Debug` representations of `mp` and `np2` are equal.
What am I missing? How are the raw pointers seemingly encoding the length of the slice they were originally constructed from (in addition to the offset)?