Suppose you have a large byte buffer containing some sort of data packet and you're pulling fields out one at a time. The from_{endianness}_bytes methods on the primitive numeric types seem like they're made for this, but they take fixed-size array arguments -- reasonably enough, it wouldn't make sense to call e.g. u32::from_le_bytes with anything other than four bytes -- and going from a (sub)slice to a fixed-size array is awkward. I find myself writing a lot of helper functions like
use std::mem::size_of;

fn u32_le_at_offset(buf: &[u8], offset: usize) -> Option<u32> {
    let field = buf.get(offset .. offset + size_of::<u32>())?;
    Some(u32::from_le_bytes(field.try_into().unwrap()))
}
This works, and the compiler can optimize out the try_into and unwrap, but it's ugly, and it makes me feel like I must've missed something in std that would let me do it with less ceremony... Have I missed something?
The problem with that is that there's no standard trait covering all the types that implement from_{endian}_bytes... but there's no reason I can't just define my own extension trait either.
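Something along these lines, just as a sketch (the trait name and the macro are made up for illustration, not anything from std):

use std::mem::size_of;

// Hypothetical extension trait: one method per numeric type, generated by a macro.
trait FromLeAt: Sized {
    fn from_le_at(buf: &[u8], offset: usize) -> Option<Self>;
}

macro_rules! impl_from_le_at {
    ($($t:ty),* $(,)?) => {$(
        impl FromLeAt for $t {
            fn from_le_at(buf: &[u8], offset: usize) -> Option<Self> {
                // Same bounds-checked slice + try_into dance as before, just hidden in one place.
                let field = buf.get(offset .. offset + size_of::<$t>())?;
                Some(<$t>::from_le_bytes(field.try_into().unwrap()))
            }
        }
    )*};
}

impl_from_le_at!(u16, u32, u64, u128, i16, i32, i64, i128);

and then a call site is just let len = u32::from_le_at(&packet, 4)?;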
This is good to know about, but the use of le in my original post was just an example; the stuff I'm working on today is actually defined as native endian (it's a strictly in-memory, local-system IPC protocol).
Well, that whole family of methods (also split_last_chunk_mut and such) was added specifically to handle these "look, I want an array" cases. (And conveniently they're all const-stable now too, unlike .try_into().unwrap().) There's some history in Public view of rust-lang | Zulip team chat if you're curious.
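For a concrete sketch (the helper name is mine, it needs Rust 1.77+ for first_chunk, and I've used from_ne_bytes since you said the format is native endian):

fn u32_ne_at_offset(buf: &[u8], offset: usize) -> Option<u32> {
    // get(offset..) bounds-checks the offset; first_chunk::<4>() then
    // yields a &[u8; 4] only if at least four bytes remain.
    let field = buf.get(offset..)?.first_chunk::<4>()?;
    Some(u32::from_ne_bytes(*field))
}

No try_into, no unwrap, and the length check happens right where the array is produced.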
If you can't use them, then the .try_into().unwrap() is probably the right conversion step.