I need to convert a &Vec<u8> to u128. The use case is converting IPs (v4 or v6) that are encoded as a byte vector of length 4 or 16 respectively.
I came up with the following solution, but I suspect it's a bad one, as I may lose portability to systems with a different endianness. Is there a safer solution?
pub(crate) fn ip_to_u128(input: &Vec<u8>) -> u128 {
    let mut result: u128 = 0;
    for b in input {
        result = (result << 8) | *b as u128;
    }
    result
}
Any IPv4 address can be encoded in an IPv6 address: the first 10 bytes are 0x00, followed by two 0xFF bytes (0xFFFF together), and the remaining four bytes are the IPv4 address. This is the IPv4-mapped IPv6 address format standardized in RFC 4291. I'm omitting the 0xFFFF encoding in the code below, but it is trivial to add once you have the u128.
fn ip_to_u128(input: &[u8]) -> u128 {
    // If the input is always exactly the right length you can remove the next line.
    let input = &input[..16];
    let ip_bytes: &[u8; 16] = input.try_into().unwrap();
    // IP addresses are in network byte order, i.e. big-endian, so use
    // from_be_bytes; from_le_bytes would reverse the bytes.
    u128::from_be_bytes(*ip_bytes)
}
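The 0xFFFF mapping mentioned earlier could be sketched like this (the function name is mine, not standard):

```rust
/// Map a 4-byte IPv4 address into the IPv4-mapped IPv6 form described
/// above: 80 zero bits, then 0xFFFF, then the four IPv4 bytes.
fn ipv4_to_mapped_u128(ip: [u8; 4]) -> u128 {
    (0xFFFF_u128 << 32) | u32::from_be_bytes(ip) as u128
}

fn main() {
    // 192.0.2.1 -> ::ffff:192.0.2.1
    assert_eq!(
        ipv4_to_mapped_u128([192, 0, 2, 1]),
        0x0000_0000_0000_0000_0000_FFFF_C000_0201
    );
}
```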
Great solution, but what if input has length 4 for IPv4 and length 16 for IPv6? In that case the code panics, and I am not sure how to pad the input without modifying it.
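One way to handle both lengths without mutating the input is to branch on the slice length. A sketch (this zero-extends the IPv4 case; OR in the 0xFFFF bits if you want the IPv4-mapped form instead):

```rust
/// Convert a 4-byte (IPv4) or 16-byte (IPv6) slice to a u128.
/// Panics on any other length.
fn ip_to_u128(input: &[u8]) -> u128 {
    match input.len() {
        4 => {
            let bytes: [u8; 4] = input.try_into().unwrap();
            // Zero-extend the IPv4 value into the low 32 bits.
            u32::from_be_bytes(bytes) as u128
        }
        16 => {
            let bytes: [u8; 16] = input.try_into().unwrap();
            u128::from_be_bytes(bytes)
        }
        n => panic!("unexpected IP byte length: {}", n),
    }
}

fn main() {
    // IPv4: 127.0.0.1
    assert_eq!(ip_to_u128(&[127, 0, 0, 1]), 0x7F00_0001);
    // IPv6: all zeros (::)
    assert_eq!(ip_to_u128(&[0u8; 16]), 0);
}
```

No padding buffer is needed: `try_into` borrows the slice as a fixed-size array reference, so the input is never modified.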