I am trying to read UTF-16 encoded text from an &[u8], preferably without unsafe type casting or extra allocations, but I am not sure how to accomplish this. The following code does it completely manually (without handling surrogate pairs):
// Assumed helper: reads a little-endian u16 at byte offset `at`.
fn read_u16(slice: &[u8], at: usize) -> u16 {
    u16::from_le_bytes([slice[at], slice[at + 1]])
}

fn read_string(slice: &[u8], size: usize) -> Option<String> {
    let mut ret = String::with_capacity(size);
    for i in 0..size {
        let c = read_u16(slice, i * 2);
        // Fails (returns None) for surrogate code units, so non-BMP characters are not handled.
        ret.push(char::try_from(c as u32).ok()?);
    }
    Some(ret)
}
You should handle surrogate pairs for non-BMP characters. Ideally you should also parse the BOM to determine whether it's UTF-16BE or UTF-16LE, but in practice Windows has never supported big-endian machines, so virtually no document is written in UTF-16BE; just assuming it's UTF-16LE would be enough.
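For the surrogate-pair part, the standard library already has what you need. Here is a minimal sketch, assuming the input is UTF-16LE without a BOM (the function name read_utf16le is mine):

fn read_utf16le(slice: &[u8]) -> Option<String> {
    // chunks_exact(2) silently ignores a trailing odd byte; treat that as an error if you prefer.
    let units = slice
        .chunks_exact(2)
        .map(|pair| u16::from_le_bytes([pair[0], pair[1]]));
    // decode_utf16 combines surrogate pairs and reports unpaired surrogates as errors.
    char::decode_utf16(units).collect::<Result<String, _>>().ok()
}

This needs no unsafe and allocates nothing beyond the resulting String.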
Can you help me understand why it should be aligned just because it's UTF-16?
If, say, we read it from a file, what would happen if we passed a misaligned buffer to read()? Or if we read it from a socket, say an HTTP body, what would happen if the UTF-8-encoded headers were an odd number of bytes long?
"Should" in the sense of "I'd certainly expect it to be" and "it's good practice". Not in the sense that "it probably is" or "it can't possibly be misaligned".
Of course you can't force any arbitrary byte buffer to be aligned to 2-byte boundaries. However, I was arguing that when producing a buffer intended to hold UTF-16 data, one should ensure that it indeed is aligned, because its semantics and the probable use of its contents most likely require that, or at least work best when it is.
It's not hard to do, either: in the worst case, by allocating one more byte than necessary, it's always possible to slice the resulting allocation so that its starting address is 2-aligned. Most allocators already return 8- or even 16-byte-aligned buffers anyway.
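A minimal sketch of that over-allocate-and-slice idea (the helper name two_aligned is mine, and it assumes the caller allocated at least one byte of slack):

fn two_aligned(buf: &mut [u8], len: usize) -> &mut [u8] {
    // align_offset(2) is 0 if the slice already starts at an even address, 1 otherwise.
    let offset = buf.as_ptr().align_offset(2);
    &mut buf[offset..offset + len]
}

fn main() {
    let payload_len = 64;
    // One byte of slack, so the 2-aligned sub-slice always fits.
    let mut storage = vec![0u8; payload_len + 1];
    let dst = two_aligned(&mut storage, payload_len);
    assert_eq!(dst.as_ptr() as usize % 2, 0);
    assert_eq!(dst.len(), payload_len);
}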
I don't follow your argument about the UTF-8 encoded header with an odd length. A UTF-8 byte sequence is not a valid UTF-16 sequence of 16-bit integers. When one reinterprets b"xy" as a (single-element) sequence of 16-bit integers, one does not obtain the UTF-16 encoded representation of the string "xy".
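A tiny demonstration of that point, purely for illustration:

fn main() {
    // Reinterpret the two ASCII/UTF-8 bytes of b"xy" as one little-endian 16-bit code unit.
    let unit = u16::from_le_bytes([b'x', b'y']);
    assert_eq!(unit, 0x7978);
    // Decoding that single code unit yields one CJK ideograph, not the two characters "xy".
    let s: String = char::decode_utf16([unit]).map(|r| r.unwrap()).collect();
    assert_ne!(s, "xy");
    assert_eq!(s.chars().count(), 1);
}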
I would expect that the primary reason to have a buffer of UTF-16 in an &[u8] would be IO, and I wouldn't expect a socket library, a memory-mapped file, etc. to be properly aligned.
(If I knew it was going to be UTF-16, I'd probably have a buffer of &[u16].)
At the risk of stating the obvious (a habit), have you considered using the encoding_rs crate for this? It might be more efficient (one less copy?) and has a clear notion of UTF-16BE vs UTF-16LE, surrogate handling, and optional Byte-Order-Mark (BOM) sniffing.
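A hedged sketch of what that could look like, assuming encoding_rs = "0.8" in Cargo.toml (the function name decode_utf16_bytes and the fall-back-to-LE choice are mine):

use encoding_rs::UTF_16LE;

fn decode_utf16_bytes(bytes: &[u8]) -> String {
    // decode() sniffs a UTF-8/UTF-16LE/UTF-16BE BOM, otherwise assumes UTF-16LE,
    // and replaces malformed sequences with U+FFFD.
    let (text, _actual_encoding, had_errors) = UTF_16LE.decode(bytes);
    if had_errors {
        eprintln!("input contained malformed UTF-16; replaced with U+FFFD");
    }
    text.into_owned()
}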