Yesterday I had to write a little piece of code that uses a byte buffer that can be either "raw" or run-length encoded. I wanted to keep the rest of the code independent of that, so I used a pattern that involves a "fake" allocation in the raw-buffer case:
let data = if tip.compressed {
    decode_rle(&abr.buffer, tip.data.start, w, h).unwrap()
} else {
    Vec::new()
};
let data = if tip.compressed {
    &data
} else {
    &abr.buffer[tip.data.clone()]
};
// Use data as &[u8] from here...
Obviously it works, but is there a better way to do this? I don't particularly like the repeated if and the "fake" allocation of an unused Vec.
Cow implements Deref, so you can just use a Cow<'_, [u8]> wherever a &[u8] is expected:
use std::borrow::Cow;

fn foo(_: &[u8]) {
    // ...
}

let buffer: Cow<'_, [u8]> = Cow::Borrowed(b"...");
// This is the explicit syntax to invoke the deref operation:
foo(&*buffer);
// This shorthand is more commonly used, thanks to deref coercion:
foo(&buffer);
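Applied to the original situation, the idea looks roughly like this. This is a minimal sketch: tip_data is a hypothetical helper, and decode_rle here is a stand-in that just doubles each byte, not the OP's actual decoder.

```rust
use std::borrow::Cow;

// Stand-in for the real decode_rle: doubles each byte for demonstration.
fn decode_rle(buf: &[u8]) -> Vec<u8> {
    buf.iter().flat_map(|&b| [b, b]).collect()
}

// Returns either a borrowed slice or a freshly decoded Vec behind one type.
fn tip_data(buffer: &[u8], compressed: bool) -> Cow<'_, [u8]> {
    if compressed {
        Cow::Owned(decode_rle(buffer))
    } else {
        Cow::Borrowed(buffer)
    }
}

fn main() {
    let raw = tip_data(b"abc", false);
    let decoded = tip_data(b"abc", true);
    assert_eq!(&*raw, &b"abc"[..]);
    assert_eq!(&*decoded, &b"aabbcc"[..]);
    println!("ok");
}
```

The caller never needs to know which case it got; the Cow derefs to [u8] either way, and the allocation happens only on the compressed path.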
Alternatively, you can declare the Vec first and only initialize it on the compressed path:

let data;
let data = if tip.compressed {
    // Initializes the outer `data`, which outlives the borrow below.
    data = decode_rle(&abr.buffer, tip.data.start, w, h).unwrap();
    &data
} else {
    &abr.buffer[tip.data.clone()]
};
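A self-contained sketch of this deferred-initialization pattern, with a toy decode function standing in for decode_rle (the names and the +1 transform are made up for illustration):

```rust
// Toy stand-in for decode_rle: adds 1 to each byte.
fn decode(buf: &[u8]) -> Vec<u8> {
    buf.iter().map(|&b| b.wrapping_add(1)).collect()
}

fn main() {
    let buffer: &[u8] = b"abc";
    let compressed = true;

    // Declared but not initialized: it is only written on the compressed
    // path, and it lives long enough for the borrow below to be valid.
    let data;
    let data: &[u8] = if compressed {
        data = decode(buffer);
        &data
    } else {
        buffer
    };

    assert_eq!(data, b"bcd");
    println!("ok");
}
```

The borrow checker accepts this because the outer `data` is assigned exactly once before being borrowed, and the shadowing reference cannot outlive it.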
The compiler only has to reserve some space on the stack, three pointer-sized values (pointer, length, and capacity), which in this specific example likely exists anyway, and initialize them.
For several reasons, such an operation is quite cheap on a CPU: the writes are inexpensive, they hit a hot region of memory (the stack), stay in cache, are likely overwritten a few nanoseconds later, and never reach RAM.
In practice, I would never worry about a Vec::new(). For most purposes it is virtually free, and with some luck the compiler will optimize it away completely anyway.
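You can observe this directly: Vec::new() performs no heap allocation at all, which is why it is even usable in const contexts.

```rust
fn main() {
    // A fresh Vec has capacity 0: no heap memory has been requested yet.
    let v: Vec<u8> = Vec::new();
    assert_eq!(v.capacity(), 0);

    // Vec::new is a const fn, which would be impossible if it allocated.
    const EMPTY: Vec<u8> = Vec::new();
    assert!(EMPTY.is_empty());

    println!("ok");
}
```

The first heap allocation only happens on the first push (or with_capacity), so the "fake" Vec in the raw-buffer branch never touches the allocator.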