Generalizing pixels made of a constant number of bytes

Is there a better way to represent pixels that are made of channels without resorting to the mess that I currently have? Trying to avoid allocations here, for obvious reasons. Parametrization by constant integers would be the best solution, and hopefully it becomes a thing soon, but here's what I have right now:

pub trait ByteChannels {
    type Channels: AsRef<[u8]>;
    fn into_channels(self) -> Self::Channels;
    fn from_channels<F: FnOnce(&mut [u8])>(f: F) -> Self;
    fn width() -> usize;
}

pub struct Bgra(pub u8, pub u8, pub u8, pub u8);

impl ByteChannels for Bgra {
    type Channels = [u8; 4];

    fn into_channels(self) -> Self::Channels {
        [self.0, self.1, self.2, self.3]
    }

    fn from_channels<F: FnOnce(&mut [u8])>(f: F) -> Self {
        let mut x = [0; 4];
        f(&mut x);
        Bgra(x[0], x[1], x[2], x[3])
    }

    fn width() -> usize {
        4
    }
}
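To make the intended round-trip concrete, here is a minimal, self-contained sketch of the trait in use; the helper `pixel_bytes` and the `main` driver are illustrative, not part of the real code:

```rust
pub trait ByteChannels {
    type Channels: AsRef<[u8]>;
    fn into_channels(self) -> Self::Channels;
    fn from_channels<F: FnOnce(&mut [u8])>(f: F) -> Self;
    fn width() -> usize;
}

#[derive(Debug, PartialEq)]
pub struct Bgra(pub u8, pub u8, pub u8, pub u8);

impl ByteChannels for Bgra {
    type Channels = [u8; 4];

    fn into_channels(self) -> Self::Channels {
        [self.0, self.1, self.2, self.3]
    }

    fn from_channels<F: FnOnce(&mut [u8])>(f: F) -> Self {
        let mut x = [0; 4];
        f(&mut x);
        Bgra(x[0], x[1], x[2], x[3])
    }

    fn width() -> usize {
        4
    }
}

// Generic consumer: serialize any pixel type into bytes.
fn pixel_bytes<T: ByteChannels>(p: T) -> Vec<u8> {
    p.into_channels().as_ref().to_vec()
}

fn main() {
    assert_eq!(pixel_bytes(Bgra(1, 2, 3, 4)), vec![1, 2, 3, 4]);
    // The caller's closure fills the scratch slice the implementation provides.
    let px = Bgra::from_channels(|s| s.copy_from_slice(&[9, 8, 7, 6]));
    assert_eq!(px, Bgra(9, 8, 7, 6));
}
```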

Sorry, I know I ask a lot of questions here. (.-.)

I don't understand how from_channels is intended to be used. Why does it take a closure?

Also, why does the closure take a slice rather than a &mut Channels? The latter would seem more friendly to loop unrolling.

  1. Well, the idea is that the user is given a slice, which they then fill with the channels, which the implementation then uses. I couldn't think of a simpler way that didn't involve allocation.
  2. That doesn't seem to make a difference in the assembly.

I would think the assembly depends on what the closure is, which is why I'd lean towards the approach that explicitly provides the size information at compile time, rather than hoping the compiler can figure out the size of the slice, presumably via inlining.
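For the record, the "parametrization by constant integers" the first post wishes for did become a thing: const generics have been stable since Rust 1.51. A sketch of how the trait could look with the width carried as a const parameter (the overall design here is an assumption, not code from the thread):

```rust
// Const-generic version: `WIDTH` replaces both the associated
// `Channels` type and the `width()` method, so the size is known
// at compile time without relying on inlining.
pub trait ByteChannels<const WIDTH: usize>: Sized {
    fn into_channels(self) -> [u8; WIDTH];
    fn from_channels(c: [u8; WIDTH]) -> Self;
}

#[derive(Debug, PartialEq)]
pub struct Bgra(pub u8, pub u8, pub u8, pub u8);

impl ByteChannels<4> for Bgra {
    fn into_channels(self) -> [u8; 4] {
        [self.0, self.1, self.2, self.3]
    }

    fn from_channels(c: [u8; 4]) -> Self {
        Bgra(c[0], c[1], c[2], c[3])
    }
}

fn main() {
    let px = Bgra::from_channels([1, 2, 3, 4]);
    assert_eq!(px.into_channels(), [1, 2, 3, 4]);
}
```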

BTW, when you talk of avoiding allocation, are you referring to using the stack, or the heap? A far simpler function that would still avoid heap allocation would be:

fn from_channels(x: Self::Channels) -> Self {
    Bgra(x[0], x[1], x[2], x[3])
}

This also avoids initializing the array to zero unnecessarily.
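A minimal sketch of that by-value version in use, with an inherent method standing in for the trait method (names as above, the `main` driver is illustrative):

```rust
// The by-value constructor needs no closure and no zeroed scratch
// array; the argument array lives on the stack.
#[derive(Debug, PartialEq)]
struct Bgra(u8, u8, u8, u8);

impl Bgra {
    // Standing in for the trait method discussed above.
    fn from_channels(x: [u8; 4]) -> Self {
        Bgra(x[0], x[1], x[2], x[3])
    }
}

fn main() {
    assert_eq!(Bgra::from_channels([1, 2, 3, 4]), Bgra(1, 2, 3, 4));
}
```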


Avoiding heap allocation, of course! :smile: But, using your signature of from_channels, how would I implement a function like:


fn get_first_pixel<T: ByteChannels>(bytes: Bytes) -> T {
    // Get the first `T::width()` bytes from `bytes` and use them to construct `T`.
}

That's the simple case - you could just take a slice as input. But what if the data is planar, or has some other wacky representation?
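To make the planar case concrete: with the closure-based `from_channels` from the first post, the caller can gather non-contiguous channel bytes into the scratch slice. A sketch, with the trait trimmed to the relevant methods and the planar layout and helper name assumed:

```rust
pub trait ByteChannels: Sized {
    fn from_channels<F: FnOnce(&mut [u8])>(f: F) -> Self;
    fn width() -> usize;
}

#[derive(Debug, PartialEq)]
pub struct Bgra(pub u8, pub u8, pub u8, pub u8);

impl ByteChannels for Bgra {
    fn from_channels<F: FnOnce(&mut [u8])>(f: F) -> Self {
        let mut x = [0; 4];
        f(&mut x);
        Bgra(x[0], x[1], x[2], x[3])
    }
    fn width() -> usize {
        4
    }
}

// Planar layout: all bytes of channel 0, then all of channel 1, etc.
// Pixel 0's channels sit `plane_len` bytes apart, so they cannot be
// handed over as one contiguous slice -- the closure gathers them.
fn get_first_pixel<T: ByteChannels>(planes: &[u8], plane_len: usize) -> T {
    T::from_channels(|scratch| {
        for (i, b) in scratch.iter_mut().enumerate() {
            *b = planes[i * plane_len];
        }
    })
}

fn main() {
    // Two BGRA pixels stored planar: B=[10,11], G=[20,21], R=[30,31], A=[40,41].
    let planes = [10u8, 11, 20, 21, 30, 31, 40, 41];
    let px: Bgra = get_first_pixel(&planes, 2);
    assert_eq!(px, Bgra(10, 20, 30, 40));
}
```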

I see, yes, my idea wouldn't work for that, since you want to write a single function that works for any kind of pixel, even with different sizes. However, with a bit of tweaking, you could make it work. You just need a Default constraint, and to add AsMut along with AsRef.

pub trait ByteChannels {
    type Channels: AsRef<[u8]> + AsMut<[u8]> + Default;
    fn from(c: Self::Channels) -> Self;
}

impl ByteChannels for [u8; 4] {
    type Channels = [u8; 4];
    fn from(c: Self::Channels) -> Self {
        c
    }
}

fn tester<T: ByteChannels>() -> T {
    let mut x = T::Channels::default();
    {
        let y = x.as_mut();
        println!("length is {}", y.len());
        y[0] = 1;
        y[2] = 4;
    }
    T::from(x)
}

fn main() {
    println!("{:?}", tester::<[u8; 4]>());
}
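Tying this back to the planar question: the same `Default` + `as_mut` pattern supports a generic gatherer. A sketch with assumed names (`get_first_pixel`, `plane_len`):

```rust
pub trait ByteChannels {
    type Channels: AsRef<[u8]> + AsMut<[u8]> + Default;
    fn from(c: Self::Channels) -> Self;
}

impl ByteChannels for [u8; 4] {
    type Channels = [u8; 4];
    fn from(c: Self::Channels) -> Self {
        c
    }
}

// Same trick as `tester`, but the channels are filled from a planar
// buffer (a pixel's channels sit `plane_len` bytes apart) instead of
// with fixed values. No heap allocation anywhere.
fn get_first_pixel<T: ByteChannels>(planes: &[u8], plane_len: usize) -> T {
    let mut c = T::Channels::default();
    for (i, b) in c.as_mut().iter_mut().enumerate() {
        *b = planes[i * plane_len];
    }
    T::from(c)
}

fn main() {
    let planes = [10u8, 11, 20, 21, 30, 31, 40, 41];
    let px: [u8; 4] = get_first_pixel(&planes, 2);
    assert_eq!(px, [10, 20, 30, 40]);
}
```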

Another idea would be to change to

fn from_channels(x: &[u8]) -> Self {
    Bgra(x[0], x[1], x[2], x[3])
}

This requires copying the pixel (if it is not currently in the right order), but it could be copied to the stack, so it would still not require heap allocation.
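A sketch of that borrowed-slice variant with the stack copy spelled out (planar layout and helper name assumed, as before):

```rust
#[derive(Debug, PartialEq)]
pub struct Bgra(pub u8, pub u8, pub u8, pub u8);

impl Bgra {
    // The borrowed-slice constructor proposed above.
    fn from_channels(x: &[u8]) -> Self {
        Bgra(x[0], x[1], x[2], x[3])
    }
}

fn first_pixel_planar(planes: &[u8], plane_len: usize) -> Bgra {
    let mut scratch = [0u8; 4]; // stack copy, no heap allocation
    for (i, b) in scratch.iter_mut().enumerate() {
        *b = planes[i * plane_len];
    }
    Bgra::from_channels(&scratch)
}

fn main() {
    let planes = [10u8, 11, 20, 21, 30, 31, 40, 41];
    assert_eq!(first_pixel_planar(&planes, 2), Bgra(10, 20, 30, 40));
}
```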


:smile: Thank you so much! I never would've thought that Default was the important trait!
