Even more general pixels


Currently I have a PlanarFrame type made of channels that are each one byte long. But @kornel (correctly) pointed out that this restriction is really limiting, so here’s a replacement for the ByteChannels trait that seems to be more flexible (maybe too flexible?):

/// Indicates a pixel that is composed of several channels.
pub trait Channeled {
    /// The number of channels that the pixel is made of.
    fn channel_count() -> usize;
    /// Width of the nth channel, in bytes.
    fn channel_width(n: usize) -> usize;
    /// Write the nth channel at the start of the given slice.
    /// Should leave the rest of the slice unchanged.
    /// Returns the number of bytes written.
    fn write_channel(&self, n: usize, to: &mut [u8]) -> usize;
}

/// Indicates that any byte sequence that is `mem::size_of::<T>()` bytes long
/// can be transmuted into a valid `T`.
pub unsafe trait AlwaysValid: Copy + Sized {}
  1. Is this too convoluted?
  2. Could this make performance worse? The library is meant to be used to read/write/combine video frames. The write_channel function will be called a lot, almost always in order (from 0 to channel_count).
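To make the questions concrete, here is a minimal sketch of how the trait above might be implemented for a hypothetical 8-bit RGB pixel type (`Rgb8` is an illustrative name, not part of the actual library):

```rust
/// Hypothetical 8-bit RGB pixel, to illustrate one implementation
/// of the `Channeled` trait from the post.
#[derive(Copy, Clone)]
struct Rgb8 {
    r: u8,
    g: u8,
    b: u8,
}

trait Channeled {
    fn channel_count() -> usize;
    fn channel_width(n: usize) -> usize;
    fn write_channel(&self, n: usize, to: &mut [u8]) -> usize;
}

impl Channeled for Rgb8 {
    fn channel_count() -> usize {
        3
    }
    // Every channel of this pixel type is one byte wide.
    fn channel_width(_n: usize) -> usize {
        1
    }
    fn write_channel(&self, n: usize, to: &mut [u8]) -> usize {
        // `to[0]` carries a run-time bounds check, which matters if
        // this is called per pixel per channel in a hot loop.
        to[0] = match n {
            0 => self.r,
            1 => self.g,
            2 => self.b,
            _ => panic!("channel index out of range"),
        };
        1 // bytes written
    }
}

fn main() {
    let px = Rgb8 { r: 10, g: 20, b: 30 };
    let mut buf = [0u8; 3];
    let mut offset = 0;
    // Channels are written in order, 0..channel_count, as described.
    for n in 0..Rgb8::channel_count() {
        offset += px.write_channel(n, &mut buf[offset..]);
    }
    assert_eq!(buf, [10, 20, 30]);
}
```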


Traits can be generic and have associated types, so instead of runtime-checking &mut [u8] it could be:

pub trait Image {
    type Pixel: Copy;
    fn write(pixel: Self::Pixel);
}

The AlwaysValid unsafe trait is weird. You could add a method fn as_bytes(&self) -> &[u8] and let the implementor of the trait worry about whether it’s safe or not.
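A sketch of that alternative, assuming a hypothetical 16-bit grayscale pixel type — the safety argument lives inside the implementor’s impl rather than in an unsafe marker trait:

```rust
/// The implementor provides the byte view and takes on the safety
/// obligation locally, instead of opting into a blanket transmute.
trait AsBytes {
    fn as_bytes(&self) -> &[u8];
}

/// Hypothetical 16-bit grayscale pixel (illustrative, not from the library).
#[derive(Copy, Clone)]
#[repr(C)]
struct Gray16(u16);

impl AsBytes for Gray16 {
    fn as_bytes(&self) -> &[u8] {
        // Sound here because Gray16 is repr(C) with no padding bytes.
        // Note the byte order is the machine's native endianness.
        unsafe {
            std::slice::from_raw_parts(
                self as *const Gray16 as *const u8,
                std::mem::size_of::<Gray16>(),
            )
        }
    }
}

fn main() {
    let g = Gray16(0x0102);
    assert_eq!(g.as_bytes().len(), 2);
}
```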

But overall I’m not sure what this trait is for. The trait implies it’s a generic abstract thing and the underlying type can be anything, but you actually seem to want to read bytes out of it, presumably in some specific layout.

I’m guessing that for video you will end up with very codec-specific requirements regarding the layout of the data in memory. For example, to encode VP9 the image has to have a specific layout.

So merely casting some abstract channel layout to an array of bytes is unlikely to work. If you want to make it possible to build frames without copying the underlying pixels, you can’t use a vague generic type for them. You have to describe your requirements exactly to help users build the appropriate data type.

Consider having a concrete struct like Image and a factory or builder pattern to create it from raw data, e.g. Image::from_planes(y: Plane<u8>, cb: Plane<u8>, cr: Plane<u8>), Image::from_interleaved(pixels: Plane<YUV>), etc. where Plane is a 2d bitmap with width and stride.
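A rough sketch of that builder shape — `Plane`, `Image`, and the validation are all illustrative, not an existing API:

```rust
/// A 2D bitmap with width, height, and stride, as suggested above.
struct Plane<T> {
    data: Vec<T>,
    width: usize,
    height: usize,
    /// Distance in elements between the starts of consecutive rows;
    /// may exceed `width` for alignment padding.
    stride: usize,
}

/// Concrete planar YCbCr image built from raw planes.
struct Image {
    y: Plane<u8>,
    cb: Plane<u8>,
    cr: Plane<u8>,
}

impl Image {
    fn from_planes(y: Plane<u8>, cb: Plane<u8>, cr: Plane<u8>) -> Self {
        // A real implementation would also validate that the chroma
        // plane dimensions match the intended subsampling.
        assert!(y.data.len() >= y.stride * y.height);
        Image { y, cb, cr }
    }
}

fn main() {
    let plane = |w: usize, h: usize| Plane {
        data: vec![0u8; w * h],
        width: w,
        height: h,
        stride: w,
    };
    // 4:2:0-shaped input: chroma planes are half size in each dimension.
    let img = Image::from_planes(plane(4, 4), plane(2, 2), plane(2, 2));
    assert_eq!(img.y.width, 4);
    assert_eq!(img.cb.width, 2);
    let _ = (img.cr.height, img.y.stride);
}
```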


What do you mean by "runtime-checking &mut [u8]"?

Edit: On second thought, this trait you suggested sounds like exactly what I have right now but with bytes instead of channels, so I’ll just change that. Planar frames are more complicated.

Edit 2: Actually, your from_planes function looks like a great idea, with a few edits:

PlanarFrame::new<C>(w: usize, h: usize, planes: C)
    where C: AsRef<[Bytes]>;

PlanarFrame::new(width, height, [ ys, cbs, crs ])
    // where width, height: usize; ys, cbs, crs: Bytes

// Can be cast as a trait object as VideoFrame<Pixel = Yuv> for abstraction.

Is that usable enough?

Edit 3: After actually implementing it, I realized that the implementation would be greatly simplified if it was instead:

PlanarFrame::new<C>(w: usize, h: usize, planes: C)
    where C: AsRef<[(usize, Bytes)]>;

The extra usizes specify the total byte contribution of each plane. And it still doesn’t handle subsampling (other than the aforementioned situation where some planes have more bytes per subpixel than others, which isn’t really subsampling.)


The length of a [u8] slice is unknown at compile time (it could be 1, or 3, or 0, or 10000), so every use of it (to[0]) will incur a bounds check at run time. OTOH if you used a generic destination to: T, the size of that type would be known at compile time, and Rust would generate an optimized version of the code for every pixel size used.
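The difference can be sketched like this — with a fixed-size array (or const-generic) destination, the length is part of the type, so the indexing check can be elided:

```rust
// Slice destination: the length is only known at run time, so the
// copy carries a bounds check (and can panic if `to` is too short).
fn write_slice(px: [u8; 3], to: &mut [u8]) {
    to[..3].copy_from_slice(&px);
}

// Fixed-size destination: the length is part of the type, so the
// compiler statically knows the write fits and needs no check.
fn write_array(px: [u8; 3], to: &mut [u8; 3]) {
    *to = px;
}

fn main() {
    let mut a = [0u8; 3];
    write_array([1, 2, 3], &mut a);
    assert_eq!(a, [1, 2, 3]);

    let mut b = [0u8; 4];
    write_slice([4, 5, 6], &mut b);
    assert_eq!(&b[..3], &[4, 5, 6]);
}
```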

PlanarFrame::new<C>(w: usize, h: usize, planes: C)
    where C: AsRef<[(usize, Bytes)]>;

You need width and stride for every plane. From that you can guess the subsampling. I’d suggest avoiding plain tuples, since it’s unclear from the type definition what that usize is for. Also, due to the orphan rules you’ll have problems implementing traits and methods on them. If you use your own struct for each plane you’ll have room to define stride.
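For illustration, a named descriptor in place of the bare (usize, Bytes) tuple — PlaneDesc is a hypothetical name, and Bytes is stubbed as a Vec<u8> alias here:

```rust
/// Stub for the `Bytes` type used in the post (assumed byte buffer).
type Bytes = Vec<u8>;

/// Named per-plane descriptor: the field names document what the bare
/// `usize` in the tuple meant, and there is room for stride.
struct PlaneDesc {
    /// Bytes per pixel in this plane (the ambiguous tuple `usize`).
    bytes_per_pixel: usize,
    /// Distance in bytes between the starts of consecutive rows.
    stride: usize,
    data: Bytes,
}

fn main() {
    let width = 640;
    let height = 480;
    let y = PlaneDesc {
        bytes_per_pixel: 1,
        stride: width, // no padding in this example
        data: vec![0u8; width * height],
    };
    // The stride must cover at least one full row of pixels.
    assert!(y.stride >= width * y.bytes_per_pixel);
    assert_eq!(y.data.len(), y.stride * height);
}
```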

To define subsampling you may need to have another enum arg or extra options. If you’re going to be converting between subsampled and non-subsampled formats you’ll also need to keep track of the chroma offset and filter type.
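One way to make the subsampling argument explicit — the enum and method names here are illustrative, and chroma offset / filter type are left out of the sketch:

```rust
/// Explicit subsampling mode, instead of inferring it from plane sizes.
#[derive(Copy, Clone, PartialEq, Debug)]
enum Subsampling {
    /// 4:4:4 — chroma at full resolution.
    S444,
    /// 4:2:2 — chroma halved horizontally.
    S422,
    /// 4:2:0 — chroma halved in both dimensions.
    S420,
}

impl Subsampling {
    /// Chroma plane dimensions for a given luma size
    /// (rounding up for odd luma dimensions).
    fn chroma_size(self, w: usize, h: usize) -> (usize, usize) {
        match self {
            Subsampling::S444 => (w, h),
            Subsampling::S422 => ((w + 1) / 2, h),
            Subsampling::S420 => ((w + 1) / 2, (h + 1) / 2),
        }
    }
}

fn main() {
    assert_eq!(Subsampling::S420.chroma_size(1920, 1080), (960, 540));
    assert_eq!(Subsampling::S444.chroma_size(1920, 1080), (1920, 1080));
}
```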