Computer graphics, buffers

For graphics output I have used SDL (version 2) so far. After SDL_RenderPresent the backbuffer must be considered uninitialized again, but I would like to have a persistent buffer. Apparently this can be accomplished by using a texture as the render target and then copying it to the backbuffer at the end.

Actually, it would be nice to have even more control, e.g. to dispense with SDL as a dependency when the image only needs to be written to a file.

For certain purposes I would therefore like to use my own buffer and then copy it to the SDL renderer. My question now is how to do this efficiently. Should I choose RGB pixels, or is RGBA better because of alignment?

Then one has

struct Pixel {r: u8, g: u8, b: u8, a: u8}

or maybe better

struct Pixel {rgba: u32}

The data would have to be packed so that the buffer can be transmuted into a u8 buffer of length width*height*4.
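The packing requirement can be checked at run time. A minimal sketch, assuming the field-struct variant of Pixel from above; #[repr(C)] is what pins down the field order and makes the layout dependable:

```rust
// Sketch of the packed pixel layout. #[repr(C)] fixes the field order,
// so the struct is layout-compatible with four consecutive bytes.
#[repr(C)]
#[derive(Clone, Copy)]
struct Pixel { r: u8, g: u8, b: u8, a: u8 }

fn main() {
    // No padding: a buffer of Pixel occupies exactly width*height*4 bytes.
    assert_eq!(std::mem::size_of::<Pixel>(), 4);
    assert_eq!(std::mem::align_of::<Pixel>(), 1);

    let (width, height) = (640, 480);
    let buffer = vec![Pixel { r: 0, g: 0, b: 0, a: 255 }; width * height];
    assert_eq!(
        buffer.len() * std::mem::size_of::<Pixel>(),
        width * height * 4
    );
    println!("buffer occupies {} bytes", buffer.len() * 4);
}
```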

The buffer would then be wrapped in an SDL_Surface, from which an SDL_Texture is created. The texture appears to reside in GPU memory, so drawing it should be hardware accelerated. Thus the only slow operation is SDL_CreateTextureFromSurface, but this should still be faster than calling SDL_SetRenderDrawColor and SDL_RenderDrawPoint repeatedly.

References:

Using a memcpy will probably be the fastest way to copy pixels (on the CPU). You can use e.g. slice::copy_from_slice() for this.

In general, 4x u8 vs 1x u32 will have the same performance characteristics with memcpy. A lower color depth like 16-bit RGB will be faster just because there will be less data to copy.
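The copy_from_slice suggestion can be illustrated with a row-wise copy into a larger pixel buffer. A minimal sketch; the row index and buffer sizes are arbitrary example values:

```rust
fn main() {
    // Copy one scanline of RGBA bytes into the destination buffer;
    // copy_from_slice compiles down to a memcpy.
    let width = 4;
    let src: Vec<u8> = vec![0xff; width * 4];         // one white RGBA row
    let mut dst: Vec<u8> = vec![0x00; width * 4 * 2]; // two-row destination

    let row = 1; // copy into the second row
    dst[row * width * 4..(row + 1) * width * 4].copy_from_slice(&src);

    assert_eq!(&dst[..width * 4], &vec![0u8; width * 4][..]); // first row untouched
    assert_eq!(&dst[width * 4..], &src[..]);                  // second row copied
    println!("row copied");
}
```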

General discussion

After some pondering I came to the conclusion that it is better to avoid an additional intermediate buffer. The functions that directly depend on SDL are better moved to a separate module for the sake of encapsulation.

Furthermore, this leads to the question to what extent an API can be provided that is agnostic about hardware acceleration. For example, an additional conversion from byte arrays to texture objects needs to be provided for fast rendering of images and glyphs. But then, changing the color means that the GPU must be able to draw a given graymap in a specific color. If a hypothetical GPU does not support this, one must convert back and forth, which may be slow.
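On the CPU, drawing a graymap in a specific color amounts to per-channel modulation, with the gray value acting as intensity (on the GPU side, SDL exposes the analogous operation on textures as SDL_SetTextureColorMod). A sketch; the function name tint is my own:

```rust
// CPU sketch of drawing a graymap in a given color: each gray value
// modulates the color channels, gray/255 acting as intensity.
fn tint(gray: u8, color: [u8; 3]) -> [u8; 3] {
    let g = gray as u16;
    [
        ((color[0] as u16 * g) / 255) as u8,
        ((color[1] as u16 * g) / 255) as u8,
        ((color[2] as u16 * g) / 255) as u8,
    ]
}

fn main() {
    let blue = [0, 0, 255];
    assert_eq!(tint(255, blue), [0, 0, 255]); // full intensity: the color itself
    assert_eq!(tint(0, blue), [0, 0, 0]);     // zero intensity: black
    assert_eq!(tint(128, blue), [0, 0, 128]); // half intensity
    println!("{:?}", tint(128, blue));
}
```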

Ultimately, I encountered emerging problems with varying pixel density, which led to the advent of full-grown vector graphics. This further increases the complexity, especially regarding the rendering of vector fonts.

Regarding the memory copy: SDL_CreateTextureFromSurface does that for us, from CPU to GPU, including format conversion.

Rust-specific

Nevertheless, I would like to know what must be done to make the following transmutations safe. Transmuting plain old data requires layout-compatible structures. Are further declarations required here?

use std::mem::transmute;
use std::slice::from_raw_parts;

// RGBA, assuming little endian
struct Color(u32);

// 0xAABBGGRR, because notation is big endian
const BLUE: Color = Color(0x00ff0000);
const GREEN: Color = Color(0x0000ff00);

fn main() {
    let ca: &[Color] = &[BLUE, GREEN];
    let ua: &[u32] = unsafe { transmute::<&[Color], &[u32]>(ca) };
    let ba: &[u8] = unsafe {
        from_raw_parts(
            transmute::<*const Color, *const u8>(ca.as_ptr()),
            ca.len() * 4,
        )
    };
    println!("{:x?}", ua);
    println!("{:x?}", ba);
}

You need to add #[repr(C)] or #[repr(transparent)] to structs you transmute.

Be careful about alignment. You can convert [u32] → [u8], but it's unsafe to convert [u8] → [u32], because a byte slice need not be 4-byte aligned.

Pointer types can be cast with as, no need for transmute.
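Combining these three answers, the program above might look as follows: #[repr(transparent)] pins Color to the layout of u32, and the pointer transmutes become as casts. The byte values shown assume a little-endian target, as the original comment does:

```rust
use std::slice::from_raw_parts;

// #[repr(transparent)] guarantees Color has exactly the layout of u32.
#[repr(transparent)]
#[derive(Clone, Copy)]
struct Color(u32);

// 0xAABBGGRR, because notation is big endian
const BLUE: Color = Color(0x00ff0000);
const GREEN: Color = Color(0x0000ff00);

fn main() {
    let ca: &[Color] = &[BLUE, GREEN];
    // Pointer casts with `as` replace transmute; the lengths are computed
    // explicitly, and Color → u8 only loosens the alignment requirement.
    let ua: &[u32] = unsafe { from_raw_parts(ca.as_ptr() as *const u32, ca.len()) };
    let ba: &[u8] = unsafe { from_raw_parts(ca.as_ptr() as *const u8, ca.len() * 4) };

    assert_eq!(ua, &[0x00ff0000, 0x0000ff00]);
    // Little-endian byte order: least significant byte (red) first.
    assert_eq!(ba, &[0x00, 0x00, 0xff, 0x00, 0x00, 0xff, 0x00, 0x00]);
    println!("{:x?}", ua);
    println!("{:x?}", ba);
}
```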