I've recently been developing a real-time screen capture program.
In C++, when I capture a window and save the result, the image comes out in RGB format.
I know that RGBQUAD uses BGR order. However, in Rust I have to swap those BGR pixels to RGB in code.
In C++, I didn't write any code for that swap.
But in Rust, even though I use the same logic as the C++ code (ported to Rust), the resulting BMP image is in BGR format. Even when I use a C++ DLL and pass its pixel data to Rust, the pixel data is in BGR format.
Just like C++, the Rust language has no idea what a pixel is, and there's nothing built-in for handling colors. Everything about pixel formats is up to what code you write, which libraries you choose, and how you interact with the system.
If you get different formats in C++ and Rust, it's probably because you're using different libraries (e.g. a C++ library may be swapping channels for you) or interacting with the OS differently (is there a setting for that? does it depend on which graphics context or window you open?).
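For illustration: with tightly packed 24-bit pixels, the entire difference between BGR and RGB is the order of three bytes, so a conversion is just an in-place swap. A minimal sketch (it assumes 3 bytes per pixel and no row padding):

fn bgr_to_rgb_in_place(pixels: &mut [u8]) {
    // Each pixel is [B, G, R]; swapping bytes 0 and 2 yields [R, G, B].
    for px in pixels.chunks_exact_mut(3) {
        px.swap(0, 2);
    }
}

Run this over the raw capture buffer before handing it to any RGB-expecting API, or skip it entirely if the consumer can be told the data is BGR.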
use std::os::raw::c_int;
use std::slice;

use image::{ImageBuffer, Rgb};
use winapi::um::winuser::{GetSystemMetrics, SM_CXSCREEN, SM_CYSCREEN};

// Declaration inferred from the call below; the real signature is whatever the C++ DLL exports.
extern "C" {
    fn captureScreen(width: *mut c_int, height: *mut c_int) -> *mut u8;
}

fn main() {
    let mut width: c_int;
    let mut height: c_int;
    unsafe {
        width = GetSystemMetrics(SM_CXSCREEN);
        height = GetSystemMetrics(SM_CYSCREEN);
    }
    let pixel_ptr = unsafe { captureScreen(&mut width, &mut height) };
    if pixel_ptr.is_null() {
        println!("Failed to capture screen.");
    } else {
        let pixel_len = (width * height * 3) as usize;
        let pixel_slice = unsafe { slice::from_raw_parts(pixel_ptr, pixel_len) };
        let pixel_vec = pixel_slice.to_vec();
        // In the code below, if I want RGB I have to run BGR-to-RGB swap logic over 'pixel_vec' first.
        let image_buffer =
            ImageBuffer::<Rgb<u8>, _>::from_raw(width as u32, height as u32, pixel_vec).unwrap();
        // Just for testing:
        image_buffer.save("e:\\test.bmp").unwrap();
        // jpeg_data comes out as BGR.
        let jpeg_data =
            turbojpeg::compress_image(&image_buffer, 30, turbojpeg::Subsamp::Sub2x2).unwrap();
        // Note: the buffer behind pixel_ptr is owned by the DLL and is not freed here.
    }
}
Also, I've seen many window screen capture libraries on crates.io that apply BGR-to-RGB swap logic internally.
If I rewrite my original C++ logic as a Rust version (using the winapi crate: https://crates.io/crates/winapi), the resulting BMP image is in BGR format too.
(Using only the winapi, image, and turbojpeg crates in Rust.)
[My Rust code snippet - using winapi]
use std::ptr::null_mut;

use image::{ImageBuffer, Rgb};
use winapi::um::wingdi::{
    BitBlt, CreateCompatibleBitmap, CreateCompatibleDC, DeleteDC, DeleteObject, GetDIBits,
    SelectObject, BITMAPINFO, BITMAPINFOHEADER, BI_RGB, DIB_RGB_COLORS, SRCCOPY,
};
use winapi::um::winuser::{
    GetDC, GetDesktopWindow, GetSystemMetrics, ReleaseDC, SM_CXSCREEN, SM_CYSCREEN,
};

let mut pixels: Vec<u8>;
let full_width;
let full_height;
unsafe {
    let desktop_window = GetDesktopWindow();
    let desktop_dc = GetDC(desktop_window);
    full_width = GetSystemMetrics(SM_CXSCREEN);
    full_height = GetSystemMetrics(SM_CYSCREEN);

    // Copy the screen into a compatible bitmap.
    let bitmap_dc = CreateCompatibleDC(null_mut());
    let bitmap = CreateCompatibleBitmap(desktop_dc, full_width, full_height);
    SelectObject(bitmap_dc, bitmap as *mut _);
    BitBlt(
        bitmap_dc,
        0,
        0,
        full_width,
        full_height,
        desktop_dc,
        0,
        0,
        SRCCOPY,
    );

    // Ask GDI for the raw pixels as a 24-bit DIB.
    let mut bitmap_info: BITMAPINFO = std::mem::zeroed();
    bitmap_info.bmiHeader.biSize = std::mem::size_of::<BITMAPINFOHEADER>().try_into().unwrap();
    bitmap_info.bmiHeader.biWidth = full_width;
    // A negative height requests a top-down DIB (first row = top of the screen).
    bitmap_info.bmiHeader.biHeight = -full_height;
    bitmap_info.bmiHeader.biCompression = BI_RGB;
    bitmap_info.bmiHeader.biPlanes = 1;
    bitmap_info.bmiHeader.biBitCount = 24;
    // Note: GetDIBits pads rows to 4-byte boundaries; full_width * 3 is assumed to
    // already be a multiple of 4 here (true for common screen widths).
    pixels = vec![0; (full_width * full_height * 3) as usize];
    let result = GetDIBits(
        bitmap_dc,
        bitmap,
        0,
        full_height as u32,
        pixels.as_mut_ptr() as *mut _,
        &mut bitmap_info,
        DIB_RGB_COLORS,
    );
    if result == 0 {
        panic!("Failed to get bitmap pixel data");
    }
    DeleteDC(bitmap_dc);
    ReleaseDC(desktop_window, desktop_dc);
    DeleteObject(bitmap as *mut _);
}
// Interpret the raw bytes as an image. The buffer holds 3 bytes per pixel, so the
// pixel type must be Rgb (not Rgba, which would fail from_raw's length check); the
// bytes are still in BGR order, which is why the colors come out swapped.
let image_buffer =
    ImageBuffer::<Rgb<u8>, _>::from_raw(full_width as u32, full_height as u32, pixels).unwrap();
let jpeg_data =
    turbojpeg::compress_image(&image_buffer, 100, turbojpeg::Subsamp::Sub2x2).unwrap();
The jpeg_data variable ends up holding BGR pixel data.
I know what you mean. (I know that Windows DIBs use BGR internally, too.)
However, I just wonder why I have to use explicit pixel swap logic in Rust to get RGB pixels. Or did I miss something?
Is it because Microsoft or other programs can quickly obtain RGB results internally, without explicit conversion logic, by employing internal operations like _byteswap*** or the CPU's bswap instruction?
I wonder if there is a solution in Rust that yields RGB without explicit pixel swap logic.
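One way to avoid an explicit swap is to tell the encoder what byte order the data is already in, instead of going through the image crate's RGB-only ImageBuffer. A sketch, assuming the turbojpeg crate's lower-level Image/PixelFormat API (my reading of its docs, so treat the details as unverified), reusing pixels, full_width, and full_height from the winapi snippet above:

let bgr_image = turbojpeg::Image {
    pixels: pixels.as_slice(),
    width: full_width as usize,
    pitch: full_width as usize * 3, // bytes per row; assumes tightly packed rows
    height: full_height as usize,
    format: turbojpeg::PixelFormat::BGR, // declare the buffer as BGR; no swap needed
};
let jpeg_data = turbojpeg::compress(bgr_image, 30, turbojpeg::Subsamp::Sub2x2).unwrap();

Whether libjpeg-turbo swizzles internally or reads the channels in place is its business; either way, no swap loop appears in your code.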
The biggest difference I see between your two versions is the image file output code. In C++, you're using raw file writes to save a BMP file. In Rust, on the other hand, you're using the turbojpeg crate.
Have you tried using std::fs::File to write a BMP file the same way you do it in C++, without routing through ImageBuffer and turbojpeg? I suspect that's where the behavior differences are being introduced.
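For what it's worth, a raw BMP dump in Rust might look roughly like this (a sketch; it assumes a top-down, tightly packed 24-bit BGR buffer where width * 3 is a multiple of 4, since BMP rows must be 4-byte aligned, and write_bmp is just a name I made up):

use std::fs::File;
use std::io::{self, Write};

// Writes tightly packed, top-down 24-bit BGR rows as a BMP file, with no conversion.
fn write_bmp(path: &str, width: i32, height: i32, bgr_pixels: &[u8]) -> io::Result<()> {
    let data_len = bgr_pixels.len() as u32;
    let pixel_data_offset: u32 = 14 + 40; // BITMAPFILEHEADER + BITMAPINFOHEADER
    let mut out = File::create(path)?;
    // BITMAPFILEHEADER (14 bytes)
    out.write_all(b"BM")?;
    out.write_all(&(pixel_data_offset + data_len).to_le_bytes())?; // total file size
    out.write_all(&0u32.to_le_bytes())?; // reserved
    out.write_all(&pixel_data_offset.to_le_bytes())?;
    // BITMAPINFOHEADER (40 bytes)
    out.write_all(&40u32.to_le_bytes())?; // biSize
    out.write_all(&width.to_le_bytes())?; // biWidth
    out.write_all(&(-height).to_le_bytes())?; // negative biHeight = top-down rows
    out.write_all(&1u16.to_le_bytes())?; // biPlanes
    out.write_all(&24u16.to_le_bytes())?; // biBitCount
    out.write_all(&[0u8; 24])?; // biCompression (BI_RGB = 0) through biClrImportant, all zero
    // Pixel data goes out exactly as captured: BGR bytes, untouched.
    out.write_all(bgr_pixels)
}

If write_bmp("e:\\test.bmp", full_width, full_height, &pixels) produces correct colors while the ImageBuffer route does not, that would confirm the difference lies in the interpretation step, not the capture.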
Edit: After a little bit of research, it turns out that Windows' RGBQUAD and RGBTRIPLE types are both defined to be in BGR order, and these types are used for both the output of the screen capture and in the definition of the BMP file format.
So, what's actually happening is that the C++ code gets BGR-ordered data from Windows and then writes it into a BGR-ordered data file format without attempting to do any transformation along the way.
The Rust code is significantly more complicated under the hood, because you're compressing the image into JPEG format instead of simply dumping the screen buffer to disk. This requires actually interpreting the pixel colors, and the image processing crate you've chosen to use expects them in RGB order, which requires a conversion.
As to why it expects RGB instead of BGR, I can only speculate. It likely is written that way to ease the implementation, either because it was originally developed under a different OS which uses RGB natively or because most of the on-disk image formats it works with use RGB order.
I'm not particularly familiar with either C++ or Windows programming, but these sound to me very much like "explicit RGB conversion logic."
Also, all the evidence I've seen in the GDI documentation shows that Windows natively uses BGR everywhere but always calls it RGB. I find it hard to believe that Windows programs wanting actual RGB-ordered data don't have to ask for it specially in some way.