Resource source format (Bgra8Unorm) must match the resolve destination format (Bgra8UnormSrgb)

[2023-04-11T22:39:26Z ERROR wgpu::backend::direct] Handling wgpu errors as fatal by default
thread 'main' panicked at 'wgpu error: Validation Error

Caused by:
    In a RenderPass
      note: encoder = `<CommandBuffer-(0, 2, Vulkan)>`
    In a pass parameter
      note: command buffer = `<CommandBuffer-(0, 2, Vulkan)>`
    resource source format (Bgra8Unorm) must match the resolve destination format (Bgra8UnormSrgb)

I have triple checked. My code does not contain any Bgra8UnormSrgb, though it does contain one Bgra8Unorm, of the form:

    device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
        label: None,
        layout: Some(&pipeline_layout),
        vertex: VertexState {
            module: &vertex_shader_module,
            entry_point: "main",
            buffers: &[vertex_buffer_descriptor],
        },
        primitive: PrimitiveState {
            topology: primitive_topology,
            strip_index_format: None,
            front_face: Default::default(),
            cull_mode: None,
            unclipped_depth: false,
            polygon_mode: Default::default(),
            conservative: false,
        },
        depth_stencil: Some(DepthStencilState {
            format: crate::window::DEPTH_FORMAT,
            depth_write_enabled: true,
            depth_compare: CompareFunction::Less,
            stencil: StencilState {
                front: wgpu::StencilFaceState::IGNORE,
                back: wgpu::StencilFaceState::IGNORE,
                read_mask: 0,
                write_mask: 0,
            },
            bias: Default::default(),
        }),
        multisample: MultisampleState {
            count: crate::window::SAMPLE_COUNT,
            mask: !0,
            alpha_to_coverage_enabled: false,
        },
        fragment: Some(FragmentState {
            module: &fragment_shader_module,
            entry_point: "main",
            targets: &[Some(ColorTargetState {
                format: TextureFormat::Bgra8Unorm,
                blend: Some(BlendState {
                    color: BlendComponent {
                        src_factor: wgpu::BlendFactor::SrcAlpha,
                        dst_factor: wgpu::BlendFactor::OneMinusSrcAlpha,
                        operation: wgpu::BlendOperation::Add,
                    },
                    alpha: BlendComponent {
                        src_factor: wgpu::BlendFactor::One,
                        dst_factor: wgpu::BlendFactor::OneMinusSrcAlpha,
                        operation: wgpu::BlendOperation::Add,
                    },
                }),
                write_mask: wgpu::ColorWrites::ALL,
            })],
        }),
        multiview: None,
    })


  1. Any hints on how to debug this?

  2. What is Srgb, and what is triggering this?


sRGB is a color space, i.e. a rule for mapping stored color values to an "actual" color.

I would guess the rendering target here is configured for Bgra8UnormSrgb, since sRGB is more or less the standard color space for display output. If so, specifying the pipeline's color target format as Bgra8Unorm would cause the mismatch.
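One way to avoid the mismatch entirely (a sketch, assuming a wgpu 0.15-era API and the usual `surface` / `adapter` / `device` / `size` variables from a typical setup, none of which appear in the snippet above): query the surface's supported formats at startup and reuse that single `TextureFormat` for both the surface configuration and the pipeline's color target, instead of hardcoding Bgra8Unorm.

```rust
// Derive the render target format from the surface instead of hardcoding it.
let surface_caps = surface.get_capabilities(&adapter);
// Often Bgra8UnormSrgb on desktop; whatever it is, use it everywhere.
let surface_format = surface_caps.formats[0];

// 1. Use it when configuring the surface…
let config = wgpu::SurfaceConfiguration {
    usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
    format: surface_format,
    width: size.width,
    height: size.height,
    present_mode: surface_caps.present_modes[0],
    alpha_mode: surface_caps.alpha_modes[0],
    view_formats: vec![],
};
surface.configure(&device, &config);

// 2. …and again in the pipeline's color target,
//    replacing the hardcoded TextureFormat::Bgra8Unorm:
let target = wgpu::ColorTargetState {
    format: surface_format,
    blend: Some(wgpu::BlendState::ALPHA_BLENDING),
    write_mask: wgpu::ColorWrites::ALL,
};
```

Since this is a multisampled pipeline, the same rule applies to the MSAA texture you resolve from: it must also be created with `surface_format`.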

I'm still a bit confused about Bgra8UnormSrgb.

From WebGL / OpenGL, I understand and use rgba8 all the time. 4 bytes, one for each of r, g, b, a.

I also get what Bgra8 is. Same thing different order.

I still don't understand what you mean by:

What is the difference between Bgra8UnormSrgb and plain old Rgba8?

The way the value is interpreted. It's kind of like how you can interpret an isize as a usize: they have the same number of bits, but the way the bits are interpreted as a number is different, so they get treated as separate types.
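The analogy can be made concrete with a tiny example (just illustrating the point about reinterpretation, nothing wgpu-specific):

```rust
fn main() {
    // Same bit pattern, two interpretations: -1 as an isize is all ones,
    // which reinterpreted as a usize is the maximum value.
    let signed: isize = -1;
    let unsigned = signed as usize;
    assert_eq!(unsigned, usize::MAX);
    println!("{signed} reinterpreted as usize = {unsigned}");
}
```

Likewise, a Bgra8Unorm texel and a Bgra8UnormSrgb texel are both 4 bytes, but the sRGB variant tells the GPU to apply the sRGB transfer function when reading and writing, so the two are distinct formats.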


I'm not sure how this is happening, but once again, you helped me resolve this bug. For someone who claims not to be a wgpu expert, you have an amazing knack for pinpointing the bug.

Thanks again!

In computer graphics there are two ways to represent brightness of pixels:

  • as an amount of light, which makes sense as a physical property, and is needed for accurate rendering of lights and color blending. This is often called the "linear RGB" or "linear light" color space.

  • or as a model of subjectively perceived brightness, proportional to how humans perceive it. Human perception is weird and non-linear. The most common standard for this is sRGB, and it's almost always what people mean by "RGB" when they don't have a specific color space in mind. sRGB is good for storing colors in 8 bits, so it's usually what you use for final output to the screen.

There's a formula for conversion from sRGB to linear light and vice versa. It's similar to raising pixel values to the power of 2.2 (the "gamma"). So you can't mix sRGB and linear-light buffers: they need to either match or be converted.
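The actual conversion is a piecewise function, not a pure power curve (the standard uses a small linear segment near zero plus a 2.4 exponent, which together approximate gamma 2.2). A self-contained sketch of both directions:

```rust
/// Decode an sRGB-encoded channel value (0.0..=1.0) to linear light.
fn srgb_to_linear(c: f32) -> f32 {
    if c <= 0.04045 {
        c / 12.92
    } else {
        ((c + 0.055) / 1.055).powf(2.4)
    }
}

/// Encode a linear-light channel value (0.0..=1.0) back to sRGB.
fn linear_to_srgb(c: f32) -> f32 {
    if c <= 0.0031308 {
        c * 12.92
    } else {
        1.055 * c.powf(1.0 / 2.4) - 0.055
    }
}

fn main() {
    // sRGB mid-grey (0.5) corresponds to only about 21% linear light,
    // which is why the two encodings can't be mixed freely.
    let lin = srgb_to_linear(0.5);
    println!("sRGB 0.5 -> linear {lin:.4}");
    // The two functions are inverses, so the value round-trips.
    assert!((linear_to_srgb(lin) - 0.5).abs() < 1e-6);
}
```

This is also what an `*Srgb` texture format does for you in hardware: the GPU applies the decode on sampling and the encode on writing, so your shader works in linear light either way.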

