At a high level, WebGL2 : OpenGL :: WebGPU : CUDA, right?
If so, is there a way for WebGPU / wgpu to say to Chrome:
Forget vertex/fragment stages.
Give me the raw canvas, I'll render to it myself.
^-- Are there any examples of WebGPU doing the above? If not, is this an inherent limitation of the API?
WebGL2 is to OpenGL ES 3 what WebGPU is to Vulkan/DirectX 12/Metal. WebGPU is not a GPGPU framework like CUDA. It doesn't even support dedicated compute queues right now, which, as I understand it, means that any compute work will block all rendering on AMD GPUs and will be hard-killed fairly quickly to avoid locking up your system for too long.

You can still use compute shaders on the graphics queue to write to the framebuffer if you want to render without rasterization, or use the old trick of drawing two triangles that together form a screen-filling rectangle.
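As a rough sketch of that second trick in WGSL (entry-point names and the hardcoded 800x600 resolution in the fragment stage are placeholders, not anything from the thread): the vertex stage just emits six clip-space vertices covering the screen, and all the real work happens per pixel in the fragment stage.

```wgsl
// Fullscreen pass: two triangles spanning all of clip space, no vertex buffer needed.
@vertex
fn vs_main(@builtin(vertex_index) i: u32) -> @builtin(position) vec4<f32> {
    var pos = array<vec2<f32>, 6>(
        vec2<f32>(-1.0, -1.0), vec2<f32>( 1.0, -1.0), vec2<f32>(-1.0,  1.0),
        vec2<f32>(-1.0,  1.0), vec2<f32>( 1.0, -1.0), vec2<f32>( 1.0,  1.0),
    );
    return vec4<f32>(pos[i], 0.0, 1.0);
}

// Every pixel of the target is covered, so all "custom rendering" logic goes here.
@fragment
fn fs_main(@builtin(position) p: vec4<f32>) -> @location(0) vec4<f32> {
    // Placeholder: shade by pixel position, assuming an 800x600 target.
    return vec4<f32>(p.x / 800.0, p.y / 600.0, 0.5, 1.0);
}
```

You'd draw this with a normal render pipeline and `draw(6, 1, 0, 0)`; the rasterizer is still involved, but only to invoke your fragment shader once per pixel.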
This is counter to my intuition, but consistent with my inability to find examples online.
So compute shaders are not much more "flexible" than drawing two triangles to cover the entire screen and doing all the work in a fragment shader?
What does "compute" here even offer us? [I've played with some wgpu tutorials, but they have all been of the form "here's how we do rendering with wgpu"; I have yet to see what power/flexibility compute shaders enable.]
A compute shader can write to arbitrary parts of the given buffers/textures, and can thus handle multiple pixels at once or communicate within a single workgroup, while a fragment shader can only write to the specific pixel for which it was invoked and cannot communicate at all. In addition, for a compute shader you can specify how many invocations per workgroup (a group of "threads" that run together and can share memory) and how many workgroups to run, while a fragment shader runs exactly once per pixel (or multiple times in the case of MSAA).

A quick search turned up the following tutorial for compute shaders with wgpu-rs: Computing image filters with wgpu-rs. That one could have been implemented using vertex/fragment shaders. For something more complex, the Image Blur - WebGPU Samples sample is a good example, I think. Each workgroup handles blurring a single block of pixels. It first blurs in one direction, then uses a barrier to wait until the entire workgroup has arrived at that point, and finally blurs in the other direction, without having to wait for the entire image to be blurred in one direction first.
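A stripped-down WGSL sketch of that tile-and-barrier pattern (the binding layout, the `rgba8unorm` format, and the single-row averaging are my own simplifications, not the actual Image Blur sample):

```wgsl
@group(0) @binding(0) var src: texture_2d<f32>;
@group(0) @binding(1) var dst: texture_storage_2d<rgba8unorm, write>;

// Workgroup (shared) memory: one 16x16 tile of pixels, visible to all
// invocations in the workgroup. Fragment shaders have no equivalent.
var<workgroup> tile: array<array<vec3<f32>, 16>, 16>;

@compute @workgroup_size(16, 16)
fn blur_tile(@builtin(local_invocation_id) lid: vec3<u32>,
             @builtin(global_invocation_id) gid: vec3<u32>) {
    // Each invocation loads one pixel of its workgroup's tile.
    tile[lid.y][lid.x] = textureLoad(src, vec2<i32>(gid.xy), 0).rgb;

    // Wait until every invocation in this workgroup has finished loading.
    workgroupBarrier();

    // Crude horizontal blur: average the row this invocation sits in,
    // reading pixels that *other* invocations loaded.
    var sum = vec3<f32>(0.0);
    for (var x = 0u; x < 16u; x = x + 1u) {
        sum = sum + tile[lid.y][x];
    }
    textureStore(dst, vec2<i32>(gid.xy), vec4<f32>(sum / 16.0, 1.0));
}
```

The real sample does a proper weighted blur in one direction, hits another `workgroupBarrier()`, then blurs the other direction within the same dispatch; the point here is just the shared memory plus barrier, which is exactly what a fragment shader cannot express.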