wgpu is an implementation of WebGPU for use outside of a browser, and it also serves as the backend for Firefox's WebGPU implementation. WebGPU allows for more efficient use of modern GPUs than WebGL.
As far as I know, none of the browsers have it enabled by default yet. Firefox, Chrome, and Safari do support it behind a feature flag you can enable in about:config, chrome://flags, and the experimental-features preferences, respectively. Once you enable it, you still need to write code against the WebGPU API instead of the WebGL API to use it.
I think there are plans to let you write WebGPU code with gfx-rs/wgpu, compile it to wasm for the browser, and have it fall back to WebGL as the rendering layer?
I think it's called "Downlevel" support? Doesn't seem workable/stable yet though.
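For what it's worth, the wgpu crate already lets you opt into a GL backend explicitly. A minimal sketch, assuming a ~0.12-era wgpu API (the constructor signature has changed across versions, and on wasm you would also enable the crate's "webgl" cargo feature):

```rust
// Ask wgpu for the OpenGL/GLES backend instead of Vulkan/Metal/DX12.
// On wasm, this path is what lets WebGPU-style code run on top of WebGL.
// (Newer wgpu versions take an InstanceDescriptor here instead of bare bitflags.)
let instance = wgpu::Instance::new(wgpu::Backends::GL);
```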
The XY problem motivating me here is: suppose a client has an Nvidia RTX 3090 Ti Founder Godzilla Edition GPU, and they want to grant me permission to use it. Can I (via wasm/Chrome) get at something more powerful than WebGL 2.0?
I would say yes: once it's "ready" (the spec is still in draft status) and the browsers have implemented the APIs, it will definitely be more powerful than WebGL 2.0.
It's going to reduce CPU overhead by a lot, and you'll be able to do GPU compute using WebGPU.
I don't know what you want to use the extra power for, but WebGPU should be able to use the GPU and CPU more efficiently than WebGL.
I believe WebGL 2.0 gives you: fragment shaders, vertex shaders, and vertex "Transform Feedback" for GPGPU. Assuming security issues can be taken care of, I would like the flexibility of CUDA -- in Chrome/wasm.
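WebGPU's compute shaders are much closer to what you're describing than transform feedback is: arbitrary read/write storage buffers plus explicit dispatches. Here's a hedged sketch of a GPGPU dispatch through the wgpu crate (exact field names and signatures shift between wgpu versions; buffer creation and readback are omitted, and `buffer` is assumed to have STORAGE usage):

```rust
// WGSL compute shader: doubles every element of a storage buffer in parallel.
const SHADER: &str = r#"
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    if (id.x < arrayLength(&data)) {
        data[id.x] = data[id.x] * 2.0;
    }
}
"#;

// `buffer` must be created with wgpu::BufferUsages::STORAGE; `n` is the element count.
fn double_on_gpu(device: &wgpu::Device, queue: &wgpu::Queue, buffer: &wgpu::Buffer, n: u32) {
    let module = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("double"),
        source: wgpu::ShaderSource::Wgsl(SHADER.into()),
    });
    let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
        label: None,
        layout: None, // let wgpu infer the layout from the shader
        module: &module,
        entry_point: "main",
    });
    let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: None,
        layout: &pipeline.get_bind_group_layout(0),
        entries: &[wgpu::BindGroupEntry {
            binding: 0,
            resource: buffer.as_entire_binding(),
        }],
    });
    let mut encoder = device.create_command_encoder(&Default::default());
    {
        let mut pass = encoder.begin_compute_pass(&Default::default());
        pass.set_pipeline(&pipeline);
        pass.set_bind_group(0, &bind_group, &[]);
        pass.dispatch_workgroups((n + 63) / 64, 1, 1); // one invocation per element
    }
    queue.submit(Some(encoder.finish()));
}
```

It's not the full CUDA feature set, but compute passes over storage buffers cover a lot of GPGPU ground that transform feedback can't.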
You can think of WebGPU as something like WebGL 3.0: a better API, more features, better alignment with modern graphics cards, etc.
Similar to WebAssembly, it turns out that by starting fresh and designing from the ground up (taking notes from Metal, Vulkan, etc.), the needs of the web end up pretty similar to desktop needs for many products: you sacrifice a tiny bit of performance and some low-level niche features to get still-very-high performance and fairly low-level "write once, run everywhere". And I get the feeling this wasn't an afterthought; they had it in mind from the early days.
So I don't think it will be uncommon to see WebGPU used outside the web via native libraries. It'll probably be preferred over OpenGL eventually (though maybe not by someone who wants to get down and dirty with Vulkan).
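As a rough illustration of that portability, here's a hedged sketch of wgpu startup (a hypothetical helper using the ~0.12-era API; details vary by version). The same few lines resolve to Vulkan, Metal, DX12, or the browser's own WebGPU depending on where they run:

```rust
// Hypothetical init helper; exact signatures vary across wgpu versions.
async fn init_gpu() -> (wgpu::Device, wgpu::Queue) {
    // Pick any available backend: Vulkan, Metal, DX12, GL, or the browser's WebGPU.
    let instance = wgpu::Instance::new(wgpu::Backends::all());
    // An adapter is a physical GPU (or a software fallback) exposed by some backend.
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .expect("no suitable GPU adapter");
    // A device is the logical handle you create resources on; work is submitted to the queue.
    let (device, queue) = adapter
        .request_device(&wgpu::DeviceDescriptor::default(), None)
        .await
        .expect("failed to acquire device");
    (device, queue)
}
```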
Comparing WebGPU to WebGL isn't like comparing JS to wasm, though: WebGPU isn't inherently faster than WebGL; it depends on how you architect your code and whether you use the new features.
In other words, a "hello world" like drawing a single triangle on the screen probably won't show much of a difference. If you use instancing and aren't changing a bunch of data each tick, you may even get the same performance from both for something as heavy as rendering grass or a tiled map.
I'm not sure a 2D spritesheet game will benefit that much.
But of course, for complex things (real 3D scenes and most games/simulations), WebGPU is designed so you can take advantage of fewer state changes, fewer draw calls, more asynchronous data processing, etc. That can all make a very big difference, or so I hear.
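A hedged sketch of what "fewer state changes, fewer draw calls" looks like in wgpu (the names `encoder`, `pass_descriptor`, `pipeline`, `shared_bind_group`, and `meshes` are placeholders of mine): bind the expensive state once per pass, then issue instanced draws instead of per-object state churn.

```rust
// One render pass, state bound once, instanced draws inside a loop.
let mut pass = encoder.begin_render_pass(&pass_descriptor);
pass.set_pipeline(&pipeline);                    // pipeline state set once
pass.set_bind_group(0, &shared_bind_group, &[]); // shared textures/uniforms set once
for mesh in &meshes {
    pass.set_vertex_buffer(0, mesh.vertices.slice(..));
    // one draw call covers every instance of this mesh
    pass.draw(0..mesh.vertex_count, 0..mesh.instance_count);
}
```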