I still do not understand what https://github.com/gfx-rs/wgpu provides.
In particular, I have the following question:
Suppose my end users are running the latest Chrome, so they have wasm and WebGL enabled.
I am writing a client-side app in Rust compiled to wasm, using WebGL.
Does WebGPU give me any additional power over wasm/WebGL? If so, how do I enable it?
wgpu is an implementation of WebGPU for use outside of a browser, and it serves as the backend for Firefox's WebGPU implementation. WebGPU allows for more efficient use of modern GPUs than WebGL.
As far as I know, none of the browsers have it enabled by default yet. Firefox, Chrome and Safari do support it behind a feature flag, which you can enable in about:config, chrome://flags and the preferences respectively. Once you enable it, you will need to write code against the WebGPU API instead of the WebGL API to use it.
I think there might be plans to let you use gfx-rs/wgpu to write WebGPU code, compile it to wasm for the browser, and then have it use WebGL as the rendering layer?
I think it's called "downlevel" support? It doesn't seem workable/stable yet though.
At least that's what I have gathered.
EDIT: There's also this https://github.com/gfx-rs/gfx/pull/3661
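For what it's worth, if that WebGL fallback does land, I'd expect it to be opt-in via a Cargo feature. Here's a hedged sketch of what that might look like in Cargo.toml — the feature name `webgl` and the version number are assumptions on my part, not taken from a released wgpu, so check the wgpu docs/changelog before relying on them:

```toml
# Hypothetical: opt in to wgpu's WebGL backend for wasm targets.
# Feature name and version are assumptions; verify against the wgpu changelog.
[dependencies]
wgpu = { version = "0.12", features = ["webgl"] }
```

The idea would be that you keep writing against the WebGPU-style wgpu API, and the crate translates your calls to WebGL 2.0 under the hood on browsers that don't have WebGPU yet.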
The XY problem motivating me here is: suppose a client has a Nvidia GTX 3090 Ti Founder Godzilla Edition GPU, and they want to grant me permission to use it. Can I (via wasm/chrome) get something more powerful than WebGL 2.0?
I would say yes: once it's "ready" (the spec is still in draft status) and the browsers have implemented the APIs and so on, it will definitely be more powerful than WebGL 2.0.
It's going to reduce the CPU overhead by a lot, and you'll be able to do GPU Compute using WebGPU.
I don't know what you want to use the extra power for, but WebGPU should be able to use the GPU and CPU more efficiently than WebGL.
I believe WebGL 2.0 gives you: fragment shaders, vertex shaders, and vertex "Transform Feedback" for GPGPU. Assuming security issues can be taken care of, I would like the flexibility of CUDA -- in Chrome/wasm.
I don't know enough to be able to say how good GPU Compute will be compared to CUDA. Maybe someone else does and can give you a better answer.
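To give a concrete flavor of the difference: with transform feedback you have to phrase GPGPU work as a vertex shader, whereas WebGPU gives you real compute shaders with read/write storage buffers. A minimal WGSL sketch (WGSL is still a draft spec, so the exact syntax and the binding layout chosen here may change):

```wgsl
// Doubles every element of a storage buffer in place --
// no vertex/fragment plumbing needed.
@group(0) @binding(0)
var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    let i = id.x;
    // Guard against the last workgroup running past the end of the buffer.
    if (i < arrayLength(&data)) {
        data[i] = data[i] * 2.0;
    }
}
```

You'd dispatch this from Rust through wgpu's compute pipeline and read the results back from the buffer. That's the kind of flexibility CUDA users are after, though it's still a long way from CUDA's full feature set.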
(take the following with a grain of salt)
You can think of WebGPU as something like WebGL 3.0: a better API, more features, better alignment with modern graphics cards, etc.
Similar to WebAssembly, it turns out that by starting fresh and designing from the ground up (taking notes from Metal, Vulkan, etc.), the needs of the web are pretty similar to desktop needs for many products: you sacrifice a teensy bit of performance and low-level niche stuff to gain still-very-high performance and quite low-level "write once, run everywhere". And I get the feeling this isn't totally an afterthought; they had it in mind from the early days.
So I don't think it will be uncommon to see WebGPU used outside of the web via native libraries. It'll probably be preferred over OpenGL eventually (though maybe not if someone wants to get down and dirty with Vulkan).
In terms of WebGPU vs. WebGL, though, it isn't like comparing JS to wasm: WebGPU isn't inherently faster than WebGL. It depends on how you architect your code and whether you use the new features.
In other words, a "hello world" like drawing a single triangle on the screen is probably not that big of a difference. If you use instancing and aren't changing a bunch of data each tick, you may even get the same performance across both for something as heavy as rendering grass or a tiled map of some sort.
Not sure if a 2d spritesheet game will benefit so much.
But of course, for complex things, real 3D scenes, and most games/simulations, WebGPU is designed so you can take advantage of fewer state changes, fewer draw calls, more asynchronous data processing, etc. That can all make a very big difference, or so I hear.