New winit with wgpu

How do I get winit to work with wgpu? The new winit uses an ApplicationHandler implementation, and none of its methods can be async.

impl ApplicationHandler for App {
    fn resumed(&mut self, event_loop: &winit::event_loop::ActiveEventLoop) {
        self.w = Some(event_loop.create_window(Window::default_attributes()).unwrap());
        // error: `await` is only allowed inside `async` functions and blocks
        self.e = Some(Engine::new(&self.w.unwrap()).await);
    }
}

You can use pollster to synchronously block wgpu futures on desktop, but I don't think that works for wasm?

wdym by that? How do I do that?

You install the library I linked and use its API as documented on the futures you're getting from wgpu, instead of awaiting?

Eg

use pollster::FutureExt as _;

...

instance.request_adapter(options).block_on()?;

Was there a more specific question?

Well, borrowed data escapes here.

self.e = Some(Engine::new(self.w.as_ref().unwrap()));

This is actually a separate and much easier problem. What you can do is, put the Window in an Arc and give a clone of that Arc to the Surface construction. Then, you will have a Surface<'static> and no unsatisfiable lifetime.

let window = Arc::new(event_loop.create_window(Window::default_attributes()).unwrap());
self.w = Some(window.clone());
self.e = Some(Engine::new(window).await);

(There's even one less unwrap() this way!)
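
Putting the two parts together, resumed can stay non-async by blocking on the future. Roughly (a sketch; App, its w/e fields, and Engine::new are from your code, and I'm assuming Engine::new accepts an Arc<Window>):

use std::sync::Arc;

use winit::application::ApplicationHandler;
use winit::event_loop::ActiveEventLoop;
use winit::window::Window;

impl ApplicationHandler for App {
    fn resumed(&mut self, event_loop: &ActiveEventLoop) {
        // Arc<Window> gives the wgpu Surface an owned handle to the window,
        // so the Surface can be 'static instead of borrowing from `self`.
        let window = Arc::new(
            event_loop
                .create_window(Window::default_attributes())
                .unwrap(),
        );
        self.w = Some(window.clone());
        // pollster blocks the current thread until the future completes,
        // so resumed() doesn't need to be async (desktop only; wasm can't block).
        self.e = Some(pollster::block_on(Engine::new(window)));
    }

    // ... plus the other required ApplicationHandler methods ...
}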

How do I use an inner_size_writer to get the inner size?

That doesn't seem necessary. You can always ask the Window for its inner size.

What if the event is ScaleFactorChanged? Same thing?

Sure. The only reason inner_size_writer exists is because changing the size at other times is asynchronous, but that's a consideration for writing, not reading.

My own code for handling these events looks like this (and has since well before winit 0.30):

WindowEvent::Resized(physical_size) => {
    dsession.viewport_cell.set(physical_size_to_viewport(
        dsession.window.window.scale_factor(),
        physical_size,
    ));
}
WindowEvent::ScaleFactorChanged {
    scale_factor,
    inner_size_writer: _,
} => dsession.viewport_cell.set(physical_size_to_viewport(
    scale_factor,
    dsession.window.window.inner_size(),
)),

(dsession.viewport_cell is the data that my renderer consults to know what size of image to render next time, and physical_size_to_viewport returns a Viewport type I define that contains both size and scale.)

It's a little silly-looking because each of the two events provides only half the needed data, but it works. It'd also work to handle both events with identical code and just consult the window for both properties instead of using the event data.
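
For example, a sketch of that alternative, reusing the names from the snippet above:

WindowEvent::Resized(_) | WindowEvent::ScaleFactorChanged { .. } => {
    // Ignore the event payloads and ask the window for both properties.
    let window = &dsession.window.window;
    dsession.viewport_cell.set(physical_size_to_viewport(
        window.scale_factor(),
        window.inner_size(),
    ));
}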

Do you multiply the size by the scale factor?

Have you looked at the winit::dpi documentation?

The physical size is the correct size for setting surface/framebuffer dimensions — anything to do with generating pixels to be displayed. The scale factor describes how to adjust the size (in pixels) of objects to give them a (somewhat) consistent on-screen size independent of pixel resolution. The logical size is the physical size divided by the scale factor. So, whether you should multiply or divide depends on what you are calculating.
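
As a concrete (made-up) example of the relationship, using the winit::dpi types:

use winit::dpi::{LogicalSize, PhysicalSize};

let scale_factor = 2.0;
let physical: PhysicalSize<u32> = PhysicalSize::new(1920, 1080); // pixels on screen

// logical = physical / scale_factor
let logical: LogicalSize<f64> = physical.to_logical(scale_factor);
assert_eq!(logical, LogicalSize::new(960.0, 540.0));

// physical = logical * scale_factor
assert_eq!(logical.to_physical::<u32>(scale_factor), physical);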

Is it just a choice to do it like this, or is it wisdom? For some reason the wgpu tutorial says to resize to the physical size for the Resized event, and to inner_size for the ScaleFactorChanged event.

Inner size is reported as a physical size.

  • Inner vs. outer size is what part of the window you're measuring.
  • Physical vs. logical size is what units you're measuring the length in.

I forgot, but what's the use of a unit of measurement in an engine?

Please read the winit::dpi documentation. It explains the two systems.

oooooh. Would this be good?

WindowEvent::Resized(mut phsize) => {
    let size = self.w.as_ref().unwrap().scale_factor();
    phsize.width = (phsize.width as f64 / size) as u32;
    phsize.height = (phsize.height as f64 / size) as u32;
    self.e.as_mut().unwrap().resize(phsize);
},
WindowEvent::ScaleFactorChanged { scale_factor, inner_size_writer } => {
    let mut size = self.w.as_ref().unwrap().inner_size();
    size.width = (size.width as f64 / scale_factor) as u32;
    size.height = (size.height as f64 / scale_factor) as u32;
    self.e.as_mut().unwrap().resize(size);
}

Probably not. As I said before, the physical size is the correct size for setting surface/framebuffer dimensions — for deciding how many pixels you need to render. You need that information for redrawing, so you want to pass that information unaltered to your renderer. You might also want to include the logical size or the scale factor, depending on what you are doing, but you definitely need to pass the physical size.

Also, as a separate issue, dividing a PhysicalSize's fields in-place by the scale factor is a semantic type error. If you want that result, you should be calling PhysicalSize::to_logical() which does the math for you and returns a LogicalSize, the correct type for that value. The point of keeping the types separate is to keep clear which size you have, and avoid bugs resulting from misusing one as the other.
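
Concretely, something closer to this (still a sketch built on your code; whether Engine::resize also wants the scale factor depends on what your Engine does with it):

WindowEvent::Resized(physical_size) => {
    // Pass the physical size through unchanged; that's what the
    // surface/framebuffer dimensions need.
    self.e.as_mut().unwrap().resize(physical_size);
},
WindowEvent::ScaleFactorChanged { scale_factor, .. } => {
    // The window's inner size is also a physical size.
    let physical_size = self.w.as_ref().unwrap().inner_size();
    self.e.as_mut().unwrap().resize(physical_size);

    // If you do need a logical size somewhere, convert it explicitly
    // instead of dividing the fields in place:
    let _logical: winit::dpi::LogicalSize<f64> = physical_size.to_logical(scale_factor);
}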

At what points would I need the scale factor or logical size (one or the other)? Is it something taught with wgpu? Because I'm getting a VkSwapchainPresentScalingCreateInfoEXT error when buffering a physical size for both window events: given the physical size for Resized, and inner_size for ScaleFactorChanged.

At what points would I need the scale factor or logical size (one or the other)?

When deciding how big to draw objects within the window. That's the only time logical size matters.

given the physical size for Resized, and inner_size for ScaleFactorChanged.

These are not different things. The size given to you by Resized is the inner size of the window.
