I can't speak to other platforms, but I can talk about how I'd approach this on Windows; the general shape should be similar elsewhere.
The parts you need are:
- Creating a transparent window for the overlay
- Drawing transparent content into said window (most APIs will fill the entire background as a side effect!)
- Overriding the window hit-test to report the overlay as being entirely hit-transparent, so underlying windows get mouse events
- Hooking raw mouse events, so you can still handle mouse input while your window is entirely hit-transparent
- Dispatching those hooked events into whatever widget/UI library is drawing your UI
So some of this depends on why you need an overlay. For example, if you only need to track events over the whole screen, but draw regular opaque controls that block the mouse as usual, then you don't actually need the transparent window at all: just the hook and a regular window!
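To make the hit-test step concrete, here's a minimal sketch of the `WM_NCHITTEST` branch of a window procedure. The constants are real winuser.h values; the `overlay_is_interactive` toggle and the function shape are my own, purely for illustration.

```rust
// Constants from winuser.h.
const WM_NCHITTEST: u32 = 0x0084;
const HTTRANSPARENT: isize = -1; // "not me, ask the next window down"
const HTCLIENT: isize = 1;

// Sketch of one wndproc branch; a real window procedure would forward
// every other message to DefWindowProcW.
fn hit_test(msg: u32, overlay_is_interactive: bool) -> Option<isize> {
    if msg != WM_NCHITTEST {
        return None; // not a hit-test; let DefWindowProcW handle it
    }
    Some(if overlay_is_interactive {
        HTCLIENT // take mouse events like a normal window
    } else {
        HTTRANSPARENT // fully hit-transparent: clicks fall through
    })
}

fn main() {
    println!("{:?}", hit_test(WM_NCHITTEST, false));
}
```

Note that if you go the `WS_EX_TRANSPARENT` route described below, the style flag already gives you this behavior window-wide; a hit-test override is mainly useful when only parts of the overlay should be click-through.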
In practice, actually implementing this is quite tricky, Rust or not. You might as well use Rust: it's going to be a ton of fiddly code, most libraries out there (windowing, rendering, or GUI) won't work, and at that point you may as well be in a nice language.
If you want to draw to the overlay and still click through, you will need `WS_EX_LAYERED | WS_EX_TRANSPARENT`. The former alone lets you create partially transparent windows "officially", either keying one color as transparent or using an alpha channel, depending on how you call `SetLayeredWindowAttributes()`. Either way, opaque parts of the window (or partially transparent ones, with alpha) receive mouse events, and everything else falls through to the window behind. With `WS_EX_TRANSPARENT` added as well, all mouse events fall through to the window behind, and you never get the event yourself!
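As a sketch of the two `SetLayeredWindowAttributes` modes, here's a pure helper computing the `(crKey, bAlpha, dwFlags)` arguments you'd pass. The flag values are real winuser.h constants; the helper itself is just my own illustration.

```rust
// dwFlags values for SetLayeredWindowAttributes (winuser.h).
const LWA_COLORKEY: u32 = 0x1; // crKey names one color that becomes fully transparent
const LWA_ALPHA: u32 = 0x2;    // bAlpha applies one uniform opacity to the whole window

/// The (crKey, bAlpha, dwFlags) triple for SetLayeredWindowAttributes.
/// Illustrative only: the real call also needs a valid HWND.
fn layered_attributes(color_key: Option<u32>, alpha: Option<u8>) -> (u32, u8, u32) {
    let mut flags = 0;
    if color_key.is_some() {
        flags |= LWA_COLORKEY;
    }
    if alpha.is_some() {
        flags |= LWA_ALPHA;
    }
    (color_key.unwrap_or(0), alpha.unwrap_or(255), flags)
}

fn main() {
    // Magenta (COLORREF 0x00FF00FF) as the classic color key:
    // anything drawn in that exact color vanishes from the window.
    println!("{:?}", layered_attributes(Some(0x00FF_00FF), None));
}
```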
However, this only gets you a transparent window. You still need to receive mouse input and draw transparent content into it, ideally through a windowing library.
To hook mouse input even over other windows, you have two options:
- `WH_MOUSE_LL`: a low-level hook interrupts every process's `GetMessage()` call that is about to return a mouse event, switches context to your app, calls the hook, then returns to the other app. Be very careful to process the event as quickly as possible; ideally post it to a queue and handle it later.
- Raw Input: register a `RAWINPUTDEVICE` with `usUsagePage: HID_USAGE_PAGE_GENERIC, usUsage: HID_USAGE_GENERIC_MOUSE, dwFlags: RIDEV_INPUTSINK, hwndTarget: hwnd`, which delivers `WM_INPUT` messages to `hwnd` that you then need to decode. This is great if you want to know exactly what happened, because the events are completely raw, and terrible if you want to know where the cursor is, for the same reason: you get the location either as a delta from the last event, or as normalized 0..65535 x and y coordinates on the screen (e.g. for pen/touch input?), and mouse acceleration is not applied. That's terrible for a UI trying to match the cursor, but great for game input!
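Either way, the receiving side follows the same pattern: get the event out of the time-critical callback immediately, and turn coordinates into screen positions later on your own thread. A rough sketch, with all the actual `SetWindowsHookExW`/`RegisterRawInputDevices` plumbing omitted and every name below my own invention:

```rust
use std::sync::mpsc;

// RAWMOUSE usFlags value from winuser.h (relative motion is 0x0000).
const MOUSE_MOVE_ABSOLUTE: u16 = 0x0001; // lLastX/lLastY are normalized 0..65535

#[derive(Debug, Clone, Copy, PartialEq)]
struct MouseEvent {
    flags: u16,
    x: i32,
    y: i32,
}

// In the real thing this runs inside the WH_MOUSE_LL hook procedure or the
// WM_INPUT handler: copy the event out and return immediately, never block.
fn on_input_event(tx: &mpsc::Sender<MouseEvent>, ev: MouseEvent) {
    let _ = tx.send(ev);
}

/// Map one normalized absolute coordinate (0..=65535) onto a screen axis,
/// so 0 lands on the near edge and 65535 on the far edge.
fn absolute_to_screen(normalized: i32, screen_extent: i32) -> i32 {
    (normalized as i64 * screen_extent as i64 / 65535) as i32
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Pretend a pen/tablet device reported the horizontal midpoint:
    on_input_event(&tx, MouseEvent { flags: MOUSE_MOVE_ABSOLUTE, x: 32767, y: 0 });

    // UI-thread side, drained at its leisure:
    let ev = rx.recv().unwrap();
    if ev.flags & MOUSE_MOVE_ABSOLUTE != 0 {
        println!("x = {}", absolute_to_screen(ev.x, 1920));
    }
}
```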
To draw transparently to the window, in my experience you have to use Direct3D or a layering API like Direct2D; GDI, OpenGL and Vulkan can't write the alpha channel (fairly arbitrarily, since they all understand alpha).
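One alpha-related gotcha worth knowing if you go the D3D/D2D route: the compositing path generally expects premultiplied alpha (e.g. `DXGI_ALPHA_MODE_PREMULTIPLIED` on a swapchain), so straight RGBA content has to be converted first. A tiny helper of my own as a sketch:

```rust
/// Convert straight (non-premultiplied) RGBA to premultiplied RGBA, with
/// every channel in 0..=255; the color channels get scaled by the alpha.
fn premultiply(r: u8, g: u8, b: u8, a: u8) -> (u8, u8, u8, u8) {
    // (c * a + 127) / 255 is an integer divide-by-255 with rounding.
    let mul = |c: u8| ((c as u16 * a as u16 + 127) / 255) as u8;
    (mul(r), mul(g), mul(b), a)
}

fn main() {
    // 50%-opaque pure red: red gets halved, alpha stays as-is.
    println!("{:?}", premultiply(255, 0, 0, 128));
}
```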
egui would in theory work, but you'd have to do something like this, replacing the winit platform so the backend uses your custom window: egui_example/main.rs at master · hasenbanck/egui_example · GitHub
I've got some of this going, but no actual rendered output just yet. Hopefully this gets you at least partway there for now; I'll probably keep poking at this over the weekend.