Current state of GUI development in Rust?

I think it was brought up several times, but is there any progress on layout solvers? (e.g. see yoga) For bonus points it would be nice to have "compile-time layouts", i.e. you provide layout restrictions at compile time and the library precomputes how elements should react to a resized window, instead of traversing the whole layout tree.

1 Like

Not an expert in the field, but Flutter's approach to layout seems very interesting: Flutter's Rendering Pipeline - YouTube

It's not really a layout algorithm in the sense that it is not some kind of mathematical constraint-solving framework like cassowary. It's more like a programming API that you get if you use "single linear tree traversal" as your main design constraint. However, I can see how things like flexbox or grid layout could easily be implemented on top of it.
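To make that concrete, here's a rough sketch of that single-pass "constraints go down, sizes come up" traversal in Rust. Constraints, Size, Widget and Column are just names I made up for illustration; this is not taken from Flutter or from any existing crate.

```rust
// Sketch only: Flutter-style single-pass layout. Each node receives
// constraints from its parent, lays out its children, and reports its
// own size back up, all in one traversal.
#[derive(Clone, Copy)]
struct Constraints { max_width: f32, max_height: f32 }

#[derive(Clone, Copy)]
struct Size { width: f32, height: f32 }

trait Widget {
    fn layout(&mut self, constraints: Constraints) -> Size;
}

// A column stacks its children vertically.
struct Column { children: Vec<Box<dyn Widget>>, size: Size }

impl Widget for Column {
    fn layout(&mut self, constraints: Constraints) -> Size {
        let mut used_height = 0.0f32;
        let mut max_child_width = 0.0f32;
        for child in &mut self.children {
            // Pass the remaining space down as the child's constraints.
            let child_size = child.layout(Constraints {
                max_width: constraints.max_width,
                max_height: constraints.max_height - used_height,
            });
            used_height += child_size.height;
            max_child_width = max_child_width.max(child_size.width);
        }
        // Report the accumulated size back up to the parent.
        self.size = Size { width: max_child_width, height: used_height };
        self.size
    }
}
```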

2 Likes

Is it possible to implement a GUI like Flutter in Rust? The framework is built with a lot of inheritance, and Rust does not support inheritance.

3 Likes

Rust is Turing complete, so it's possible to do anything in it. The pertinent question is whether you can do it efficiently.

1 Like

One alternative to the event-loop-and-callback-soup (aka retained-mode GUI) is the immediate-mode GUI. There are plenty of other resources describing the paradigm, and of course Rust bindings for dear-imgui exist. Here's a popular one: imgui.

1 Like

I'm not sure there's really as much difference between the two, in ways that actually matter, as people think. Also, immediate-mode GUIs certainly still use event loops, quite transparently at that.

Apart from the obvious point that there must be a loop unless your application exits immediately, I was pointing out event loops in the sense of something like Windows, where you call GetMessage, DispatchMessage, and friends, and they do a bunch of black-box things just to ultimately call back into your code via the UI element's window procedure. Or how about HTML, where even the loop is hidden away from the developer? Again, the loop will eventually call back into your code to handle interactions.
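For contrast, here's a rough sketch of what that callback style looks like from the application's side. Button, on_click and run_event_loop are made-up names standing in for whatever a retained-mode toolkit provides; this is not a real API.

```rust
// Sketch only: a hypothetical retained-mode toolkit, not a real API. You
// register callbacks up front; the toolkit's loop (hidden in a real toolkit,
// shortened here) dispatches events back into your code, much like a
// window procedure does.
enum Event {
    Clicked(usize), // index of the widget that was clicked
    Quit,
}

struct Button {
    label: String,
    on_click: Box<dyn FnMut()>,
}

struct Ui {
    buttons: Vec<Button>,
}

impl Ui {
    // Stand-in for the GetMessage/DispatchMessage style loop the toolkit runs.
    fn run_event_loop(&mut self, events: Vec<Event>) {
        for event in events {
            match event {
                Event::Clicked(index) => (self.buttons[index].on_click)(),
                Event::Quit => break,
            }
        }
    }
}

fn main() {
    let mut ui = Ui {
        buttons: vec![Button {
            label: "Save".to_string(),
            // The application only ever sees this callback being invoked.
            on_click: Box::new(|| println!("save clicked")),
        }],
    };
    // In a real toolkit the events come from the OS; here they're canned.
    ui.run_event_loop(vec![Event::Clicked(0), Event::Quit]);
}
```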

With an immediate-mode GUI, there are no callbacks (unless you implement them for some reason, maybe a good reason at that), and you implement the main loop yourself. The major difference is that handling UI interactions takes the form of checking the return value of the function that displays the element, for example fn do_button(text: &str, x: u32, y: u32) -> bool. If it returns true, the button is being clicked, so perform some action.
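A minimal sketch of that pattern, with do_button stubbed out (again, this is not any particular crate's API):

```rust
// Sketch only: do_button is a made-up stub. In a real immediate-mode library
// it would draw the button and hit-test it against the current mouse state.
fn do_button(text: &str, x: u32, y: u32) -> bool {
    let _ = (text, x, y);
    false
}

fn main() {
    let mut counter = 0;
    // The application owns the main loop; a few frames here so the sketch ends.
    for _frame in 0..3 {
        // ... gather input, begin the frame ...
        if do_button("Increment", 10, 10) {
            counter += 1; // react to the click right where the button is drawn
        }
        if do_button("Quit", 10, 50) {
            break;
        }
        // ... draw the rest of the UI, present the frame ...
    }
    println!("counter = {}", counter);
}
```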

This approach has the downside that it introduces a frame of latency for every user interaction, and the only way to fix that is to run the entire render function again immediately when there is a user input.

I don't buy this explanation. The example do_button function can draw the button in the "pressed" state when it returns true. There is no 1-frame latency or redraw necessary.

Suppose there's some state x: i32, and if do_button returns true, the render function increments x. Any part of the UI that depends on x and is drawn earlier in the render function than the call to do_button will have been drawn using a stale value of x.
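To make the ordering issue concrete, here's a small sketch with a hypothetical do_label next to the do_button stub:

```rust
// Sketch only: do_label/do_button are stubs illustrating the ordering issue.
// The label is drawn before the button is processed, so on the frame of the
// click it still shows the old value of x; the new value appears a frame later.
fn do_label(text: &str, x: u32, y: u32) {
    let _ = (text, x, y);
}

fn do_button(text: &str, x: u32, y: u32) -> bool {
    let _ = (text, x, y);
    false
}

fn render(x: &mut i32) {
    do_label(&format!("x = {}", x), 10, 10); // drawn with the pre-click value of x
    if do_button("Increment", 10, 50) {
        *x += 1; // the label above won't reflect this until the next frame
    }
}

fn main() {
    let mut x = 0;
    render(&mut x);
}
```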

That sounds like a pretty standard state management/ordering concern, in a similar vein to rendering translucent triangles last when a depth buffer is used. You have the same issue if you choose to draw the GUI last and expect that button to add an object to the scene.

I'm just pointing out an inherent limitation in the immediate-mode event handling pattern that retained-mode event handling is not subject to. In your example, if you keep layout information around after drawing the GUI, you can handle events before drawing either the scene or the GUI, which eliminates the frame of latency.

I'm aware that retained mode doesn't have the same considerations for ordering and state management. What I'm describing is that, since the developer is drawing the GUI, they should be aware of how state flows through it and be able to order their draw calls appropriately. A developer may not always be aware of how all state flows through the application, though, and more complex scenarios make it more difficult to keep latency ideal.

My point is there is no inherent latency added by immediate mode GUI, contrary to what you wrote earlier:

Since changing the order of operations also avoids adding a frame of latency, rendering twice is definitely not the only fix.

Reordering your operations is not always possible. Separating update and render into two phases is, and comprehensively fixes the problem.
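A minimal sketch of what that update/render split can look like, with made-up State and Event types; the point is only that all input is applied to the state before anything is drawn:

```rust
// Sketch only: the update/render split. All events are applied to the state
// first, then the whole UI is drawn from the now-current state.
struct State { counter: i32 }

enum Event { IncrementClicked }

fn update(state: &mut State, events: &[Event]) {
    for event in events {
        match event {
            Event::IncrementClicked => state.counter += 1,
        }
    }
}

fn render(state: &State) {
    // Every widget sees the post-update state, so nothing is a frame behind.
    println!("counter = {}", state.counter);
}

fn main() {
    let mut state = State { counter: 0 };
    // One "frame": events collected since the last frame, then update, then render.
    let events = vec![Event::IncrementClicked];
    update(&mut state, &events);
    render(&state);
}
```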

I agree, with the caveat that it adds other problems. The concept of HMGUI was mentioned earlier in this thread as an attempt to address the problems with both IMGUI and RMGUI.

I am using:
web view (HTML5 + JS + CSS3) <-----> websocket and minihttp <-----> Rust host in the background

The advantages:
1. HTML5 + CSS3 is expressive: you can make a cool GUI with canvas and CSS3 animations.
2. Data transfer between Rust and the client is fast, by keeping three websocket connections open in the background.
3. The final single-binary app is only 1.5 MB (all resources are zipped into the Rust source code and unzipped in RAM when the app starts up).
4. GUI events are handled by JS.
5. The UI can be laid out and debugged using a web browser.
6. The websocket side and the host side communicate with JSON (rough sketch below).

The disadvantages:
1. On Windows 7, only IE10+ (including IE10) supports websockets.
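Roughly, the host side looks something like this. This is a simplified sketch, not my exact code: it assumes the tungstenite crate (older 0.x API with write_message) and serde_json, and "set_label" is just an example command name.

```rust
// Sketch only: accept WebSocket connections from the web view and push JSON
// messages describing UI updates. Assumes tungstenite (older 0.x API) and
// serde_json; the "set_label" command is made up for illustration.
use std::net::TcpListener;
use tungstenite::{accept, Message};

fn main() {
    // The JS side opens a WebSocket to this address from the web view.
    let listener = TcpListener::bind("127.0.0.1:9001").expect("bind failed");
    for stream in listener.incoming() {
        let stream = stream.expect("accept failed");
        let mut ws = accept(stream).expect("websocket handshake failed");
        // Send one JSON command to the client; a real app would keep the
        // connection open and exchange messages in both directions.
        let payload = serde_json::json!({ "cmd": "set_label", "text": "hello from rust" });
        ws.write_message(Message::Text(payload.to_string()))
            .expect("send failed");
    }
}
```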

1 Like

For those still interested in this topic, please take a look at areweguiyet

12 Likes

Rust with gtk-rs, along with gstreamer-rs, is working fine for me for writing Linux GTK+/GStreamer desktop applications. D with GtkD is also very good, though: in some ways better, in some ways not so good.

2 Likes

I really find I need a GUI library for my purposes, and I have been watching every thread about UI development in Rust with great anticipation.

I have tinkered with gtk-rs, writing a custom browser which interfaces with a RESTful backend for an asset management prototype. I definitely feel that it worked pretty well. Most of my problems had to do with issues in GTK itself: lack of documentation for GTK3 when doing more advanced things, missing functionality in table views, really weird limitations with tree views, etc.

I think that the authors are doing a very good job, and it is the most mature of the frameworks that I have played with, although I have encountered some performance issues when resizing windows.

I have also played with Azul, and I enjoyed the API, but it had definite issues with laying out components, non-uniform scaling, etc.

I really like the feel of Cursive (a TUI library), which relies on channels to handle events. But it's a TUI, not a GUI.

I also liked Conrod, although I was a bit confused by the lack of a fully abstracted backend, as well as the very limited default primitives (no shadows, single border color, no ramps, etc.; it seems like it would be a lot of work to customize the widgets to make them look professional). It appears to be written with a very particular UI style in mind, as evidenced by the main author's UI work, and it's not really styled for the desktop.

I ultimately miss Qt widgets, though. A lot of effort has gone into Qt, and many graphics-intensive applications are written using it (Maya, Houdini, Nuke, Katana, etc.).

7 Likes

What problems did you have with Azul specifically? Layout is still a bit problematic with regard to laying out text; otherwise it should technically work. What do you mean by "non-uniform scaling" - do you mean the HiDPI factor?