Collecting Requirements for Rust GUI Library

I agree, and I agree here too:

"However, I'd be wary of shooting yourself in the foot with something like this. TypeScript or JavaScript allows for the direct usage of an enormous amount of libraries. Something different will create a niche language that has a catch-22 issue. I think TypeScript would be a happy medium between full Rust semantics and Wild Wild Web javscript."

And of course static and dynamic construction are critical. I'd say internationalization is too, whether it comes by adoption or by collaboration.

Here's a thing. I don't see a possible world where TypeScript, Electron, Node.js (i.e. libraries generally) are not ALSO implemented and integrated with Rust as crates and traits etc. TypeScript is declarative, right?

So from a certain view the problem reduces. Not only are controls very well defined, but so are layouts (e.g. Material Design), where you might choose strategies the way you would choose hashing algorithms. Which implies a standards body.
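To make that concrete, here is a minimal, entirely hypothetical sketch (all names invented) of what "layouts as interchangeable strategies" could look like in Rust; the trait is the kind of interface a standards body could pin down:

```rust
/// Hypothetical: the available space a parent grants its children.
struct Constraints {
    max_width: f32,
}

/// Hypothetical: where a child ends up.
struct Rect {
    x: f32,
    y: f32,
    w: f32,
    h: f32,
}

/// A layout strategy turns available space plus child sizes into positions,
/// chosen the way you would choose a hashing algorithm.
trait LayoutStrategy {
    fn layout(&self, bounds: &Constraints, children: &[(f32, f32)]) -> Vec<Rect>;
}

/// One interchangeable strategy: stack children top to bottom.
struct VerticalStack;

impl LayoutStrategy for VerticalStack {
    fn layout(&self, bounds: &Constraints, children: &[(f32, f32)]) -> Vec<Rect> {
        let mut y = 0.0;
        children
            .iter()
            .map(|&(w, h)| {
                let rect = Rect { x: 0.0, y, w: w.min(bounds.max_width), h };
                y += h;
                rect
            })
            .collect()
    }
}
```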

I would add that you are really picking the common denominators, and doing a bad job of allowing extension (e.g. ActiveX or property sheets in Windows) is fatal.

Further, there is an argument for an internal, common, or intermediate form (depending), which is rational if you gain significantly by it.

Regardless, efficiency always matters. Look at rendering a control, not counting it. So it depends again. Lifetimes are just as relevant. GC bad. Bad GC!

And really, every core feature of Rust, be it types, Options, Results, or ownership, is no less relevant to controls.

Going the standards groups route helps avoid internal conflicts without blocking innovation.

Well. That's what Mom told me anyway.

I'm interested in what drives you to this conclusion. I've futzed around and never been happy with fluent-style UI declaration. It has seemed too inflexible to change every time I've tried it, but that may be a failure on my part to 'get it'.
I've definitely been more happy with hot-reloadable, declarative markup (like XAML) bound to strongly-typed code. Faster iteration times and extensibility are two big wins I've seen there, but that's just my experience.
On the flip side, I haven't been pleased with the rough edges of the many solutions that use HTML/CSS (either written by Rust code or by the programmer) atop some web renderer to avoid writing a new one - it brings too much baggage, and too many (if familiar) limitations.

Mainly, usage of multiple different XML-like UI languages over the years, including XAML, XUL, Glade, HTML, and my own custom UI libraries built with XML/XSLT, versus things like Swing and SWT and 4GL-type languages like Progress. I've come to the conclusion that all of the XML UI languages are really just piss-poor LISPs that use angle brackets and double-quotes instead of parentheses and single-quotes/back-ticks. In other words, functional declarations, which fluent APIs express perfectly.
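For illustration, a minimal, entirely hypothetical sketch of such a fluent declaration in Rust (invented API, not any existing crate):

```rust
/// Hypothetical widget node; real libraries would be far richer.
#[derive(Debug)]
struct Widget {
    kind: &'static str,
    label: String,
    children: Vec<Widget>,
}

impl Widget {
    fn new(kind: &'static str) -> Self {
        Widget { kind, label: String::new(), children: Vec::new() }
    }
    // Each method consumes and returns self; that is what makes it fluent.
    fn label(mut self, text: &str) -> Self {
        self.label = text.into();
        self
    }
    fn child(mut self, child: Widget) -> Self {
        self.children.push(child);
        self
    }
}

fn main() {
    // The equivalent of <window label="Demo"><button label="OK"/></window>,
    // expressed as a functional declaration instead of angle brackets.
    let ui = Widget::new("window")
        .label("Demo")
        .child(Widget::new("button").label("OK"));
    println!("{:?}", ui);
}
```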

Fluent APIs also have the advantage of excluding the so-called UI/UX "designers" that can't be bothered to actually learn anything about how a UI is actually used by people who have to get actual work done. Double-Win!

2 Likes

I thought that the current pragmatic consensus/conclusion was to go for Electron + Neon?

https://gitlab.com/z0mbie42/rust_gui_ecosystem_overview#electron-neon

Nice thread!

Electron adds a few hundred MBs on disk and GBs in RAM (depending on the platform). I don't know whether that's a great solution for UI in general…

For my project, I looked into electron and had to dismiss it, because it doesn't play well with other controls in the same window. I'm going for CEF for that reason.

1 Like

$.02 - there are two very different ways of understanding "for the web"

  1. HTML/CSS: I'm playing around right now with having Rust/WASM send UI state as pure data, while all the rendering is done via lit-html (no vdom, the entire DOM is re-rendered efficiently every tick or state change) ... I've only got a little proof-of-concept right now, but it works fine; see the sketch after this list. It would be nice to ditch lit-html and just do a similar thing in Rust via some sort of macro that worked like template literals. However - as much as I'd love to have this part done in Rust for performance reasons and type safety, it's really not so bad at all to keep it in JS. HTML is, after all, a markup language.

  2. WebGL/Canvas: Totally different ballgame, and the underlying problems aren't web-specific; it's all pure coordinate calculations and the like. There are lots of "gotchas" when trying to actually use it on the web (e.g. how do you load and get the size of images - better to use the web API), but the layout engine itself should be totally agnostic. My concern here is that unless the app is mostly taking place in this engine (like a line-of-business app that is mostly UI), this layout engine would need to somehow connect with a different engine... it could get hairy to have both of them fighting over control of the canvas... (a non-issue if they are independent layers, though, like a HUD on top of a game)
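To illustrate the first option, here is a minimal, hypothetical sketch (invented names; assumes the wasm-bindgen, serde, and serde_json crates) of Rust/WASM owning the state and shipping it to JS as pure data:

```rust
use std::cell::RefCell;

use serde::Serialize;
use wasm_bindgen::prelude::*;

/// The entire UI state as plain, serializable data.
#[derive(Serialize)]
struct UiState {
    count: u32,
    status: String,
}

thread_local! {
    static STATE: RefCell<UiState> = RefCell::new(UiState {
        count: 0,
        status: "ready".into(),
    });
}

/// JS calls this on every user event, then re-renders the whole view
/// from the returned JSON (e.g. via lit-html, with no vdom in between).
#[wasm_bindgen]
pub fn dispatch(event: &str) -> String {
    STATE.with(|state| {
        let mut state = state.borrow_mut();
        match event {
            "increment" => state.count += 1,
            other => state.status = format!("unknown event: {}", other),
        }
        serde_json::to_string(&*state).expect("state serializes")
    })
}
```

On the JS side this boils down to something like `render(template(JSON.parse(state)), root)` on every dispatch.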

Your second option would also mean that you'd have to recreate every control. While this could work in theory, getting things like scrollbars, text fields, and menus working just as well as native controls is very, very hard and can take years on its own. Just look at attempts like Java Swing to see how this can go wrong.

Having something completely alien to the native system works in games, because they're fullscreen, a small world of their own, and don't need features that people take for granted in desktop software, like a clipboard. It doesn't work for productivity software.

2 Likes

This conclusion completely ignores the myriad of games that render their UI directly on the GPU in the same manner. Let alone practically every digital audio and video editor, 3D modeling tool, CAD software, photo editing and painting apps. It's difficult to find any example that a) doesn't have application-specific widgets beyond what the system GUI toolkit provides and b) doesn't just ship coordinates to the GPU to render these custom widgets.

When your application has interface demands that require going above and beyond "purely native", you basically end up doing the raw pixel-pushing that WebGL/canvas does. There are a lot of libraries out there that can help avoid reinventing the wheel in many cases, but it would be silly to dismiss the approach entirely as impractical.

1 Like

I even explicitly mentioned games, how can you call that ignoring it?

Specialized tools like 3D modelling applications are also different in that you're expected to spend a while learning these specific interfaces before you can use them. In addition to that, most complaints about Blender, for example, are about its arcane and alien interface that doesn't fit into any system. And that comes after they spent decades working on their UI controls.

So, it depends on your target market. If you're only writing software that doesn't need more than a bunch of buttons (like games), go ahead and just write your own GUI using gfx-hal or equivalent. If your target audience is fine with suffering through your UI, because they're forced to by their employer or because they absolutely have to get that work done, go ahead and spend a few years writing your custom UI widgets. However, if you want a working UI that people can use without a two-week introductory course and a support forum with full-time staff, you have to use native controls.

Just look at Unity3D. They're only now starting to transition from the first type of interface to the second type I mentioned, and that's after, I think, 8 years of writing code for it.

1 Like

Don't forget absolutely every app built on Electron, WebView, and friends. They all have completely custom UI elements using a mishmash of HTML, CSS, and JavaScript. Often barely resembling a traditional GUI.

I'm no programmer of GUIs and certainly have no expertise in GUI libraries but as a user...

What's all this about "native" controls? Generally they are pretty ugly and often out of place in an application.

For example: I spend my day in VS Code. With its nice dark theme, its explorer view on the left, it opens up nice-looking pages when I install extensions, with buttons and the like.

But then, when I want to open a file, boom!, it shatters the tranquility by smashing me in the face with a god-ugly Windows file selection dialog which looks totally out of place. Jarring and irritating. Not connected to VS Code at all. The illusion is shattered.

Other example: Windows 10. There are so many GUI styles going on there it's nuts. Which one of those is "native"?

What would be cool is to use a web renderer. I imagine creating something like Electron but using Rust in place of Node.js for the "application" side of things.

Imagine React done in Rust rather than JavaScript. We could have RSX instead of JSX :)

Which would then also be useful in creating actual web applications in Rust served up as WASM.

Got to be a lot simpler and more generally useful than trying to recreate Qt/GTK in Rust.

Not everything is about looks, though. File chooser dialogs are a pretty good example, since the "system native" dialog (whatever that might mean here) hopefully has my bookmarks ready and usable, which can be a huge usability boon.

1 Like

Chromium and thus CEF/Electron use native controls where appropriate (like scrollbars). As I said, it's quite easy to implement a button, and nearly every web page does that in some way, but nobody does custom scroll views (except restyling the existing scrollbars).

Actually, I have implemented my own scroll view on a web page, because the one provided by CSS just doesn't work for the use case. I spent about a month on that project, and it still sucks when interacting with trackpads.

Another project of mine uses the menus provided by Material Lite, and it still hurts every time I have to use them, because they're that bad (for example, they're part of the same container as the button that opens them, and they expand the container's size even when hidden. They also can't extend beyond the browser window).

So, I'm not talking out of my ass there, this is a problem I've been struggling with for years.

I'm willing to concede that I used "native" as a shorthand there. Getting controls to behave smoothly, just as a user expects, is hard and can take decades to get right. If you're willing to invest that time, they're just as good as the native ones to me. For example, Blender's UI is getting there, even though it's nowhere near native on anything.

And yes, Windows 10 is a mess. They have multiple teams working on multiple UIs in parallel that don't talk to each other. This is one of the main reasons Mac users look down on Windows.

However, when you're using the apps, they have at least two things in common: First, they're internally consistent within the same application. Second, you can feel that somebody spent a looong time getting them to behave in a usable manner (which is quite fuzzy, I know — that's one of the reasons this is so hard to get right).

I have built a handful of applications, starting with Visual Basic and Java applets. I have been working on a React app for the past two years.

In my view, so much of a UI is dealing with state in ways that are truly unique to what a UI requires. I love working with types in Rust and Haskell; that said, I also really like working with React because we use it in conjunction with Redux. With the recent advent of hooks, I believe the combined use of these technologies provides the most elegant way of managing state in a UI context.

The framework does a great job of having us distinguish when we are engaging:

  • pure-functions (input determines output for all of the possible inputs)
  • effects where the return value depends on some hidden/broader access to state
  • effects that mutate state (e.g., may not return anything :: input -> void)
    ... in addition to distinguishing when we are engaging a sync versus async effect.

Understanding the above has been critical to sharpening the intent in our designs and the consequences of our choices, and to producing concrete implementations that proceed in a more robust, streamlined manner (this often means understanding which edge cases we need to consider to ensure we are defining a complete function, i.e., one that handles the complete domain of inputs).

Unlike the backend code, in the frontend UI we often don't have control over the sequence of events... the user does (this is at least generally true). In my experience, ad-hoc code (code with lots of if-else statements) is not a sustainable way to manage chaos. The long-term approach involves managing chaos with order. This is where the react-redux pairing has taken the UI implementation to a whole new, sustainable level.

There is some boilerplate, on which I would comment: boilerplate is not always a bad thing. In fact, given what the real challenge is, managing chaos (user-directed changes in state and user-directed commands to compute), the boilerplate is a welcome attribute. Hear me out: what boilerplate exists is more a welcome rhythm that clearly articulates our intent and our understanding of how user interaction impacts state in the context of all that is going on in the app (a rhythm to manage chaos). An accountant uses a ledger to track money; entries are made in several locations (e.g., cash flow, income statement, and balance sheet) in a repeatable, rote manner that ensures the balance sheet... balances. Said differently, communication by repetition is not always a bad thing.

All this to say, there is a lot to steal from what has made the react-redux combination so pleasant to work with. The framework augments my understanding of the task; it does not get in the way of accomplishing the task.
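As a gesture toward what "stealing" that shape might look like in Rust, here is a minimal, entirely hypothetical sketch (invented names, no framework):

```rust
/// The whole UI state in one place, like a Redux store.
#[derive(Debug)]
struct State {
    count: i64,
}

/// Every user-directed event is a variant; nothing ad-hoc.
enum Action {
    Increment,
    Decrement,
    Reset,
}

/// Pure function: (state, action) -> new state. No hidden access, no mutation.
fn reduce(state: &State, action: &Action) -> State {
    match action {
        Action::Increment => State { count: state.count + 1 },
        Action::Decrement => State { count: state.count - 1 },
        Action::Reset => State { count: 0 },
    }
}

/// Rendering gets read-only access, like props flowing down to children.
fn render(state: &State) {
    println!("count = {}", state.count);
}

fn main() {
    let mut state = State { count: 0 };
    // The user controls the sequence; the reducer keeps it orderly.
    for action in [Action::Increment, Action::Increment, Action::Reset] {
        state = reduce(&state, &action);
        render(&state);
    }
}
```

The boilerplate here (an enum variant and a match arm per event) is exactly the welcome, ledger-like rhythm described above.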

In summary

A successful UI framework may benefit by considering the following:

  1. UI rendering in a standard and flexible way is key; something like React components go a long way here
  2. UI introduces specific challenges in how to manage state - chaotic in that we don't have control, the user does. Redux forces order in how we define the UI state-machine:
    • think and manage information in a strict sequence (user event -> dispatch -> update state -> display/render state using child components with props that accomplish read-only access to state)
    • be more thoughtful and articulate in how we implement our state machine as complexity increases over time.
  3. Types are useful à la TypeScript, but likely even more so with Rust.
  4. Embrace the benefits of functional programming:
    • declarative code to address chaotic UI state (imperative code can become a slave to state, which only exacerbates the chaos)
    • explicit use of pure functions versus functions with effects (a useful inventory, decomposition, and compartmentalization of the chaos)
    • composable (re-use of code to address a wide range of chaotic states; express an infinite set of states with combinations of reusable code/components)
    • associative (be sequence-independent to manage sequence-dependent state machines)
  5. Carve out specific UI state-management use-cases for when to exploit the benefits of Rust's OO and imperative attributes.

I hope this helps scope what the specification might want to include in a next generation UI library for Rust.

- E

Yeah, this is where the style of the Yew Rust library is quite nice. It's state handling first and foremost. The GUI part could easily be something other than HTML.
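As a hypothetical sketch of that idea (invented names, not Yew's actual API): if state handling comes first, rendering is just a trait the state machine drives, so the GUI part really could be something other than HTML:

```rust
/// Anything that can display the UI; HTML is just one implementation.
trait Backend {
    fn draw_text(&mut self, text: &str);
}

struct HtmlBackend;
impl Backend for HtmlBackend {
    fn draw_text(&mut self, text: &str) {
        println!("<p>{}</p>", text); // stand-in for real DOM updates
    }
}

struct TerminalBackend;
impl Backend for TerminalBackend {
    fn draw_text(&mut self, text: &str) {
        println!("{}", text);
    }
}

/// The view is written once, against the trait rather than against HTML.
fn view(count: i64, backend: &mut dyn Backend) {
    backend.draw_text(&format!("count = {}", count));
}

fn main() {
    view(1, &mut HtmlBackend);
    view(1, &mut TerminalBackend);
}
```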
