Stabilization of the allocator trait would make it much easier to write performant applications that need specialized allocators for different purposes. All the collections in std already support custom allocators, but only on nightly. So right now there are only two options: use nightly Rust (not a solution for libraries), or fork an existing struct, strip out all the nightly-feature code except for the allocator feature, and then add custom allocator trait support yourself.
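For contrast, the only fully stable knob today is the process-wide global allocator; per-collection allocators (`Vec::new_in` and friends) still need nightly's `allocator_api`. A minimal sketch of the stable route, with a made-up `CountingAlloc` for illustration:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical counting allocator: the one allocator you CAN swap on stable,
// but it applies to the whole process, not to individual collections.
struct CountingAlloc;

static ALLOC_CALLS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOC_CALLS.fetch_add(1, Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    let before = ALLOC_CALLS.load(Ordering::Relaxed);
    let v: Vec<u64> = (0..100).collect(); // heap allocation goes through CountingAlloc
    assert!(ALLOC_CALLS.load(Ordering::Relaxed) > before);
    drop(v);
    // What stable Rust can't express yet:
    //     let v = Vec::new_in(&my_arena);   // needs nightly's allocator_api
}
```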
At least to me the killer app for this would be debugging generic code without having to sprinkle Debug bounds everywhere, maybe one of the primary cases where parametricity just isn't worth it. But of course also optimizations like dispatching to a faster implementation if some trait is implemented (as quinedot said, as a form of specialization). And because there's actually no dynamic dispatch involved, there's no performance penalty.
I must confess that, despite some C++ experience in the past, recent versions of the language are hard for me to read and look complicated. On the other hand, I've seen C++ programmers commenting that Rust was hard to read in comparison.
Maybe it's partly due to past choices, but I haven't followed its evolution the last 10-20 years closely enough to have a reliable opinion.
I don't think you can compare the situation of Rust and C++, though. I'm not convinced that standards and joint technical committees are the best way of designing or steering a language. Look at what happened to ALGOL 60. Even when a committee only takes over after the language's initial design, its members are often academics or companies that sometimes try to push features for their own advantage or fame, and sometimes unhealthy compromises must be made so that everyone is satisfied and the new standard can finally be signed off. I hope Rust will never have to suffer from a similar process or from the influence of big companies (there is some amount of influence, but it doesn't seem negative so far).
So I think the evolution of a language is a difficult balancing act between innovation and steadiness, while avoiding making the whole system seize up or creating factions.
I've personally been getting radicalized against macros, to the point where I have now arrived at the controversial position that declarative macros 2.0 should not get stabilized, and that the macro keyword should be repurposed for a more comptime-style, reflection-based metaprogramming system.
Care must be taken to make the features flow coherently together. C++ doesn't really do that, they just hack stuff together. Rust may also be on that path, with warts like async fn, special syntax for Fn* traits, and return type notation.
But you still need to check the instance type at runtime, so:
- one more memory access
- no inlining possible
Does it really differ from dynamic dispatch in terms of performance?
The check will be eliminated, since it's a check against a constant; then any dynamic dispatch will be eliminated too (again, it's basically const-propagation). So overall it turns into a normal function call, which can be inlined and readily optimized.
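A rough sketch of why such a check is free after monomorphization (the `Sum` trait and both types here are hypothetical, just to show the shape of the pattern):

```rust
// Each implementor reports, as an associated const, whether a fast path exists.
trait Sum {
    const HAS_FAST_PATH: bool;
    fn sum_slow(&self) -> u64;
    fn sum_fast(&self) -> u64 {
        self.sum_slow()
    }
}

struct PlainVec(Vec<u64>);
impl Sum for PlainVec {
    const HAS_FAST_PATH: bool = false;
    fn sum_slow(&self) -> u64 {
        self.0.iter().sum()
    }
}

// A type that caches its sum, so its "fast path" is just a field read.
struct CachedSum {
    total: u64,
}
impl Sum for CachedSum {
    const HAS_FAST_PATH: bool = true;
    fn sum_slow(&self) -> u64 {
        self.total
    }
    fn sum_fast(&self) -> u64 {
        self.total
    }
}

fn total<T: Sum>(x: &T) -> u64 {
    // After monomorphization T::HAS_FAST_PATH is a literal true/false, so
    // const-prop deletes the dead branch; what remains is a direct,
    // inlinable call -- no runtime type check, no vtable.
    if T::HAS_FAST_PATH { x.sum_fast() } else { x.sum_slow() }
}

fn main() {
    assert_eq!(total(&PlainVec(vec![1, 2, 3])), 6);
    assert_eq!(total(&CachedSum { total: 6 }), 6);
}
```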
Specialization.
Ok, I got it, thanks. I thought this function accepts any reference and then checks the underlying type at runtime, but if this macro really accepts a closed set of references {Trait, SubTrait} and checks it at compile time, then it is, indeed, cool.
Yes, and other standards like CORBA or I would claim JavaScript. I mean JS never needed a "class" keyword.
I feel we do need some kind of standard at some point. Not necessarily an ISO standard. After all, we want the up-and-coming GCC Rust to be fully compatible with what we have now. And perhaps others in the future. And things will naturally change over time.
What we need is for all such future enhancements to be guided (restricted) by some underlying principles: some kind of statement about what the language is supposed to be and what it is not.
C++ sort of had that. Unfortunately the overriding principles were compatibility with its C origins and permanent compatibility with whatever existed in the past. That meant C++ has had to add features in ever more twisted and contorted ways.
C, on the other hand, has kept its head. It seems to have drawn a circle around what it should be and, as a consequence, has changed very little.
If you haven't read it yet, you might be interested in Mara's post on the matter. It was posted three years ago, but it's still relevant.
Maybe something to add to the 2026 wishlist, together with the wish that more people can help with that.
Oh I remember Mara's post. A great analysis.
Back in my C days I was a die-hard standards guy. My reasoning was that I work in industry, industry needs standards, ergo I don't want to start relying on things that don't have a decent recognised standard. By analogy with standards for threads on nuts and bolts and so on. So: ANSI C good, your language of the week bad.
But then, after years of watching the ongoing train wreck that is the C++ standards process, and seeing what happened to CORBA and others, I started to doubt the value of this standards idea.
Then came Rust. "Oh sod it, this is so good who cares about standards" I thought. Which of course is how C got started in the first place! Despite the existing standards for ALGOL, CORAL etc.
One way to look at it is that standards are supposed to document best practices in use. Not dictate them.
JavaScript (ECMA-262) source code input => idiomatic Rust source code output.
Not really a "wish". Just think it would be interesting. Should be possible: Bytecode Alliance's Javy relies on QuickJS (Rust crate) to convert JavaScript input to WASM output.
Why: Rust syntax, for me, is non-trivial to just figure out what's going on.
Javy runs the QuickJS runtime as WebAssembly, allowing you to execute JavaScript on said runtime. It doesn't transpile JS to Rust.
It doesn't transpile JS to Rust.
Yeah. Thus the "wish". Though personally I don't entertain wishes, or hopes.
The "wish", as it were, for me, is more about somehow getting a roadmap of the arcane Rust syntax...
try_as_dyn takes the argument as plain normal &T. There cannot be any runtime dynamic underlying type; the concrete type of any T it's invoked for is known to the compiler at monomorphization time. What the function does is check (at compile time) if T implements Trait, and if so, coerces the argument to dyn Trait. No runtime checking needed.
Of course T could itself be some dyn OtherTrait… except that doesn't compile, due to the implicit T: Sized bound (dyn types are not Sized). So only concrete types are accepted.
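On stable, a related trick ("autoref specialization", known from dtolnay's write-up) gets you the same compile-time, zero-overhead trait check, as long as the concrete type is visible at the call site (i.e. inside a macro, not inside a generic fn). A sketch, with the `Wrap`/`ViaDebug`/`ViaFallback` names made up for illustration:

```rust
use std::fmt::Debug;

// Wrapper so we can attach two method candidates at different autoref depths.
struct Wrap<T>(T);

trait ViaDebug {
    fn describe(&self) -> String;
}
// Preferred candidate: its receiver type is &Wrap<T>, which method
// resolution tries first -- but it only applies when T: Debug.
impl<T: Debug> ViaDebug for Wrap<T> {
    fn describe(&self) -> String {
        format!("{:?}", self.0)
    }
}

trait ViaFallback {
    fn describe(&self) -> String;
}
// Fallback candidate: its receiver type is &&Wrap<T>, one auto-ref further
// down the candidate list, so it loses whenever ViaDebug applies.
impl<T> ViaFallback for &Wrap<T> {
    fn describe(&self) -> String {
        "<not Debug>".to_string()
    }
}

fn main() {
    struct Opaque; // deliberately has no Debug impl
    // Method resolution happens per concrete type, entirely at compile time:
    assert_eq!((&Wrap(42)).describe(), "42");
    assert_eq!((&Wrap(Opaque)).describe(), "<not Debug>");
}
```

Like try_as_dyn, there's no runtime check: which impl wins is decided during type checking, and the chosen call is a plain static one.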
I'm curious why that is. I understand the issues with decl macros well (I ought to, I use them a lot), and agree that when misused they lead both to arcane, unmaintainable code, as well as a poor end user experience. However, I feel they still have loads of applicability and usefulness, and I would like to see at least some of the biggest issues get resolved via decl macros v2.
Why does no one talk about asynchronous destructors?
I want to be able to use the ? operator in my async functions without worrying about allocated resources.
In Go there is defer, in Python there is try/finally, but in Rust we are forced to structure async code in a very specific and often complex way to achieve the same guarantees.
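To make the gap concrete, here's a minimal runnable sketch (`Conn`, `fallible_step`, and the toy `block_on` executor are all made up for illustration; Drop can run arbitrary synchronous code, but there is no way to `.await` inside it):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Tiny single-future executor so the sketch runs without an async runtime.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn clone(_: *const ()) -> RawWaker {
        raw()
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);

    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// Hypothetical resource whose cleanup is itself async, so Drop can't do it.
struct Conn;
impl Conn {
    async fn close(self) { /* pretend: graceful network shutdown */ }
}

async fn fallible_step(fail: bool) -> Result<u32, String> {
    if fail { Err("boom".into()) } else { Ok(7) }
}

async fn do_work(fail: bool) -> Result<u32, String> {
    let conn = Conn;
    // With async Drop (or defer / try-finally) this would just be:
    //     let n = fallible_step(fail).await?;
    // Without it, `?` would skip `close().await`, so every exit path
    // has to spell out the cleanup by hand:
    match fallible_step(fail).await {
        Ok(n) => {
            conn.close().await;
            Ok(n)
        }
        Err(e) => {
            conn.close().await;
            Err(e)
        }
    }
}

fn main() {
    assert_eq!(block_on(do_work(false)), Ok(7));
    assert!(block_on(do_work(true)).is_err());
}
```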
I like to keep my dependencies at current releases. I wish I could keep up with the churn.
Dynamic module loading at run time. Although it would make Rust look more like Java.