Recently I wrote this piece on how to achieve the behaviors of inheritance through alternative methods in Rust, and it occurred to me that it would be really useful to be able to restrict all the variants of an enum with trait bounds and, at the same time, implicitly implement those traits for the enum itself. That would guarantee that every variant implements the methods in the traits, so those methods could be called on the enum without writing match expressions. Including a variant that does not implement the required traits would result in a compiler error. This would be a zero-cost abstraction that does not change how enums perform at runtime.
For example, this boilerplate code could be omitted from my aforementioned piece if Rust had this feature:
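(A minimal sketch of the pattern, with illustrative trait and type names rather than the exact code from the piece: a trait, the concrete types implementing it, an enum wrapping them, and the hand-written impl that forwards every call through a match.)

trait Speak {
    fn speak(&self) -> String;
}

struct Dog;
struct Cat;

impl Speak for Dog {
    fn speak(&self) -> String {
        "woof".to_string()
    }
}

impl Speak for Cat {
    fn speak(&self) -> String {
        "meow".to_string()
    }
}

enum Animal {
    Dog(Dog),
    Cat(Cat),
}

// The boilerplate in question: every trait method has to be forwarded
// to every variant by hand.
impl Speak for Animal {
    fn speak(&self) -> String {
        match self {
            Animal::Dog(d) => d.speak(),
            Animal::Cat(c) => c.speak(),
        }
    }
}

With the feature I'm describing, the enum would declare the trait bound on its variants and that last impl block would be generated for free.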
await!() was a macro, and IMO we were better off with it. The current syntax that conflates waiting with field access is a bad design choice.
The whole async-await mechanism can be emulated with future combinators, but the language designers thought that making asynchrony look like straight procedural control flow was a great idea. I disagree with that too, but it's at least a sizeable chunk of the language that will be universally applicable.
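For example, both of these express the same computation (a rough sketch, assuming the futures 0.3 crate for the combinators and executor); the combinator version chains transformations explicitly, while the async block reads like straight-line code:

use futures::executor::block_on;
use futures::future::{self, FutureExt};

fn main() {
    // Combinator style: explicit chaining of transformations.
    let combinators = future::ready(2).map(|n| n + 1).map(|n| n * 10);

    // async/await style: the same thing, written as procedural code.
    let procedural = async {
        let n = future::ready(2).await;
        let n = n + 1;
        n * 10
    };

    assert_eq!(block_on(combinators), 30);
    assert_eq!(block_on(procedural), 30);
}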
In comparison, the enum dispatch thing has much more limited use, and thus doesn't IMO warrant changing the core language.
(And it's not quite as easy as it sounds, because the question of which traits such a type would implement gets complicated quickly. Which is probably the biggest reason I don't think it'll happen any time soon, if ever.)
No. await! lied, and did things that a normal macro -- even a proc macro -- can't do.
And a note on my philosophy behind this: algebraic sum types, in general, seem like they should have some way of specifying commonalities between the variants to avoid spelling them out for each variant. TypeScript lets you do this: as long as a method exists on all of the types in a union type, you can call that method out of the box with no additional syntax. Rust would require a way to specify what the commonalities are, and elsewhere in the language this is done with trait bounds. It makes sense.
To illustrate the use case, Chris Biscardi discusses at multiple points in this video choosing between a trait and an enum for sub-typing purposes. This, to me, really feels like it ought to be a false dichotomy; you should not have to choose. One should be able to define both: traits for the common behavior, and enums to store the implementing types under a single banner. But marrying the two runs into the issue raised in my post here: there is currently no way to bound the variants of an enum to be compliant with one or more traits.
The trait-object-in-an-enum approach is an anti-pattern, a code smell. If you want to allow an open set of arbitrary types anyway, then just use a trait object (or generics) in the first place.
If you wanted to specify the number of things in some list, you wouldn't do the following:
enum Count {
    Zero,
    One,
    Two,
    More(usize),
}
you would just use a usize in the first place. Similarly, if you need maximal flexibility and dynamic dispatch, you should switch to trait objects instead of locking an arbitrary subset of types into an enum; you'll only incur additional overhead and complicate the code for no real benefit.
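A rough sketch of that open-set version, with illustrative names:

trait Shape {
    fn area(&self) -> f64;
}

struct Circle {
    radius: f64,
}

struct Square {
    side: f64,
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

impl Shape for Square {
    fn area(&self) -> f64 {
        self.side * self.side
    }
}

fn main() {
    // An open set: new Shape implementations can be added anywhere,
    // without editing an enum definition.
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Circle { radius: 1.0 }),
        Box::new(Square { side: 2.0 }),
    ];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    println!("total area: {total}");
}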
If your use case is "but I need to treat these 3 types specially", then you are holding it wrong. Explicitly switching on the dynamic type of values that can already perform dynamic calls was a mistake in mainstream OO languages with inheritance, too. If you need "special" behavior, it should just be part of the implementation of the trait for those specific types (or maybe even of another, more restricted trait).
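A sketch of what I mean by a more restricted trait, again with made-up names:

trait Render {
    fn render(&self) -> String;
}

// The "special" capability lives in its own, narrower trait instead of
// a type check at the call site.
trait RenderCached: Render {
    fn render_cached(&self) -> String;
}

struct Plain;
struct Fancy;

impl Render for Plain {
    fn render(&self) -> String {
        "plain".to_string()
    }
}

impl Render for Fancy {
    fn render(&self) -> String {
        "fancy".to_string()
    }
}

// Only the types that actually support the special behavior opt in.
impl RenderCached for Fancy {
    fn render_cached(&self) -> String {
        self.render()
    }
}

// Callers that need the special behavior ask for the narrower trait,
// rather than downcasting and matching on concrete types.
fn draw_special(r: &dyn RenderCached) -> String {
    r.render_cached()
}

fn main() {
    println!("{}", Plain.render());
    println!("{}", draw_special(&Fancy));
}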