Rust isn't C++, and that's okay

(This post got far too long and in depth, so I pulled it out into its own thread.)

The direct equivalent of the C++ would be

// this doesn't quite work as-is right now,
// but it is close to working via multiple potential avenues.

pub trait Midpoint {
    fn midpoint(a: Self, b: Self) -> Self;
}

pub fn midpoint<T: Midpoint>(a: T, b: T) -> T {
    <T as Midpoint>::midpoint(a, b)
}

impl<Tp> Midpoint for Tp
where
    Tp: Integral,
{
    fn midpoint(a: Tp, b: Tp) -> Tp {
        // `let`, not `const`: a const item can't refer to the generic parameter Tp
        let bitshift: u32 = Tp::BITS - 1;

        let diff = b.wrapping_sub(a);
        let sign_bit = Tp::try_from((b < a) as u8).unwrap();

        let half_diff = (diff / (Tp::ONE + Tp::ONE))
            + (sign_bit << bitshift)
            + (sign_bit & diff);

        a.wrapping_add(half_diff)
    }
}

impl<Fp> Midpoint for Fp
where
    Fp: Floating,
{
    fn midpoint(a: Fp, b: Fp) -> Fp {
        // MIN_POSITIVE, i.e. C++'s numeric_limits::min(): smallest positive normal
        let lo = Fp::MIN_POSITIVE * Fp::from(2_i8);
        let hi = Fp::MAX / Fp::from(2_i8);
        if a.abs() <= hi && b.abs() <= hi {
            // typical case: overflow is impossible
            (a + b) / Fp::from(2_i8) // always correctly rounded
        } else if a.abs() < lo {
            a + (b / Fp::from(2_i8)) // not safe to halve a
        } else if b.abs() < lo {
            (a / Fp::from(2_i8)) + b // not safe to halve b
        } else {
            (a / Fp::from(2_i8)) + (b / Fp::from(2_i8))
        }
    }
}
(I'm using the funty traits[1] here. Using num-traits would be an incorrect port, as the C++ midpoint is only provided for primitive arithmetic types other than bool. Also, I'm only providing the arithmetic overload, not the pointer overload; since Rust doesn't have function overloading, that'd be a separate (and unsafe) function.)

The problem with this implementation is that we don't define what should happen if there is some T: Integral + Floating. C++ just doesn't care and is happy to defer any errors until you happen to provide such a type. Rust does care and wants you to resolve this ambiguity up front. The current limitation of Rust's traits is that there's no way to resolve this overlap without falling back to listing the closed set of non-overlapping impls explicitly. There are, however, multiple semi-planned future extensions which would permit this overlap to be resolved[2]. Resolving the overlap for actually open traits is a difficult problem (see: the problems with specialization), and one which C++ can barely be said to provide a solution for[3], even though C++ avoids the soundness issue Rust's specialization has, since C++ doesn't have type-level lifetimes.

But to repeat what's been said, C++ templates are closer to Rust macros than Rust generics. There's some extra magic such that C++ templates are automatically and only instantiated for types on an as needed basis, but by writing a Rust macro which stamps out a trait implementation for multiple types you're doing exactly what C++ templates do, not "cheating" in any way. When there's a closed set of implementing types, preinstantiating the templates can even be preferable; C++ has explicit template instantiation for this exact reason.

Don't think of Rust macros, when used to generate trait implementations, as a more powerful/principled version of preprocessor macros; think of them as a more powerful version of extern template. Because in this role, that's exactly the function they're serving.

If you want to be pedantic about it, Rust's function generics aren't even really attempting to be an alternative to C++ function templates; they're much closer in design to the use of pure virtual (interface) inheritance, except beefed up and able to work with instances by value without slicing by virtue of being monomorphic instead of polymorphic. C++ function templates are much more analogous to Rust's function macros, except for two main limitations[4] (no access to argument types by name and no automatic deduplication of same-type instantiation).

Rust's macros serve multiple purposes with one (perhaps overly) general functionality, just like C++'s templates do. They're significantly powerful, and avoiding them definitely should be preferred when practical, but they're still a first-class feature that can and should be utilized to great effect. That a Rust macro is used isn't immediately an indication of a failure of the rest of the language to offer a reasonable solution, nor of the author for not finding a "better" approach[5]; sometimes a simple syntactical templating of some code is exactly the solution the problem space is asking for.

To tie this (strenuously) back to the original topic (Rust having strong "if it compiles, it probably works as intended" vibes), this is a primary component[6] of why Rust has this feeling more than C++ does.

Needless pettiness and pedantry

This gets into the two axes of type safety: static versus dynamic typing, and strict versus weak typing. Both Rust and C++ are statically typed languages[7]. However, C++ is generally[8] more weakly typed, and Rust is generally[9] more strictly typed.

C++'s most understood source of "weak" typing is the availability[10] of implicit type conversions/coercions. But perhaps even more importantly, C++ templates are at most weakly statically typed.

The most "fun" issue in C++ code I ever had to deal with[11] was due to templates' weak typing, where what was supposed to be a template specialization wasn't actually getting used and instead was falling back to a SFINAE-style fallback implementation because of a trivial but hard-to-spot mistake making template substitution always fail.

C++ templates are still statically typed, since they will prevent compilation from succeeding if a substitution creates a type/compilation error (but not for substitution failure, of course), but they're weakly typed (at least by my categorization; strong definitions are highly contentious) because they're not possible to check until resolving a fully concrete instantiation[12].

Furthermore, the dialect of C++ where you use auto (almost) everywhere is effectively just a bad dynamically typed language. You're giving up the actually nice conveniences that dynamically typed languages are able to provide in order to get statically typed AOT optimizations. In exchange, you have to deal with the compiler providing obtuse error backtraces that describe how something went wrong (some compile error in the implementation of a deep stack of SFINAE-selected templates) rather than why something went wrong (a template argument fails to model the required concepts[13]).

Dynamic versus static is typically discussed as a binary, but there's more of a gradient where I identify four meaningful points to raise an error, though they're not quite strictly ordered:

  • Generic; errors are detected based just on the source as written, even if it's completely unused. (Nongeneric code in a statically typed language is trivially here, as are Rust generics.)
  • Instantiation; errors are detected as a result of instantiation of generic code with concrete types. (C++ templates and Rust macros fall under this category.)
  • Codegen; errors which occur late enough that optimization can eliminate them, like linker errors. (Rust const panics are currently (but questionably) here. C++ doesn't really have this as a concept; any program with such an error would be ill-formed, no diagnostic required.)
  • Runtime; errors which don't prevent the execution of code which "happens before" it.

Ignoring the problem child of codegen class errors, you can see it as compiletime (static) versus runtime (dynamic) errors, but with compiletime errors further refined into typesystem-compiletime (generic) and typesystem-runtime (instantiation). The thought of static typing is that by pushing errors sooner on this gradient, they can be preemptively detected and prevented. The thought of dynamic typing is that by pushing errors later on this gradient, more information is known and can be used to permit more code that avoids the error case dynamically. These different priorities map just as well onto the generic/instantiation split as they do the compiletime/runtime split.

Dynamic practices work great in smallish scopes / for smallish projects, because it's possible to hold the entire (or at least most of) the project's requirements in your head at once, and template instantiation depth (including auto prompted implicit templates) is typically rather low. But projects don't stay small for long (and even if they do, it's likely for you or someone else to come back for maintenance long after that context has paged out), and the required context just continues to grow. Especially when a component has generics/templates at its interface, it becomes increasingly difficult to constrain that needed context and keep instantiation depth shallow.

If I'm being extra petty, I can draw a line between C++'s template behavior and C's namespaceless ODR. In pre-ANSI C, C didn't even have struct field namespacing, and every field name was required to be different from every other symbol (type, function, or field); a field name was basically just an offset and pointer->field just an access at that offset independent from whatever pointer happened to be. (Pre-ANSI, C essentially was just a fancy single-pass macro assembler, before it evolved into being a "proper" language of its own.) C++ templates follow a very similar philosophy; while overloading, templating, virtual, and the various name lookup rules permit a name foo to refer to multiple concrete items, templates make an assumption that foo (at least when used in a given syntactic manner) always refers to the same semantic. E.g. that if a class has a size member, the class models some sort of container, and size is invocable on a const object of that class with no arguments and no preconditions, does not throw exceptions, and returns the number of elements in the container, equivalent to std::distance(begin(), end()). Defining a size member which e.g. computes the area of a shape would be considered incorrect, at least if that type ever finds its way into a template expecting an STL-ish container.

(If I'm being extra extra petty, I can claim that this is part of the reason non-STL C++ naming conventions often use TitleCase for everything: to avoid clashing with the semantic meaning implied by snake_case names used by the STL. Your type can't accidentally appear to conform to an STL template requirement if you never use the names it expects. I will also continue to marvel at the fact that C and C++ make the definition of any _TitleCase or __dunderscore identifiers ill-formed, no diagnostic required (UB), and that it took clang until 2020 to provide the option of warning on them, and IIUC GCC still doesn't.)

The entire purpose of Rust's trait-directed generics is to dismiss the structural typing used by C++ templates and embrace nominal typing: if I write code using ExactSizeIterator::len, I get specifically that function, with that semantic, no matter the shenanigans[14] that downstream may be pulling. It's specifically this that breeds the confidence that "if it compiles, it probably works": because we have this guarantee, the risk of something compiling but accidentally not providing the expected semantics (i.e. the semantics that would make the code work) is very low.

As an added bonus, because the set of functionality used by generics is declared and known ahead of time, it's possible to entirely[15] check that generics are well-formed (don't have any errors) before instantiation, while they're still generic. That Rust's generics are checked early, along with coherence restrictions and forbiddance of overlapping impls, isn't fundamental to the nominal type system, but they do significantly contribute to the confidence that the code works when it compiles.

Needless pettiness and pedantry

In more dynamic languages (e.g. I'm primarily thinking about JavaScript), significantly more unit testing is done, and TDD is more widely used (or at least held as an ideal). It's fairly common to see discussions around the use of (gradual) typing (e.g. TypeScript for JavaScript) pitted against unit testing (and sometimes TDD) as one removing the need for the other. (My pet "favorite" to observe is the choice between inferred and explicit return types[16], as well as the use of satisfies more generally.)

C++ templates being checked only after instantiation is IMHO exactly the same class of tradeoff (and I consistently come down on the side of strictly-strong types by default). It "doesn't matter" if templates aren't checked until terminally instantiated, because "of course" you have tests which instantiate them with (all?) the relevant types.

But in most cases, this is busywork that the compiler could be doing automatically for me, rather than me having to create a minimally modeling mock in order to test that a template instantiation succeeds without error. (Oops, I've basically reinvented trait generics now!)

My spiciest take here is that the concept of 100% code coverage comes from and is only really meaningful in dynamically typed languages, where you can write nonsense code that won't be rejected until executed[17]. In such languages, even mostly useless coverage still accomplishes something, in verifying that the code "typechecks" with the executed type. In a statically typechecked context, the static type checking already accomplishes everything a 100% coverage requirement actually measures.

That's not to say that tests aren't still useful, nor that coverage isn't still a useful measurement (though I'd argue branch coverage is a better measure than line coverage); rather that testing trivial cases achieves coverage but doesn't really test anything other than that the types line up, which is a meaningless test if static typechecking already proved that, but is actually meaningful in a dynamically typed system, so 100% line coverage actually means something. (Still not that much, relatively speaking, but it is still something.)

I fully understand the benefits of having access to structural abstraction options; I've previously proposed the concept of macro fn, which would act much like C++ function templates in that a) it behaves as a function boundary/item after instantiation[18] and b) it enables generic types and the return type to be "wildcard bound," where any usage not satisfied by generic bounds (or dependent on such, transitively) has its resolution and type inference deferred until instantiation.

What I take issue with is claiming that C++ templating is somehow objectively superior to Rust's trait generics, or that Rust generics "don't work." It's completely fine to be more proficient at and more comfortable with C++ templates; the two are very different and excel at different things.

C++ is objectively better at keeping things which look like they should be trivial trivial, via the use of auto templates. Rust requires significantly more buy-in in order to state your static requirements up front.

C++'s greatest weakness, however, is its fractal complexity. You can write simple code, but the simple code probably breaks in edge cases. Rust's greatest strength, on the other hand, is its consistency; if the generic code compiles, there (probably) aren't going to be any surprises when compiling uses of the generic.

Rust's trait generics are very functional, and the majority of developers using Rust are satisfied with the system. They can't do everything C++ templates can do, but that's fine. They work perfectly well for what they're supposed to — for writing code generic over a statically, nominally typed interface — and for when you do just want to stamp out a code template for some small, closed set of types, macros generating trait implementations is a perfectly fine solution, doing exactly what you want.

Finally, I'd be remiss if I didn't mention one last reason crates like midpoint or std might choose macro-generated instead of generic trait implementations — (downstream) compile-time performance. When an implementation in a crate is generic, it's recompiled by every[19] downstream crate (codegen unit). If an impl is nongeneric, however (e.g. because it was macro-generated for the concrete types), the code is compiled (and optimized) only once, in the defining crate (up to inlining, of course). Performance is a very common reason for libraries to choose less convenient ways to write code, in any language.

Similarly, compile error quality is another reason crates might prefer macro-generated implementations over generic impls; if only a single trait with nongeneric impls is present, the error messages are typically fairly clear and straightforward. If generic implementations of the relevant trait are present, the compile error when a type doesn't satisfy the trait is more involved, because there are more potential reasons for the error and avenues to addressing it. Many libraries will choose to make their implementation a bit less convenient if it improves downstream errors; by design, code compilation fails more often than it succeeds, and the primary responsibility of the compiler (and to an extent, well-engineered libraries) is to diagnose potentially ill-formed code (i.e. cargo check); it's comparatively much rarer that the compiler is asked to build or test the code as well (and even then it still has the task of first checking the code).

  1. Being generic over arithmetic types is probably the hardest thing to be generic over in Rust. This is unfortunate, since the primitive arithmetic types are perhaps the easiest thing conceptually to want to be generic over. However, it's worth noting that in C++, implicit promotion to int makes it surprisingly difficult to get correct as well. ↩︎

  2. There are three potential ways of providing a resolution that I'm aware of:

    • Specialization: mark the potentially overlapping impls as overridable (i.e. default fn) and then provide a lattice impl for T: Integral + Floating to be used when both bounds are satisfied. If you set the lattice impl to cause a post-mono instantiation error, you've exactly recreated C++'s behavior. However, note this is only possible in this case because the traits in question are bound by 'static; specializing on (potential) lifetime bounds (a concept which does not exist in C++) is fundamentally unsound, as lifetimes must no longer exist when selecting the monomorphization to use.
    • Fundamental traits: if traits could be marked as "finally sealed" (i.e. all impls are provided alongside the trait definition and no more will ever be added), then checking impl<T: Integral> can behave w.r.t. coherence checking as if each of the base impls for Integral were provided separately, effectively giving us the macro-stamped impls without the macro stamping. A variant of this functionality already kind of exists with fundamental types, thus my calling this fundamental traits.
    • Negative coherence: by providing explicit impls like impl !Floating for u32 and bounds like trait Integral: !Floating + ..., it's possible to inform the compiler that no types implement both Floating and Integral at the same time (that the traits are mutually exclusive), and thus that the overlap case doesn't occur. Explicit negative impls must be required, since normally adding a trait implementation is considered nonbreaking. Fundamental traits (and types) are essentially an automatic form of negative coherence, as they function as a promise that any currently unimplemented traits won't be implemented (but with a caveat for fundamental types, since their negative coherence, instead of allowing proving the complete absence of an impl, allows downstream to add impls of upstream traits when the type is "covered" (i.e. the impl is considered local if a local type is provided as a generic argument to an otherwise upstream fundamental type)).
  3. The solution in C++ would be to make the overloads no longer overlap by putting an explicit check of the negative of the other bound in the enable_if condition, then adding the lattice impl which requires both to be satisfied. I call this "barely" a solution since given two open type sets, there might not even exist a type which satisfies both available to the library to test and observe the overlap, despite downstream being able to define such a type. If the lattice impl was missed, then downstream has no recourse available other than to provide a template specialization for each concrete type in the overlap and can't even delegate to either upstream impl since there's no way to disambiguate between the two ambiguous overloads. ↩︎

  4. Of course, Rust's metaprogramming facilities are also relatively limited in comparison to C++'s, without good access to real specialization options (e.g. by C++ SFINAE or constexpr if equivalents), although you can get surprisingly close in function macros by (ab)using auto(de)ref and working with the trait system instead of against it. But the ease of metaprogramming tricks isn't what is under scrutiny; it's the experience of authoring relatively simple compositional templates/generics that just do some plumbing between some other (presumably generic) functionality; where auto-everything is enough in C++. ↩︎

  5. This is unlike C preprocessor macros, where it definitely can be argued that they should be avoided as much as practically possible because of how unprincipled and ad hoc they are with absolutely no knowledge of the language they sit on top of. ↩︎

  6. Of course, the other major reason is that C++, the STL, and most major C++ libraries are happy to cause unchecked UB at the slightest provocation, and have an unfortunate habit of the easiest and/or most obvious way of accomplishing something being the most UB prone. ↩︎

  7. I'm going to position C++ templates as weakly typed, as they error at compiletime upon instantiation with an insufficient type rather than at runtime, and don't allow any "properly" dynamic functionality like accessing properties not statically provided (i.e. from a subclass) but that happen to actually be present. But an interesting argument can be made that C++ templates are strictly-dynamic functionality rather than weakly-static. ↩︎

  8. The main exception to this where C++ is more strictly typed than Rust is that C++ has typed memory (TBAA); if you initialize some (sub)object in memory (how doesn't matter), you are only allowed to mention that memory as types std::byte, char, unsigned char, or with a "similar" type (i.e. to do so with any other type is UB), where a similar type roughly means the same type, another pointer type where the pointee type is similar, or another array type where the element type is similar, ignoring any const/volatile qualifications. Rust has exclusively untyped memory (although repr(Rust) layout being unstable puts a giant caveat on what you can do soundly) and is technically weakly typed in that way. ↩︎

  9. The main exception to this is, of course, macros, which work exclusively in the domain of syntax and are unaware of types. A more interesting arguable caveat is method lookup rules (and even more arguably field lookup rules), since while type-directed, involves coercions and asking what is imported to local scope, meaning the same syntax with the same input types can have different results based on context. ↩︎

  10. For complete fairness, you can eliminate coercions from a dialect of C++ by abandoning (most of) the STL and wrapping the primitives in a nominal struct type that doesn't provide implicit constructors. But I'm evaluating the language as it stands, and my main point is about templates anyway, whose weak typing (you're lauding and) can't be avoided. (Tag type dispatch can be and is used to mitigate the weak typing and achieve nominal (as opposed to structural) generic functionality like is provided by Rust's traits, but can't fully avoid templates' instantiation-time deferral of typechecking.) ↩︎

  11. Exempting those that were a "simple" nullptr deref in a dependency that should have been guarded against (and was in a later version we weren't updated to) and was only difficult to diagnose because of symbol stripping in the production configuration but not having the issue (or complaining in any way) in the development configuration where it would've been much easier to diagnose. (Yes, I'm still salty at Major Company™ for shipping that heisenbug in Notable Production Software™ and not fixing it in the patch release (that we were using) released in tandem with the feature update that happened to fix it.) I'm lucky enough not to have run into any use-after-free style issues in C++ due to maintaining a Rust-like scoped ownership practice in the C++ I've done. ↩︎

  12. Because C++ is powerful enough that someone downstream could essentially always implement a type(s) specifically designed to satisfy the requirements for successful template instantiation. You could probably manage to construct an impossible instantiation by asking for coercion-resistant std::is_same_v to two different types, but I'm making an assumption that the template is to be uninstantiatable by accident, not by design. You could check that template substitution only relies on required concepts, but I don't believe continuing through typechecking would ever be practical because you can't predict explicit template specialization. ↩︎

  13. Concepts were supposed to improve this for C++20, and they do, somewhat. They're still purely structural, so they can only check that a type syntactically satisfies a concept, not that it semantically models it, and the actual templates are still entirely structural, but it still provides a benefit, since instantiation errors can say "why" and point "blame" earlier in the template stack, so long as concept requirements are propagated. Unfortunately, most compilers currently swing too far in that direction, and only say that the concept isn't satisfied, and not how (i.e. what's missing from the concept), so if a type is supposed to model a concept but accidentally is missing some requirement of satisfying it, you're left worse off than before. Additionally, because (IIUC) requires clauses place extra restrictions on template specialization that wouldn't be there without, it's at best difficult to utilize concept requirements for enhancing pre-C++20 template errors. These are addressable, but currently add to the friction between C++ and "if it compiles, it probably works" vibes. ↩︎

  14. There is, of course, the caveat that downstream can implement safe traits however they want, to do whatever (safe) shenanigans they want. But programming (typically) isn't adversarial; downstream can fairly be assumed to be attempting to implement traits according to their contract. But without something like the nominal trait system to tie name and semantic together, name collisions can and do occur accidentally, because words can have multiple meanings, and even then, meaning is contextual. ↩︎

  15. There's a class of errors called "post monomorphization" errors that only occur after instantiation of generics. It's simple enough to test if an error is being caught post-mono: it shows up in cargo build but not cargo check. Rust does a reasonable job at preventing post-mono errors; originally, the only post-mono errors (notable enough for me to know about them) were "type too large for this target" and codegen/linker errors (i.e. after rustc has done its work). More recently (but still fairly long ago) it became possible to panic in consts, and using an erroneous const in a generic context is currently a post-mono error. The stabilization of const blocks is being discussed, and will make such generic const panics more accessible; part of that discussion is how much it would cost to bring these errors forward so they're no longer monomorphization-dependent (they'd occur in check builds) but still instantiation-dependent. ↩︎

  16. The general tradeoff is that with an inferred return type, it's the exact, fully specific type bound that's actually being returned; but with a specified type, it's an API contract between caller and implementation, but may conservatively overshoot the actual returned type (i.e. the caller may be required to handle cases that are actually unreachable). ↩︎

  17. In extra dynamic languages like bash, even syntax errors don't happen until execution crosses over it! And even in dynamic languages which parse the entire file before executing it like Python, imported files typically aren't parsed until the import statement is executed. ↩︎

  18. Notably, this means that, unlike functionlike macros, control flow (e.g. ?, return) is bounded by the function, that (final) monomorphization of the code happens at most once (per codegen unit) for each instantiation set, and that the code is subject to standard inlining heuristics instead of being unavoidably semantically inlined to every callsite. ↩︎

  19. There's an experimental feature enabling upstream to share its generic instantiations with downstream. However, even as it gets rolled out to stable, it can only help for instantiations that actually exist (so you probably still want the macro to splat them out over the closed set of common impls), and it can't help for sibling crates, only dependent crates. ↩︎


It's really interesting how you wrote a large, detailed, and factually correct post and yet utterly failed to refute what I wrote.

I just wish you wouldn't hide the most important part in a footnote:

Yes, the majority of generics and templates, in places where they work, are not used for complicated metaprogramming tricks, but instead just to reduce the amount of copy-paste. Metaprogramming in the form of if constexpr is just a very easy vehicle to make that happen. Most developers use them without even knowing there's a complicated dance with SFINAE and all that involved behind the scenes. They just write code that is easy to read and understand, and it works.

We need to implement an algorithm for integers and floats? No problem: use auto (and maybe a couple of if constexprs here and there) and we are done.

We need to make a singly linked list? Let's make it generic just to make it easier to test.

Or we can make generic paginator. Easy-peasy.

The last two examples are not even invented by me; they are from the C++ course my friend is attending right now. And no, these are not "advanced topics": that's actually the middle of the course, and the only thing shown to them before that point was how to add template to a function declaration or type declaration to make it generic. Just part of the syntax, without even talking about what happens inside.

And the refutation to that:


Do I even need to write anything more?

Yes, Rust is not C++, yes, that's fine, but also, sorry, but generics in Rust don't work.

At least they don't work for “simple” usecases, which people are trying to use them for.

And which are used much more often in C++ than "advanced" generics with type gymnastics or other such things.

You may argue that these "simple" usecases are anything but simple. And that they are not very interesting from a type theory POV. And you may argue that macros are a good enough solution for these cases. Maybe.

But that still doesn't mean generics in Rust are fine and easy to write. Sorry, they are hard to write and that's just the truth. This:

Is an entirely unjustified reaction.

The most important part of @CAD97 post is probably this:

C++ templates work great as, well… templates: a way to avoid copy-paste and merge similar implementations. But yes, they work worse as interfaces. Concepts are supposed to fix that, but it's not yet clear how that'll work.

Rust's generics are awful for code duplication removal. Instead of “just slap auto or template” you need to do a lot of work to make them compile. So much that often it's just easier to resort to macros instead of trying to use generics.

They are much better as interfaces, though, but that's an entirely separate issue.

And yes, it's true that there are some rare wizards who can sometimes make Rust generics work and even pull off some amazing tricks.

But when that happens, the usual reaction when you look at the code is not "wow, I had no idea it was so easy to do", but "wow, what an amazing circus act… maybe I should try that unicycle one day".


...isn't that the point, though - to have two different metaprogramming instruments for two different cases?


Maybe, maybe not. Rust today is very much like C++98, and just as in C++98, deficiencies in generics are papered over with macros.

And that ill-fated talk that was cancelled in the end was supposed to address that problem.

If that hadn't been a problem already, there would have been no need to address it, don't you think?

That doesn't really make sense. The deficiencies of C++98 generics were all about the fact that they were too duck-typed, and there was nothing that C++'s macros could do about it. Rather, people discovered hacks like enable_if to work around the deficiencies. It took 20 years of R&D effort to come up with a deliverable version of an actual solution: structurally-typed typeclasses aka concepts. And this "lite" solution doesn't even include the ability to write custom implementations (what was once called "concept maps") – your interface must have the exact shape that the concept requires, period.

Rust obviously didn't want to repeat C++'s folly and opted for a system of nominally-typed typeclasses instead. Rust also made macros a first-class feature, meant to be actually used, rather than a slightly embarrassing purely textual preprocessing step that exists solely for historical reasons.
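As a small illustration of that nominal design (the trait and type names here are made up): a type does not satisfy a Rust bound just because it happens to have a method of the right shape; it has to opt in to the trait by name, whereas a C++20 concept checks the shape structurally.

```rust
// Nominal typeclasses: the bound `T: Area` is satisfied only by an
// explicit `impl Area for ...`, never by structural coincidence.
trait Area {
    fn area(&self) -> f64;
}

struct Square {
    side: f64,
}

struct Circle {
    radius: f64,
}

// Each type must name the trait explicitly to be usable with the bound.
impl Area for Square {
    fn area(&self) -> f64 {
        self.side * self.side
    }
}

impl Area for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

// Generic over any type that has opted in to `Area`.
fn total_area<T: Area>(shapes: &[T]) -> f64 {
    shapes.iter().map(Area::area).sum()
}
```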


The actual solution that made things work was decltype in C++11 and if constexpr in C++17, not concepts. That is what made templates a joy to use as a copy-paste reduction tool. And that still has no analogue in Rust.

Concepts solve the much more theoretically interesting, but much less practically important, problem of detecting contract violations earlier.

Yet by doing that they have put the cart before the horse. Now contract violations are easy to detect, but copy-paste reduction is relegated to macros.

Given that Rust macros are much safer than C/C++ macros, this may be an acceptable trade-off, but it doesn't make generics any easier to write in Rust.

I'm not sure that's a good enough reason. Yes, macros in Rust are more advanced than their analogues in C/C++, but, ironically enough, debugging and fixing macro-based templates in Rust is harder than fixing template-based templates in C++.

And you cannot complain about cryptic, hard-to-fix error messages from C++ templates and then push Rust macros as the solution without also noting that error messages from these macros may be even more cryptic and harder to fix.

Well, my actual experience has been that C++ templates were always a pain in the butt and exhausting, and you're trapped in a bottomless pit of despair as soon as you take one step outside a small golden zone of slightly less bad textual substitution; while traits are nice, clean, simple tools that pretty much always do exactly what I want. (Not to mention all the Rust code that's been happily using traits to great effect out there…?)

Saying "they are hard to write and that's just the truth" just isn't particularly compelling in the face of my actual experiences.

So far, your argument has seemingly been "there exist things traits can't do, and therefore they're completely (or nearly?) useless", and that doesn't make much sense at all to me.

Maybe there is a point about how there's a bad trade-off being made or a missing feature or something here, but so far I haven't seen it?


You are saying "generics in Rust don't work" because it takes more effort to make them work, or because, occasionally, there are things that cannot be done using generics alone.
And then you provide examples of generic C++ code, calling them "easy-peasy". But one misstep and they produce 8 pages of compiler errors: Compiler Explorer.
Which raises the question: was it really easy-peasy to write? Or maybe it was, but the author just tested with one obvious template parameter and didn't really exercise the generic part, shifting the burden onto consumers of this "generic" function?


My argument is different. When, every week (and often more frequently), people ask how to do something they expect to be simple but in reality is not; when people use macros for what, in other languages, would be done with generics; and when even apparently knowledgeable people here, with the help of third-party crates, offer solutions that don't quite work as-is right now — then that is, pretty much by definition, "generics don't work", or, if you want to be pedantic, "generics do work, but more like a circus act than something Joe Average can use".

That's a problem. I'm not sure how that problem can be solved, but it's not constructive to lash out at people facing these problems with accusations of incompetence, claiming that they just "don't possess enough experience in Rust or strongly-typed languages to be in a position to criticize it".

One doesn't have to be a skilled cook to know when something tastes bad, and one doesn't have to be a skilled language designer to see that something works poorly in a language.

The question of whether the thing that "works poorly" is bad enough to warrant changes, and potential breakage elsewhere, does require language design experience to answer.

And it's possible that generics as they are designed in Rust are the best way of doing things… but I'm not convinced.

For one simple reason: if the official answer to "how am I supposed to reduce code duplication?" is not "generics" but "macros", then we have to compare those macros to C++ templates.

And I'm not happy to say it, but Rust's macros are even more fragile, and their error messages even more cryptic, than C++ templates'. At least to me.

In this case it may just be my lack of experience, though. I wonder how other people feel about them.

Especially when there are many levels of macros involved. That's the common accusation against C++ templates, after all: yes, templates are easy if you have one or two levels, but with ten levels it's a nightmare… well, I don't have enough experience with Rust to have faced ten levels of macros, but I suspect debugging those would be an even bigger hassle than with C++ templates.

If I say "generics in Rust don't work", then I mean precisely that. In places where in C#, Java, or C++ one would expect (and find) generics or templates, in Rust you would invariably find a pile of macros, not generics.

Only in the rare cases where what you are trying to do aligns nicely with type theory would you see generics.

That's not an accusation or a complaint; that's just an observation.

And you just need to read the first few lines to come up with a fix. My experience with macros in Rust (and, as we have already discussed, C++ templates have to become macros in Rust) is that they are much less understandable. Even cargo expand doesn't always help, because when a macro refuses to accept your code it doesn't produce garbled output that you can traverse (as happens in those 8 pages of errors you included); it produces nothing, and you have to imagine in your head how and why the macro expansion fails instead.

Oh, absolutely. Templates are not easy to use in C++. And generics in Rust, in the rare cases when they work, are much better. But most of the time generics in Rust don't work. You have to replace C++'s templates not with generics but with macros… and you have shown why with your example: the way to make the template work was obvious from the error message. I couldn't say the same for Rust's macros.


Moderator note: This conversation seems to be going in circles.

If you have any proposals for improving Rust generics, open a pre-RFC on the Internals forum. If you would like advice on how to work around Rust's deficiencies, open another topic here.