Why not just add classes?


#21

There are quite a few very successful languages that do just fine without macros, for example Ruby, Python, Java, PHP, JavaScript and C#. None of them are known for lacking expressiveness. Modern Java in particular has lost a lot of the “verbose and boilerplate-heavy” air around the language. It is definitely possible to write richer core languages that allow for all these things without macros. And not all of them are built by huge teams.


#22

Sorry, but adding languages as dynamic as Python to this list makes absolutely no sense. As an example, Python’s metaprogramming facilities are numerous (generating and executing code at runtime being among them), there’s absolutely no need for macros there. So please don’t compare apples and oranges.


#23

I disagree. A language provides access to the features of its underlying runtime. Python provides convenient access to all runtime features of Python, one of which happens to be runtime code compilation and extreme late binding.

Obviously, there is no feature parity on the runtime level here, but Rust and Haskell do the same: they provide access to their underlying runtime features. The fact that a macro language is involved at some point suggests (IMHO) that the convenient part is missing from the core language.


#24

Rust and Haskell have a static type system. Every function must have a type and is restricted by that type. Some kinds of things (derive, format!, variadics - e.g. vec!) do not fit that system well. Therefore you need to use macros to metaprogram over them.

Of course, some macros exist because we can’t be generic over certain things - integers, mutabilities. But that does not mean they all do.

Python allows you to do all compile-time things at run-time, so you don’t need a macro system. Even C#/Java are more dynamically typed than Rust (they have Object and reflection).


#25

Moderator note: A gentle reminder to all: the Rust forums aren’t the place to do language bashing. Constructive comparisons are of course a-okay!


#26

But that still points to a hole. I could still implement things like format! by hand (tediously, through a chained API).

(Python and Ruby have “a type-system” too, by the way. They just bind very late and don’t enforce at compile-time. I would even argue Ruby has as many implicit conversions as Rust (almost none)! For a lot of thinking around this, I can recommend the following text: http://www.ics.uci.edu/~lopes/teaching/inf212W12/readings/rdl04meijer.pdf)

But many are. Look at all the parser combinators like nom. They are pretty much huge code generators avoiding a lot of busywork. Serde does the same: it employs macros to avoid defining and hand-crafting the components needed for proper deserialisation, making the easy case easily accessible and hiding the complexity.
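The “macros as code generators avoiding busywork” point can be seen with nothing but the standard derives - no serde needed. This is a minimal sketch (struct and values invented for illustration) where one attribute replaces three hand-written impls:

```rust
// One derive attribute generates the Debug, Clone and PartialEq impls
// that we would otherwise have to hand-craft for every struct.
#[derive(Debug, Clone, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let a = Point { x: 1, y: 2 };
    let b = a.clone(); // generated Clone impl
    assert_eq!(a, b); // generated PartialEq impl
    assert_eq!(format!("{:?}", a), "Point { x: 1, y: 2 }"); // generated Debug impl
}
```

Serde’s derive macros are the same idea scaled up: the generated code is ordinary, type-checked Rust.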

I disagree with the point that you don’t need a macro system because of the dynamic nature of the runtime system. I’m well aware of where C# and Java fall on that scale, but the dynamic nature of Java, for example, is mostly reached through late binding together with reflection, an approach that is very much possible with Rust through dyld and looking at the symbols of a loaded library.


#27

I was using “type-system” in the compile-time sense - each Rust expression must have a type that the compiler infers. This means that format must have a type. But format can’t have a simple type (because it is variadic etc.), so it has to be a macro (or you can do the “array-of-Object” translation, but that loses you things).
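The “array-of-Object” translation mentioned here can be sketched in plain Rust using a slice of trait objects (the function name is invented). Note what it loses compared to format!: the argument count and types are no longer checked against any format string at compile time.

```rust
use std::fmt::Display;

// A format-like function without variadics: every argument must be
// passed as a trait object, erasing its concrete type.
fn join_args(sep: &str, args: &[&dyn Display]) -> String {
    args.iter()
        .map(|a| a.to_string())
        .collect::<Vec<_>>()
        .join(sep)
}

fn main() {
    // The call site has to box everything up as &dyn Display by hand;
    // format! generates this plumbing (and checks the template) for us.
    let s = join_args(", ", &[&1, &"two", &3.5]);
    assert_eq!(s, "1, two, 3.5");
}
```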

From what it looks like, nom mainly uses macros to allow for top-level type inference - that’s certainly another annoyance I forgot about. Many parser generators in other languages use reflection, which I’m not sure is better than syntax extensions - at least with syntax extensions you have the type-checked post-expansion code.

dyld can’t interact with types.


#28

Well, I ran into this problem playing with the hoedown bindings: it’s pretty easy to use the Html renderer, but let’s say you’re happy with it overall and just want to change the way it handles, say, paragraphs. You have to create a custom struct (OK, that seems unavoidable):

struct MyRenderer {
    html: Html
}

then implement your modified version of the paragraph function of hoedown’s Render trait:

impl Render for MyRenderer {
    fn paragraph(&mut self, ob: &mut Buffer, content: &Buffer) {
        //stuff
    }
}

Ideally, that would be all. But in the current state, you also have to manually delegate each of the other methods of the trait to the html field of the custom struct, which isn’t that hard:

fn emphasis(&mut self, ob: &mut Buffer, content: &Buffer) -> bool {
    self.html.emphasis(ob, content)
}

Except you have to do it for something like 30 methods. Which quickly becomes a bit tedious and makes me wish there was some option like this RFC for automatically delegating trait implementation.
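Until something like that RFC lands, a macro can at least take over the busywork. A hedged sketch (the trait here is a tiny two-method stand-in, not hoedown’s real Render, and the macro name is invented): macro_rules! is allowed to expand to methods inside an impl block, so the 30-odd delegating methods shrink to a list of names.

```rust
// A stand-in for hoedown's Render trait, reduced to two methods.
trait Render {
    fn paragraph(&mut self, out: &mut String);
    fn emphasis(&mut self, out: &mut String);
}

struct Html;

impl Render for Html {
    fn paragraph(&mut self, out: &mut String) { out.push_str("<p>"); }
    fn emphasis(&mut self, out: &mut String) { out.push_str("<em>"); }
}

// Expands to one delegating method per listed name.
macro_rules! delegate_to_html {
    ($($method:ident),*) => {
        $(
            fn $method(&mut self, out: &mut String) {
                self.html.$method(out)
            }
        )*
    };
}

struct MyRenderer {
    html: Html,
}

impl Render for MyRenderer {
    // Override only what we care about...
    fn paragraph(&mut self, out: &mut String) { out.push_str("<P!>"); }
    // ...and generate the rest of the delegation.
    delegate_to_html!(emphasis);
}

fn main() {
    let mut r = MyRenderer { html: Html };
    let mut s = String::new();
    r.paragraph(&mut s);
    r.emphasis(&mut s);
    assert_eq!(s, "<P!><em>");
}
```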


#29

You’re right there. Following the trail of the RFC you linked, it looks like it’s in progress though. I’ll be very happy once that’s in.


#30

Indeed. And how the language designers implement metaprogramming facilities is everything. By all accounts Rust has very good PL people deciding what’s in the language, which does make me optimistic about the future of Rust. C++, though, acts as a warning as to what happens if you get metaprogramming wrong. They did a terrible job of it, and of course it didn’t affect the adoption of modern C++ at all; everybody just licked their chops and said “Yea! More ways to write byzantine code! And we get page after page of worthless stacktrace when something goes wrong!?! Sweet!”. I guess at core the thing that is drawing me to Rust is the potential to get C++-level performance and flexibility without the “design by committee” ball of mud that C++ has become.

In my opinion macros are unnecessary. I know that ship has sailed with Rust, but I hope that whoever decides (the fewer people the better) what’s included in Rust appreciates that just because something is “useful” does not necessarily mean it needs to be in the language (Ken Thompson is absolutely right about Stroustrup - he doesn’t know how to say “no”). Trying to please everyone all the time is just a recipe for disaster.


#31

Re: JS - There’s an AltJS language called Sweet.js that is a superset of ES5 with the addition of macros. The macro system is similar to the one used in Rust IIRC.

I find it useful for getting around pain points in the language - especially for writing test code that doesn’t have a lot of function () { boilerplate everywhere.


#32

Subclassing (i.e. virtual methods) is an anti-pattern: it is a form of (often highly destructive) premature binding.

  1. We can’t express relationships due to invariants of the Liskov Substitution Principle, e.g. the Square and Rectangle relationship.

  2. The Expression Problem as framed by Wadler. We can’t implement a new method on an existing type without adding the method to the preexisting type. Thus, for example, if some library returns an array of Widgets and we want an array of FastDisplayables, and we can’t edit the source code for that library and recompile, then we have to create a wrapper class and rebuild the entire array. Whereas, by separating the implementation of an interface from the data, we can reuse the data and create a new interface to “view” that data with. I claim Rust has ad hoc polymorphism similar to Haskell’s typeclasses, which is one of the main features drawing me to investigate Rust.
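Point 2 can be sketched in today’s Rust (all names here are invented for illustration): a new trait is implemented for an existing type without touching or wrapping it, which is exactly the typeclass-style ad hoc polymorphism being claimed.

```rust
// Imagine Widget lives in an external library we cannot edit.
struct Widget {
    id: u32,
}

// Our own interface, defined after the fact.
trait FastDisplayable {
    fn fast_display(&self) -> String;
}

// New behaviour for the existing type: no wrapper class, no rebuilding
// the array of Widgets. (This is legal because the trait is ours; Rust's
// orphan rule only forbids a foreign trait on a foreign type.)
impl FastDisplayable for Widget {
    fn fast_display(&self) -> String {
        format!("widget#{}", self.id)
    }
}

fn main() {
    let widgets = vec![Widget { id: 1 }, Widget { id: 2 }];
    let shown: Vec<String> = widgets.iter().map(|w| w.fast_display()).collect();
    assert_eq!(shown, ["widget#1", "widget#2"]);
}
```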


#33

I like the gist of macros in Rust, but I would like them to become more like macros in Lisp.
That would be really great :slight_smile:

The reason why macros are great - and needed - is that we can use them to extend the language (add new syntax) - and as such, I don’t view macros as a kind of hack.
To the contrary: it is one of the niftiest features of Lisp.
In Lisp, you use macros to write a (new) language specifically tailored to the job at hand.


#34

You are right. Rust’s traits are absolutely an implementation of type classes, though there are some differences from Haskell’s implementation.


Rust as a High Level Language
#35

Thanks - I was hoping to get some confirmation. I am surprised that, AFAICS, the ad hoc polymorphism is not mentioned anywhere in the documentation. Neither the Expression Problem nor the Wikipedia entry on Composition over Inheritance is ostensibly mentioned in the documentation either. For me, if considering Rust as potentially a better “high-level” language, ad hoc polymorphism in a language which does not have Haskell’s coinductive type system seems to be unavailable in any other (potentially mainstream) C/C++-derived language?

I’m suggesting the documentation could maybe be improved to proactively explain to incoming OOP (a.k.a. subclassing) converts why they typically don’t want to be using the anti-pattern of OOP virtual methods, and should instead use late-binding dispatch at the call site rather than at the declaration site. In other words, ad hoc polymorphism un-conflates interface from data type (making them orthogonal to each other), and the binding of the interface to a data type occurs at the function call site, not at the data type, interface, or function declaration sites. Of course there are some tradeoffs, but the inflexibility of premature binding is removed.

For a mainstream high-level language, I am starting to contemplate whether I wish Rust’s ad hoc polymorphism were available in a strongly typed language that had GC without the verbosity and didn’t basically force on us by default the noisy and complex type system of modeling lifetimes and memory (which apparently even infects generics with the 'a syntax … I haven’t learned that yet though). The lifetimes and memory allocation feel too heavy (a PITA) for a language that most programmers would want to use most of the time. Sometimes you want that control, but always by default? And a mainstream language without first-class (i.e. not a library) async/await is becoming anathema.


Rust as a High Level Language
#36

In general, we try not to compare Rust to other things, only explain it on its own merits. I think some other supplementary resource that does this would be interesting to read, though.


#37

Let us not forget this section on Rust Learning where Rust gets compared to a lot of languages: https://github.com/ctjhoa/rust-learning#comparison-with-other-languages :slight_smile:


#38

I was about to start another discussion about this, but from a different angle:

I’d start with the pragmatic observation that there is a lot of existing, working software that can take internal-vtable-based plugins,

… and ask if there’s a way that the trait-vtable mechanism could be generalised to the level that the vtables could be anywhere: (i) a fat pointer, (ii) embedded as the first item of a struct, or (iii) calculated by some generic means from the address and other information in the struct - imagine, for example, being able to retroactively get from an enum tag to a vtable. Apparently some Java implementations don’t actually store a vtable in the object like C++ does, but rather store them at the start of contiguous arrays of homogeneous objects… that’s actually quite interesting. Maybe there’s a way to leverage the MMU, mapping ranges such that calculating the vtable pointer is very easy.

Let me refresh my memory: I think there was some highly unsafe hack that almost allowed internal vtables… I remember some messing around with cast::transmute and the deref traits, which got to one level. The question would be “could extra language support allow this to be done in a less unsafe way” - exposing a way of binding a vtable with the underlying data, or querying the vtable of a trait on its own.

(I can’t remember offhand if Rust even has plain function pointers these days; I realise you could roll them with that.)
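For what it’s worth, Rust does still have plain function pointers (fn types), and a struct of them is enough to roll a vtable by hand. A minimal sketch (all names invented):

```rust
fn double(x: i32) -> i32 {
    x * 2
}

// A hand-rolled "vtable": just a struct of function pointers.
struct ManualVtable {
    op: fn(i32) -> i32,
}

fn main() {
    // Plain fn pointers exist and are first-class values.
    let f: fn(i32) -> i32 = double;
    assert_eq!(f(21), 42);

    // Binding the pointer into a struct gives a one-entry vtable.
    let v = ManualVtable { op: double };
    assert_eq!((v.op)(5), 10);
}
```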

Re: “classes” - I haven’t tried, but I’d imagine you can make macros that declare things more concisely.

Tangent:

Whilst Rust macros are really useful, macros generally spark a negative reaction in my head on two fronts:

  • It might be C prejudice, but there’s the idea that they are a ‘hack’.
  • Further justification: language features can be reasoned about at compile time and give much better error messages; they can be written using the rest of the language’s syntax. As such, I think ‘a more powerful language’ is a more efficient use of the whole world’s resources. When macros are used, it’s usually for something another language can do better with a ‘more solid’ feature.

Plain syntax issues, which might be fixable through other requests: the extra nesting level and disruption of the existing declaration pattern really bug me (I wish they didn’t). It’s because Rust’s “pattern” of declarations is

defining_keyword name {
    content 
}

You can’t replicate this with a macro; as such, macro-based declarations stick out from the rest of the language like a sore thumb.

(I know Lisps make great use of macros, but there everything looks the same…)

If you could write

class!  MyCppStyleObject {
    ...
}

that might help a lot perceptually (a preceding ident in macro-rules?)… but I’ve no idea how feasible it would be re: parsing, though (maybe it would be asking too much to allow anything between the ident and the ‘main body’, as we’d need if we wanted to write class! Foo : Bar { …}).
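For comparison, here is roughly what is expressible today with macro_rules! (a toy sketch; the macro and field names are invented). The invocation must wrap the whole declaration in the macro’s own delimiters, which is exactly the extra nesting level complained about above:

```rust
// A toy "class" declaration macro: expands to a struct plus an impl.
macro_rules! class {
    ($name:ident { $($field:ident : $ty:ty),* $(,)? }) => {
        struct $name {
            $($field: $ty),*
        }
        impl $name {
            fn type_name() -> &'static str {
                stringify!($name)
            }
        }
    };
}

// Note the extra braces: `class! MyCppStyleObject { ... }` without them
// is not valid macro-invocation syntax today.
class! {
    MyCppStyleObject {
        x: i32,
        y: i32,
    }
}

fn main() {
    let o = MyCppStyleObject { x: 1, y: 2 };
    assert_eq!(o.x + o.y, 3);
    assert_eq!(MyCppStyleObject::type_name(), "MyCppStyleObject");
}
```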

Back to the topic of the thread:
Re: the mention above that these vtables are an ‘anti-pattern’ (premature binding?) - I think it’s a case of not throwing out the baby with the bathwater… the fact is they can still be useful… there’s a sliding scale of runtime efficiency, syntactic convenience and versatility. I would personally like to see a sliding scale of versatility where you don’t have to bake any one path early on into the syntax. ‘Openly derivable classes’ can still be used for code that queries the type (e.g. if (auto *p=dynamic_cast(x)){ … } etc.). You could imagine ‘rolling an enum’ being like ‘declaring a base class and a fixed set of derived classes’, and you could use it both ways (e.g. still derive new classes, still make vtable entries for the enum…). (Imagine, coming at it the other way, if you could generalise computing the tag of an enum… such that the tag could just be an internal vtable pointer… an enum could even implement itself as a class by doing a whole-program ‘gather’ of all the usages?)
Such blurring of the options might have utility in refactoring and in experimentation; e.g. a source base might go through various options until it discovers the right layout.


#39

Classes irk me, and I come from C++. At the same time though, when I want to “inherit” fields, it does feel odd to do something like this:

struct Student
{
    base: Person,
}

What would be nifty though is syntax like the following, where I don’t have to specify student.base.name but rather student.name:

struct Student
{
    self: Person,
}
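A common workaround in today’s Rust (not real field inheritance, and the field names here are invented) is embedding plus Deref: field access auto-derefs, so student.name resolves through the embedded Person without writing student.base.name.

```rust
use std::ops::Deref;

struct Person {
    name: String,
}

struct Student {
    base: Person,
    grade: u8,
}

// Deref makes Person's fields reachable directly on a Student.
impl Deref for Student {
    type Target = Person;
    fn deref(&self) -> &Person {
        &self.base
    }
}

fn main() {
    let s = Student {
        base: Person { name: "Ada".into() },
        grade: 1,
    };
    // Reads as `s.name` even though the field lives on Person.
    assert_eq!(s.name, "Ada");
    assert_eq!(s.grade, 1);
}
```

The usual caveat applies: Deref for “inheritance” is widely considered a misuse of the trait, which is part of why people keep asking for proper delegation syntax.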

Maybe multiple base structs could work? That is, provided there’s no field-name clashing, but I see this as an extremely rare occurrence:

struct Pegasus
{
    self: (Bird, Horse),
}


#40

I agree with this (although I’m also open to the suggestion that inheritance can be improved on; the hierarchical idea is limited).

I’ve always seen single inheritance as a low-level feature - something that is possible in ASM because you know the layouts can just overlap, so you have a neat, efficient shortcut for a tree of variations with common ‘earlier elements’…

One suggestion I had was to use ‘tuple structs’ to do some of the job of field inheritance.

A tuple struct has no named fields, but it does have an order.
Imagine then if any attempted named-field access to a tuple struct actually searched its components, prioritised according to the order in which the types occur.

But ultimately it would be better, IMO, to go with the principle of least surprise and allow the straightforward struct Foo : Bar {..} syntax, and maybe figure out something logical to do with struct Foo : Trait {..} (sugar for impl Trait for Foo in place - the case of a struct having at least one trait… so make it easier to roll that). IMO struct Foo : Bar {..} happily fits with the rest of the type syntax, and it’s extremely common in other languages: let x: Bar - “x is of type Bar”… struct Foo : Bar - “Foo is of type Bar”.

Many people say C++ has mis-features, but IMO it’s just an omission of alternatives that means its ‘core features’ must be stretched in awkward ways (no multiple dispatch -> use verbose double dispatch to fake it… but that doesn’t mean the inbuilt single dispatch was a bad idea, in the case where you do know a certain set up front).