My dream language

I was thinking (probably not a good idea) that it would be nice to have a language that produces binaries that run as fast as Rust, with the same execution model (no garbage collection, etc.), but that took care of a lot of the messy details automatically for you.

For example, instead of having to clone explicitly, or choose between &, &mut, Arc, Rc, RefCell and Mutex explicitly, somehow the compiler could work out which kind of pointer you need automagically. It could also implement Result-type error handling automagically (so you see an exception-style type system, but under the hood it is C-compatible if that is what is required). Or something... anyway, I just woke up in the night and had this strange thought. Perhaps this language could even be compatible with Rust: a different, "more intelligent" front-end.


So you want no garbage collection, and you simultaneously also do want garbage collection.

See, the thing with Rust is that all these "messy" (not really) details matter a whole lot wrt. the semantics of the program. If you start automating these away, you will inevitably have to be more conservative than necessary, and that will come with a performance hit. The compiler can't "just figure out" what you want; that's essentially asking it to read your mind and write the code you wanted to write.

We already have languages where everything is an Arc<Mutex>. They can hardly be described as "smarter than Rust".
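To make that concrete, here's a rough sketch (the values and thread count are just for illustration) of what those "messy" details actually encode. Each pointer type is a different semantic contract, not an interchangeable spelling, which is why a compiler that picked one for you would have to default to the most conservative:

```rust
use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // &T: a temporary shared borrow. No ownership, no runtime cost,
    // checked entirely at compile time.
    let s = String::from("hello");
    let r: &String = &s;
    println!("{r}");

    // Rc<T>: shared ownership within a single thread, tracked by a
    // runtime reference count. Not sendable across threads.
    let a = Rc::new(1);
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);
    drop(b);

    // Arc<Mutex<T>>: shared ownership across threads, plus runtime
    // locking. The most flexible option, and the most expensive.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                *c.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```

An "automatic" compiler that couldn't prove the cheap cases would have to reach for the `Arc<Mutex>` row every time, which is exactly the performance hit described above.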


Ah, yeah, it will also make my espresso in the morning. I can't be bothered to think about all of the inherent complexity that goes into pulling a good shot.

All joking aside, I think the real endgame for programming languages is going to be fully automated by AI to the point where a human writes tests (or specifications) and the AI writes the implementation directly as machine code without going through a human-readable intermediate language like Rust. So that you don't have to concern yourself with inherently complex things. My prediction is that this technology is a good 40 years out, even though we are seeing some of its nascent beginnings right now with ChatGPT and Copilot.


I think it is possible in principle for the compiler to figure everything out, so there is no performance hit. This is not an existing language, it is a "dream language". Maybe it will never exist. I don't want a language where everything is an Arc<Mutex>. I just don't want to have to write the pointer data types out explicitly. Incidentally, should Mutex do deadlock detection, at least by default or in debug mode? Is that expensive?
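On the deadlock question: std's `Mutex` does not detect deadlocks (some third-party crates offer opt-in detection, typically behind a feature flag, because it adds bookkeeping to every lock). A cheap, if manual, way to observe a would-be deadlock is `try_lock`, which reports a would-block error instead of hanging. A minimal sketch:

```rust
use std::sync::{Mutex, TryLockError};

fn main() {
    let m = Mutex::new(0);
    let guard = m.lock().unwrap();

    // A second blocking lock here would deadlock this thread;
    // try_lock observes the contention without blocking.
    match m.try_lock() {
        Err(TryLockError::WouldBlock) => println!("lock is held, would deadlock"),
        _ => unreachable!(),
    }

    drop(guard);
    // Once the guard is released, try_lock succeeds.
    assert!(m.try_lock().is_ok());
}
```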

Unfortunately, no. (see especially Rice's theorem.)


I don't normally post images or memes to this forum, but this is just too good an opportunity to pass up...


I think you underestimate the value of lots of these annotations.

Knowing that when I pass &Options to another function I don't have to worry about that other function changing it is incredibly useful to me as a human. The languages that are just "meh, it'll be fine" are far more frustrating since I have to worry about that Options object that I passed into a webservice call in C#, for example. Hopefully it's not going to change it, but I don't know that for sure. Maybe it will, and thus I have to clone it before I pass it to protect against that -- oh wait, now it's the lack of annotations that are making me add cloning by hand. I'd much rather write & than have to defensively clone stuff all over the place!

Could there be a simpler rust if you said "look, it'll always have a runtime and be 20% worse in CPU usage and RAM usage and such"? Sure. But it wouldn't remove annotations for "I promise not to modify that"! If you lose those annotations, you lose most of the good things about Rust, like its race prevention.
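As a sketch of that point (the `Options` type and service call here are hypothetical stand-ins): the `&` in a Rust signature *is* the "I promise not to modify that" annotation, and the compiler enforces it, so the defensive clone becomes unnecessary:

```rust
#[derive(Debug, PartialEq)]
struct Options {
    retries: u32,
}

// Takes a shared reference: the signature alone guarantees this
// function cannot mutate the caller's Options (absent interior
// mutability, which would also be visible in the type).
fn call_service(opts: &Options) -> u32 {
    // opts.retries = 0; // would not compile: cannot assign through &
    opts.retries + 1
}

fn main() {
    let opts = Options { retries: 3 };
    let result = call_service(&opts); // no defensive clone needed
    assert_eq!(opts.retries, 3); // guaranteed unchanged by the callee
    assert_eq!(result, 4);
}
```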

(Said otherwise, we already have languages where you don't have to pick because everything's an Object that can be locked, there's no &-vs-&mut-vs-Arc because everything's GC'd. And you're not using Java, so I think that's less what you want than you think.)


In the "dream" language, I don't want to change the requirement of specifying which things are mutable. Anyway, it was likely just a silly dream, strange things happen when I am asleep!


Unfortunately, no. I was sad to learn this during my undergrad: you do encounter notions that certain things work via "compiler magic" (or some variation of the term), so obviously I asked around about that, and it turns out there is no magic in computing. Everything is a trade-off.

*Quantum computing notwithstanding.


We already have programming languages that nearly achieve the speed and safety of Rust while handling many intricate details automatically, using an execution model similar to Rust's, without garbage collection.

A prime example is Nim, which adopts the new ARC/ORC memory management and incorporates move semantics. Nim combines the simplicity of Python with safety and speed akin to C or Rust. However, it appears that Nim is often overlooked for various reasons.

Having used Nim for nine years, I recently began exploring Rust. While I appreciate Rust, I'm uncertain if the additional effort required for explicit ownership truly justifies itself. I'm in the process of converting my small chess engine with a GTK GUI from Nim to Rust. Completing this project should give me a better understanding of the complexities introduced by Rust.

Personally, I find using curly braces instead of significant whitespace (as in Python) and terminating semicolons a minor inconvenience. However, I feel that Rust's syntax is less intuitive compared to Nim's, especially with the absence of var and const blocks and the requirement for individual type annotations for each function parameter. The lack of default parameter values and subrange types (like let mut month: 1..12) seems trivial. I'm still forming an opinion on the absence of exceptions and inheritance, and the frequent use of option types in Rust. Nonetheless, Rust's open-source nature, coupled with its welcoming community and leadership, is crucial for me.
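For what it's worth, the missing default parameter values can be approximated in Rust with `Default` plus struct-update syntax. A hypothetical, engine-flavoured sketch (the field names and values are made up for illustration):

```rust
// Rust requires a type annotation on every function parameter and has
// no default argument values; a common workaround is to bundle the
// parameters in a struct with a Default impl.
#[derive(Debug)]
struct SearchParams {
    depth: u32,
    use_book: bool,
}

impl Default for SearchParams {
    fn default() -> Self {
        SearchParams { depth: 6, use_book: true }
    }
}

// Stand-in for a real chess-engine search function.
fn search(params: &SearchParams) -> u32 {
    params.depth * if params.use_book { 2 } else { 1 }
}

fn main() {
    // Override just one "parameter"; the rest take their defaults.
    let params = SearchParams { depth: 8, ..Default::default() };
    assert_eq!(search(&params), 16);
}
```

It's more ceremony than Nim's `proc search(depth = 6, useBook = true)`, but the call sites stay readable and the defaults live in one place.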

Lobster is another language with a similar memory management strategy, though it's relatively obscure.

We should also keep an eye on Mojo and Swift. Mojo isn't entirely open-source, and Swift is closely tied to Apple. Still, they might offer some inspiration.

[Text was corrected and grammatically tuned by GPT-4 for better readability.]

And for AI: I assume that in the near future AI will be able to translate code from one language to nearly all other languages, and even write most code for us. So we may start in Python, and later convert to Rust with no effort.


It’s my impression that Swift takes inspiration from Rust but that’s entirely anecdotal.

I'm going to assume that for a long time no machine intelligence is going to be smarter than an actual bunch of educated, experienced, motivated humans.

So effectively we already have that "end game", except we are simulating it with human programmers rather than some hypothetical machine. They are equivalent, right?

So somebody has some notion of some requirements in their head. They write it down, provide a bunch of diagrams describing it, generally a specification. Typically they are customers, clients, my boss, whoever. They know nothing about Rust or any other language, they don't see or care about any intermediate form of the solution. They don't have to concern themselves with any inherent complexity of things. They just get a machine code executable back.

Well, as you have probably noticed, the likelihood that such people get back what they actually want, something that works reliably and performantly, etc., is almost zero.

To fix that they have to provide a very detailed requirement, they have to do a lot of backwards and forwards iteration with that "coding black box", in this case a bunch of humans to pin down what they really want. They have to provide very detailed tests of the behaviour they want. They end up having to be concerned about the inherent complexity of things.

It's not clear to me that replacing humans with machines in that "coding black box" is going to save those people anything or produce a meaningfully different result.

Well, apart from the fact the machine never sleeps and likely does not cost so much to run as a bunch of human programmers. But logically nothing is gained.

Ultimately the code humans write is the detailed expression of the requirements. It's not clear to me that making that non-human-readable is a benefit to the clients. I can see how it would have a lot of downsides.

Anyway, in short, the "end game" of a "code producing black box" as you describe already exists, except it's humans in the box not transistors, and see what difficulty we have with it now!


There is an argument that the best language for many problems would be one that’s domain specific. But obviously that’s not the way teams approach problems.


These are superficial details that have nothing to do with the high-level goal of a type system. Requiring type annotations is not because of "performance", it's an intentional design decision for the readability of the code. The same is true of many of the features of Rust's type system.

People who like to complain about how "the syntax is not intuitive" are usually those with very little actual experience in Rust, and (consequently) those not yet having grasped its real value, or the bigger picture as to how the features of the language co-operate to provide a coherent and unsurprising experience.


I was precisely reading this speculative idea called profiles: Easing tradeoffs with profiles · baby steps.

I prefer the explicitness of the current Rust, of course, but I can't deny that such proposal could lower the barrier for new programmers coming to Rust.


It's actually worse than that. You see, even if the machine can parse what you say perfectly¹, it will still have to contend with the same ambiguities in speech that another human has to. Which means that if this comes to pass, expect to answer a lot of ambiguity-eliminating questions from your CodeMonkeyGPT.

Then there are things like tradeoffs, which won't simply disappear just because a bot is now the coder.
How do you want to manage those, keeping in mind that it's not always one-and-done, i.e. such tradeoffs can be ongoing concerns?

I don't mean to offend, but magic is no more than an artifact of the mind. Once you understand how something works², that magical feeling...magically disappears!

Oddly though, the old quote by Arthur C Clarke ("Any sufficiently advanced technology is indistinguishable from magic.") doesn't apply here, since understanding is a predictable dispeller of that feeling of experiencing magic.

¹ Something that in 2023 is a mere dream.
² That holds true for magic tricks performed by magicians, but also technology.


You are absolutely right, and I consider syntax generally as less important, and I mostly like the Rust syntax.

Many people assert that using curly braces to define scope and semicolons to terminate statements is superior. However, my perspective is that those who have spent a considerable amount of time using both Python and C/Rust tend to develop a preference for syntax that is based on indentation. An interesting development in this context is Scala's adoption of significant whitespace in its third version. Martin Odersky, the creator of Scala, has mentioned that this change enhances overall productivity by about 10%. Wikipedia's page on the off-side rule offers more insights into this topic.

I for one certainly don't. I don't even remember when I started writing Python, but it must surely have been more than 10 years ago (since I knew the language well before I went to university). I'm using both Python and Rust regularly at work (although writing Rust code for work is less frequently needed). I used C and C++ before having switched to Rust. I always preferred explicitly delimited scopes, although not primarily due to readability; I think Python is fine to read in terms of surface syntax.

Whitespace-sensitive code, however, is by its very nature sensitive to copy-pasting and refactoring, making it a lot more error-prone than what would be strictly necessary, especially when paired with the lack of typing. I can't help but develop an innate fear of breaking my code every time I insert a block of code in a loop, conditional, or function, or pull it out. It has happened to me several times that such a change broke my code, but not in a noisy way – no errors, just wrong but plausible results at the end of the pipe. That's the behavior I consider basically the single worst possible kind of error I can make in a language.


Human time is expensive, machine time is comparatively inexpensive. Any gain from automation can be expressed as exchanging time spent by humans with time spent by machines.

Clients don’t write programs any more than specifications are programs (despite the implication in the comic above). The specification is the “what” and the program is the “how”. Documentation vs implementation.

A programming language is probably best classified by its level in an abstraction hierarchy. Machine code being lower level and containing very little (if any) abstractions for humans. Assembly is a dialect of machine code which humans can read to some extent. Macro assembler dialects add some abstractions like ASCII strings, and macros to reduce common boilerplate. C adds some abstractions like functions that hide the details of calling conventions and subroutine prologue and epilogue. Rust adds more abstractions like iterators and lifetimes.

This is all leading up to extrapolating the next layers of the abstraction hierarchy into the future. And at some point, the language will be so abstract that it will basically be "write the test and the compiler writes the implementation." As far-fetched as it may seem.

60 years ago, mobile phones were predicted as a staple of science fiction in the form of Star Trek communicators and ansibles. At the time, the highest level computer languages were FLOW-MATIC and FORTRAN. Today I am writing this on a phone, including looking up references because I can’t for the life of me remember names like FLOW-MATIC when I want to point out that “English-like syntax” languages have a vibrant and rich history. And I can use ChatGPT to write a function or test just by describing the task that I want it to do.

The compiler writing the implementation is still science fiction because ChatGPT gets things wrong too often, and the prompt almost always asks for the code written in an intermediate language instead of optimized machine code. But it does support the hypothesis that this is a step along the extrapolation roadmap.

Suggesting that AI or compilers will always have the same limitations that they have now is overlooking the fact that technology evolves. Limitations that were endemic decades ago have already been solved. And while some problems are intractable, not all of them are. Specification bugs will always exist, even if logic bugs can be eliminated by formal verification.

“Do what I mean, not what I say” is the dream programming language. I think it will always remain a dream. But languages will evolve and gain new abstractions that seem a lot like magical super powers if we try to imagine them today. And many of them will inevitably be powered by AI.


It's intended to be. Not sure of the timeframe though but I assume no later than 1.0.