Rust as a High Level Language

Ignoring any caveats about library support, since this is a discussion of the merits of the language as designed, Rust is one of the most suitable languages in use today for high-level application development. The reasons are many.

Baseline and Commonplace Reasons:

  • Statically typed. This catches many errors almost immediately, before the application is ever run, and is a boon to refactoring, as the type system will point out every place your program needs to change when you make a small modification.
  • Generic. This allows you to write your algorithms once and let the compiler parameterize the function to any type you might need it for.
  • Well-designed module system that never gets in your way

Uncommon Among OO Languages:

  • Algebraic Data Types, aka structured enumerations, which express certain types of logic and data structures more efficiently and more expressively than complicated class hierarchies and ad-hoc polymorphism
  • pattern matching, which is an expressive, hassle-free way of binding names to parts of a data structure, and also of expressing highly complicated "case-switch" logic
  • immutability by default, which is much more often the right thing rather than the wrong thing
  • first class functions and closures, which are much more expressive ways to do a strategy pattern and single-method interfaces
  • bounded parametric polymorphism through traits, which is when you impose interface-like constraints on generic type parameters. Traits and interfaces are similar, but certainly different in a few key ways like...
  • Multiple dispatch. Implementing a trait based on 2 or more types in combination instead of just one.
  • Associated types, which are used to express type families that implement an interface. This will often greatly reduce the number of generic parameters. For instance, in languages with generic interfaces, one must write a type parameter for each abstract type, but in Rust you can often eliminate them all if the type implementing the trait uniquely identifies the rest of the associated types.
  • Slices, which are a cheap way to reference a contiguous segment of an array. These are sorely missed when I use other languages.
  • Value types. Sure, you say performance is not important, but the fact is that having this kind of control over memory layout and passing by value never hurts in the off chance that you have a hot loop or something else that needs to avoid the indirection.
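A compact sketch of a few of the bullets above (an algebraic data type, pattern matching, and a slice), with names invented purely for illustration:

```rust
// An algebraic data type: each variant carries different data.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Pattern matching binds names to the parts of a value directly.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

// A slice (&[Shape]) borrows a contiguous view of an array or Vec
// without copying anything.
fn total_area(shapes: &[Shape]) -> f64 {
    shapes.iter().map(area).sum()
}

fn main() {
    let shapes = [
        Shape::Rect { w: 2.0, h: 3.0 },
        Shape::Circle { radius: 1.0 },
    ];
    println!("{:.2}", total_area(&shapes));
}
```

Note that the `match` is checked for exhaustiveness: adding a third `Shape` variant makes `area` fail to compile until it is handled, which is much of the refactoring benefit described above.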

Uncommon in functional languages:

  • Mutability
  • Value types
  • Unboxed closures
  • Associated types (this is an extension in Haskell!)
  • A great macro system. F#'s and Haskell's are... less than savory.

Things Unique to Rust:

  • Unaliasable mutable pointers. Usually, either the language screws you over by making everything immutable, or it acquiesces and lets in all the pain of pointer aliasing. Rust treads new ground and gives you mutability without spooky action at a distance.
  • Memory safety by default, even in concurrent settings. The compiler will never let you cause a data race without unsafe code, ever.
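A tiny illustration of the "unaliasable mutable pointers" point, assuming nothing beyond standard borrow-checker behavior: while a shared borrow is still in use, the compiler rejects any mutation of the borrowed value, so a reader can never be invalidated behind your back.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let first = &v[0];   // shared borrow of an element
    // v.push(4);        // ERROR if uncommented: `v` cannot be mutated
    //                   // while `first` is still used below
    println!("first = {first}");

    v.push(4);           // the borrow has ended, mutation is fine
    assert_eq!(v, [1, 2, 3, 4]);
}
```

The commented-out `push` is exactly the kind of "spooky action at a distance" (a reallocation invalidating a live reference) that the borrow rules turn into a compile-time error.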

I don't know how you could say that Rust isn't suitable for high-level application development. Want more proof? Rust and Swift are almost identical in feature sets (Swift keeps some OO features and exception handling, while it doesn't have all the same safety that Rust does), and Swift is the new application development language from Apple.


I also question your notion that Rust forcing you to "think explicitly
about scope and state" will improve the chances of the result being
"understandable". Bad programmers will find a way to write bad code in
any language.
But forcing the good ones into a box that is rigid in
order to avoid garbage collection does not strike me as a sensible path to
readable code. (Remember, we are talking about using Rust as a "high level
language" in this thread. I don't quarrel with the basic idea of Rust as a
low-level high-performance language designed for applications where garbage
collection pauses can't be tolerated.) I think it's far better to give
skilled programmers the tools to write well-organized code, as Scheme and
Haskell do, without forcing them into a rigid model. Programming is an art
and there are many ways to do it well, just as there are many ways to play
Bach well.

I think we have such a deep and ontological difference in opinion that it
may not be possible for our conversation to make progress,

I think you are right, though I must say that in a long life I don't think
I've ever had a difference with someone who referred to that difference as

so I'm not sure how far we should take this. But I absolutely do not think
it is true that bad programmers are the reason there are bad programs (in
reference to the portion of your post I have emphasized).

What is more accurate is to say that bad programs are the result of bad

That is a pretty amazing statement. Have you run software projects? Worked
with a wide variety of programmers? Run big software projects? I have, over
an almost 50 year career. As with tennis or piano playing, there is an
enormous range of skill among programmers. I do not consider C to be a
"good language", but I have seen some beautifully organized, beautifully
crafted C programs. And really bad programs written in "good" languages.
And I will make the argument that in my experience, the determining factor
in the quality of a program is the skill of the programmer. And it is at
the architectural, organizational and algorithmic level that these people
excel. Writing small programs is easy, so that's not what we are
discussing. Writing very large programs is extremely hard and few have the
ability to operate well at the forest level and organize them properly.
Compared to this, the language used is less important, but not unimportant.
Just as the tools used to build a house are important, but not as important
as the work of the architect.

but even then I am unhappy saying that some languages are "bad" rather than
saying that we are all experimenting with finding ways of expressing our
programs so that they are maintainable and extensible under changing
circumstances. Language features matter, much more than human skill in my
opinion. You say so yourself only sentences later, when describing Haskell.

You are completely mis-interpreting what I wrote. What I wrote has the same
meaning as the paragraph I wrote above.

I am also extremely surprised that you find Rust's ownership and
borrowing rules so much more restrictive or rigid than Haskell's rules
around purity and referential transparency. It's inconceivable to me,

Have you done any real implementation in Haskell? You are talking about
theory above. Einstein once said "In theory, there's no difference between
theory and practice". In practice, none of that stuff affects your work
much. Writing Haskell code is very much like writing statically typed
Scheme, complete with tail-call optimization and lazy evaluation. And there
is a large set of good libraries, both official and contributed, that you
can draw upon, which is a great source of its power (as is true of Python and
Tcl; Tcl is pretty awful as a programming language, but just fine given
Ousterhout's original design goals; but you can do amazing things with it
because of the amount of ready-made stuff in its environment that you can
draw upon).

I would also comment that there is an issue with the complexity of Rust
and the questionable quality of the documentation. I think the combination
results in a language that is inordinately difficult to learn. It may well
be, though, that the documentation isn't where it needs to be yet because
the language is simply too complex (and, from what I've read, has been a
moving target for a long time). I have learned Go in the last year and
written a few Go programs, one multi-threaded. Go and Rust are roughly the
same age, but learning and using Go was far less frustrating than was the
case with Rust, and the documentation was far superior. But Go is a less
complex language than Rust, and perhaps its superior documentation was at
least in part due to the fact that it is easier to describe.

I don't know or use Go and have never read its docs, but I find Rust's
documentation to be better than any other I have used by a wide margin.

First of all, I have exchanged a number of emails with Steve Klabnik about
The Book and have opened an Issue, where I've contributed a number of
suggestions. It is not my intention to denigrate his work, because there
can be many causes for the aspects of the documentation that I consider
inadequate. He continues to work on it, clearly understanding that it's not
a finished product yet. Documenting this language is a big job and I'm not
at all surprised that the first version has flaws. Then we have the "Rust
Reference", which we are told is not a language spec and "tends to be out
of date" and which I have found to be not terribly helpful.

As for your statement above, I ask again, have you used Haskell (and used
the 2010 Language Report, or Paul Hudak's "Gentle Introduction" or the
library documents or the online "Learn You A Haskell ...")? Have you used
Scheme? The R5RS report is a model for language descriptions. C has been
graced by two books, K&R and Harbison and Steele that are both
fantastically good (and better than the language itself, in my opinion).
Python is very well documented, as is Tcl (Ousterhout's first book is a
fine work).

I completely disagree. Rust is asking you to deal with memory management
manually, just as C does. Languages that provide garbage collection do not.
That is one less major task a programmer must deal with, freeing up mental
bandwidth for more important things, such as good algorithms and good

I missed this paragraph because of markup issues. This simply isn't true.
While it's accurate to say that in Rust you can determine when memory is
allocated and deallocated, unlike in most garbage-collected languages, it is
not at all like C: deciding when memory is freed becomes the same question
as deciding when bindings are created and when they go out of scope.

It is like C in the sense of the programmer being part of the memory
management system, unlike languages with GC support. What the programmer
has to do is quite different in the two languages and Rust provides the
significant advantage of insuring that what the programmer does is correct.
I thought I made that clear in an earlier post, but apparently not.

Your experience is interesting, because mine's been different. I've been using Rust for about 6 months and find it really productive.

It's probably the first language I've used though that I've felt guides you towards efficient programs without sacrificing design-time expressiveness. I haven't used either Haskell or Scheme though.

I feel like Rust allows me to write programs whose determinism is tied to business rules without sacrificing runtime performance. I'd like to find out whether that can translate into penny pinching on infrastructure and run my Rust programs on a box of Jatz.


I know a bit about the Python docs, so let me say this: they are old. (I don't mean outdated, mostly.) Many parts like the introductory tutorial have had > 15 years to mature, and have seen lots of incremental changes over that period, because of users' reports and suggestions.

For the age of the Rust project, and the amount of change the language has undergone before 1.0, the quality of the docs is phenomenal. The reference could use some updates, yes, and a formal grammar would be nice, etc. But please, give the guys some time, both to work on it themselves, and for the reports to come in.

Books like Learn You a Haskell, on the other hand, are community contributions, and don't have to come from the core team. This is something that, again, takes time and a bit of momentum for the language. We already have quite a few works like the Rustonomicon that are in this style, and I'm sure that many more (especially introductory books) will be coming in the next years.


Isn't that ad hoc polymorphism?

According to John Mitchell's Concepts in Programming Languages,

The key difference between parametric polymorphism and overloading
(aka ad-hoc polymorphism) is that parametric polymorphic functions use
one algorithm to operate on arguments of many different types, whereas
overloaded functions may use a different algorithm for each type of
argument.
And maybe too "high level" because you basically have to learn category theory before you can do some things. Not to disparage Haskell, because I agree it is really great for its target use cases and demographics. That is, if you want to learn what a Monad is just to do I/O and any imperative-style programming. And with no parentheses grouping function-call arguments, you need to memorize the definition site of each function in order to group its arguments when reading the code. Haskell seems to be for a mathematical mind, and it inverts the type system to coinduction, thus populating every type with Bottom, i.e. the conjunction of all types. Whereas many programmers want to think more inductively (where Any is the Top disjunction of all types) and imperatively, as they are accustomed to coming from C, C++, Java, Python, Javascript, etc.


Hmm, I think I really should look at Haskell someday. It seems to come up a lot in these language discussions.

Rust has a bottom type though. We spell it !. Anyway, all languages that don't have a compile-time termination checker have to contend with ⊥ on some level.
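For reference, a small sketch of Rust's bottom type `!` in action: a diverging expression has type `!`, which coerces to whatever type the surrounding context needs, so it can sit in one arm of a `match` whose other arm returns a value.

```rust
fn parse_or_die(s: &str) -> i32 {
    match s.parse::<i32>() {
        Ok(n) => n,
        // `panic!` has type `!`, which coerces to `i32` here,
        // so both match arms type-check.
        Err(_) => panic!("not a number: {s}"),
    }
}

fn main() {
    println!("{}", parse_or_die("42"));
}
```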

Someone confirmed it is ad hoc polymorphism, and my reply which is relevant in this thread also:

I am surprised that, afaics, the ad hoc polymorphism is not mentioned anywhere in the documentation. Neither the Expression Problem nor the Wikipedia entry on Composition over Inheritance is mentioned in the documentation either. For me, if considering Rust as potentially a better "high-level" language, the ad hoc polymorphism in a language which does not have Haskell's coinductive type system seems to be unavailable in any other C/C++-derivative (potentially mainstream) language?

I'm suggesting the documentation maybe could be improved to proactively explain to incoming OOP (aka subclassing) converts why they typically don't want to be using the anti-pattern of OOP virtual methods, and instead want late-binding dispatch at the call site rather than at the declaration site. In other words, ad hoc polymorphism un-conflates the interface from the data type (making them orthogonal), and the binding of the interface to a data type occurs at the function call site, not at the data-type, interface, or function declaration sites. Of course there are some tradeoffs, but the inflexibility of premature binding is removed.
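The point about un-conflating interface from data type can be sketched with ordinary traits (all names here are invented for illustration): the trait is declared apart from any type, the binding happens in a separate `impl` block, and the interface can even be retrofitted onto a pre-existing type with no subclassing involved.

```rust
// Interface declared independently of any data type.
trait Describe {
    fn describe(&self) -> String;
}

struct Point { x: i32, y: i32 }

// Binding of interface to type happens in a separate impl block;
// it could live in a different module from `Point` itself.
impl Describe for Point {
    fn describe(&self) -> String {
        format!("({}, {})", self.x, self.y)
    }
}

// The interface can also be retrofitted onto an existing type
// that was defined with no knowledge of `Describe`.
impl Describe for i32 {
    fn describe(&self) -> String {
        format!("the integer {self}")
    }
}

fn main() {
    println!("{}", Point { x: 1, y: 2 }.describe());
    println!("{}", 5_i32.describe());
}
```

Neither `Point` nor `i32` had to be declared as "inheriting" anything; the relationship is established after the fact, which is the flexibility being contrasted with virtual methods.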

For a mainstream high-level language, I am starting to contemplate whether I wish Rust's ad hoc polymorphism were available in a strongly typed language that had GC without the verbosity, and didn't basically force on us by default the noisy and complex type system of modeling lifetimes and memory (which apparently even infects generics with the 'a syntax ... I haven't learned that yet though). The lifetimes and memory allocation feel too heavy (a PITA) for a language that most programmers would want to use most of the time. Sometimes you want that control, but always by default? And a mainstream language without first-class (i.e. not a library) async/await is becoming an anathema.

I think Haskell is an excellent choice for "high-level" situations. It
takes some doing to learn it

And maybe too "high level" because you basically have to learn category
theory before you can do some things.

How much Haskell have you written? My guess is "not much", or maybe "none".
I've written a LOT of it and what you say above is absolutely not true. You
simply learn the 'do' construct and why and when you need it, which is not

Not to disparage Haskell because I agree it is really great for its target
use cases and demographics. That is if you want to learn what a Monad is
just to do I/O and any imperative style programming.

Which takes a short period of time with Paul Hudak's "Gentle" introduction,
or "Learn You a Haskell". It's comparable to learning any programming
language. The only thing that makes learning Haskell more of a challenge
than, say, Python, is that Haskell is different in some important ways
(functional language, lazy evaluation) from more mainstream languages, and
so requires a somewhat different mindset. Not unlike ownership, borrows,
and lifetimes, but far easier, in my experience (though I will concede that
the difficulty I encountered with Rust, and I am far from alone, was, at
least in part, due to the state of the documentation at the time I
attempted to use the language; hopefully, that is a temporary situation).

And no parenthesis grouping functional call arguments, so you need to
memorize the definition site of the functions in order to group the
function arguments in order to read the code.

Again, untrue. It is always clear from the code what function-call
arguments are, otherwise it wouldn't compile!

Haskell seems to be for a mathematical mind and it inverts the type system
to coinduction thus populating every type with Bottom, i.e. the
conjunction of all types
Whereas many programmers want to think more inductively (where Any is the
Top disjunction of all types) and imperatively as they accustomed to
coming from C, C++, Java, Python, Javascript, etc.

You are again demonstrating what I am guessing is a lack of real experience
with Haskell, and therefore you don't know the difference between theory
and practice. I have written a lot of code in all the languages you mention
(and many you haven't -- I've been doing this for a very long time) and
none of the theory you spout above was remotely a consideration. It's for
academicians to debate and write papers about, but it doesn't come up in
the real world.

The fact is that today's Haskell is a highly developed programming
environment with a large library of useful tools and an amazing compiler
(GHC). The language is very expressive -- used correctly, Haskell programs
are very concise. And much debugging is moved to compile-time (avoiding
some less efficient run-time debugging), because of the strong typing and
excellent compiler diagnostics. It's a great way to quickly develop correct
code that performs well.

My interest in Rust was the possibility that it could serve in places where
I presently use C and that it shares some of the characteristics that I
value in Haskell. I hope to come back to it at some point when the
documentation is improved and find that it is useful to me. But I do not
think that Rust and Haskell are comparable languages, any more than C and
Python are comparable. Their areas of appropriate applicability are quite
disjoint. This is because there is an inevitable trade-off between
ease-of-use and performance. Because of the power of today's hardware, we
can do a lot of useful things even with interpreted languages, and with
compiled languages that deliver less-than-maximal performance, like Haskell
and Scheme. But when maximum performance really is a top priority (as
opposed to something in the imagination of a programmer doing premature
optimization), then languages like C and Rust have a role. But you pay a
coding-time price for their use and I don't think that will ever go away.

I am a professional C++ programmer, and I now use Rust not only as a C++ replacement in my side projects, but also in places that originally belong to scripting languages. While it is good to see Rust attracts a wider audience, I hope Rust will not make compromises to the current design philosophy in an attempt to attract and retain wider audience when facing trade-offs.

Rust is a systems programming language that puts safety first. Among other things it has the same zero-cost abstraction philosophy as C++; that is, as Bjarne Stroustrup puts it, you don't pay for what you don't use.
It favors explicitness and correctness over the ability to write one-off code fast when the two are in conflict, because being able to write correct code that is easy to read and maintain is more important than being able to type fewer characters for the type of programs Rust is aiming for.
Rust also makes strong recommendations on the right way to write programs, but does not dictate it. This is reflected in the fact that Rust's design explicitly makes it easy to write correct code, and hard but not impossible to write potentially bad code.
These design choices, in my opinion, make Rust the best for systems programming. And I hope it will always be the best systems programming language, rather than just a good language for everything.


Please, I wrote a respectful post. No need to introduce dubious (and frankly condescending and a bit arrogant in tone) ad hominem assumptions. I've coded in many languages also, over a roughly 30-year career. Haskell does invert the type system from inductive to coinductive, and this introduces advantages and disadvantages. I won't go into all the details further here (a link was already provided to Robert Harper's blog post). I was merely trying to point out that reasoning about Haskell down to the "nuts and bolts" requires understanding many other layers such as Monads (category theory), memoization, lazy evaluation, coinductive types, etc. It is high-level in the sense that it is a very powerful semantics built with powerful abstractions.

That was my point also.

Not only that. There is also a tradeoff in the power of the abstractions we choose and whether they are comprehensible to other people that need to read and work on the code. Also whether those abstractions are the ideal fit for the task and use case.

This formerly absolutely valid point is weakened or mitigated as mobile eats the desktop and battery life also depends on performance. Nevertheless the cost of the programming effort remains a variable in the equation, so there is a balance which varies for different use cases.

This is why I argue the low-level languages should be used after profiling the code and knowing where the performance bottlenecks are. Because, for example, modeling memory allocation (lifetimes and scopes) in the type system infects the type system everywhere (e.g. even 'a in generics apparently), so the code base presumably gets more unmanageable over time because there is an exponential explosion of invariants. Because unsafety can propagate from any untyped code (the entropy is unbounded due to being a Turing-complete machine, i.e. not 100% dependently typed), the notion of a completely typed program is a foolish goal. It is about tradeoffs and fencing off areas of maximum concern. The human mind still needs to be involved.

So please don't try to hold Haskell up as the high-level solution with the only tradeoff being performance. The reality isn't that simple.

Edit: for example (just so you don't think I am BSing), Haskell can't offer first-class disjunctions without forsaking the global inference that provides some of Haskell's elegance. Without first-class disjunctions (which Rust also doesn't have), composition via ad hoc polymorphism is somewhat crippled. I suspect some of the need for higher-kinded types could be avoided if Rust had first-class unions, which Rust could, I presume, implement, because it doesn't have global inference.


Yes, but my point is that Bottom in Rust is not at the top of all types; i.e. Bottom (the conjunction of all types) is at the bottom of the type hierarchy in Rust, whereas in Haskell it is at the top of the hierarchy. At the top of the hierarchy in Rust is, afaik, Any, which is the disjunction of all types, but this is at the bottom of the type hierarchy in Haskell. This inversion of the type hierarchy in Haskell to coinductive has advantages and disadvantages.

Moderator note: All, please keep the conversation constructive.


My subjective opinion follows. Full respect for the person I am replying to is intended.

One can also avoid segfaults by using GC everywhere, and then avoid all that tsuris of typing the memory lifetimes and scopes. So what you are really implicitly claiming is that you need performance everywhere. But do you really? And what are you forsaking in productivity by not using a language with less tsuris?

I'll posit that programmers need productivity more than performance 80+% of the time. The amount of code that needs to be optimized in an application for performance is usually the smaller portion.

If we shift the focus to ad hoc polymorphism, then I'll agree with you that we need this 80+% of the time so we can express the semantics of our program, which can increase our productivity through:

  1. S-modularity (separation-of-concerns, SOLID principles)
  2. Fewer bugs
  3. Self-documenting code, code that is easier for others to learn and easier for the author to revisit
  4. Greater decentralized collaboration because of #1 - #3, which fits well with the DVCS open source, virtual work model.

This is why I am prioritizing the ad hoc typing over the typing of memory-deallocation invariants. I'd still like to have the latter, but I certainly would not have made it the default at the cost of making the GC case fugly and noisy. Why make the default the noisier and less often used case, which then makes very verbose the more often used case that could have been noiseless?

Are all portions of systems programming code fully optimized? So Rust is for programming operating systems, web browsers, and FinTech (banks)? I bet even those can benefit from high-level code that prioritizes expressiveness, concise readability, and other measures of productivity and human maintainability, rather than memory-allocation performance optimization everywhere.

It is not zero-cost. There is a cost in terms of human factors and also potentially as I wrote:

I am very skeptical of the claim that memory lifetime and scopes typing everywhere is the right and correct way to program.

I am very skeptical of the claim that memory lifetime and scopes typing everywhere is the right and correct way to program.

Given that the other options are manual memory management (without compiler support) and using a GC (and still suffering from data races/iterator invalidation). I don't share your scepticism.


To the extent the bold portion is predominant and we ignore the italicized "everywhere", I agree with you. That is why I wrote "skeptical" and not "certain" nor "confident".

However, I am roughly certain that it is trading one problem for another, in that the entropy of programming is unbounded and thus we can't type the universe around us (the I/O, the openness to other modules, plugins, FFI, DSLs, etc.). So we'll end up with data races and other errors in Rust also. And if we try to type the memory lifetimes everywhere, we will end up with a much more brittle type system. If we instead limit its exposure, perhaps we can keep the explosion of invariants sane.

So again my skepticism is about wanting to use Rust's compile-time memory lifetimes model everywhere, not about the value of using memory lifetime and scopes typing somewhere. I am talking about balancing priorities, not about absolutes.

My subjective opinion of course.

P.S. I am the guy who wrote in 2011 that Rust was wasting its time implementing Typestate, because it proposed to have two orthogonal means of enforcing invariants and thus would create corner cases of a non-unified typing system. Typestate was removed from Rust.

Edit: my lesson learned from Scala was that all the complex typing and corner cases (which have caused companies to abandon Scala) also ignored one basic fundamental flaw, which is that subclassing is an anti-pattern. Complexity kills. Often a high-level perspective can route around it. K.I.S.S. principle.

Edit#2: data races can also be attacked with co-routines and/or async/await with a single-threaded model, which adds functionality, not really tsuris.

Edit#3: iterator invalidation can also be attacked with functional programming and Functor map.
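As a sketch of Edit#3, using only plain iterator adaptors: mapping into a fresh collection sidesteps invalidation entirely, because nothing is mutated while it is being traversed.

```rust
fn main() {
    let prices = vec![10, 25, 40];

    // Instead of mutating `prices` while walking it (the classic
    // invalidation hazard), build a new collection functionally.
    let discounted: Vec<i32> = prices.iter().map(|p| p * 9 / 10).collect();

    assert_eq!(discounted, [9, 22, 36]);
    println!("{discounted:?}");
}
```

Notably, Rust reaches the same safety from the other direction too: mutating a `Vec` while iterating it simply doesn't compile, so the functional style here is a preference rather than the only safe option.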


You've said elsewhere that you haven't done that much Rust yet; experienced Rust users don't report a ton of difficulty with this. There's a learning period, and then it's not hard anymore.

Also, you end up using lifetimes a lot less often than you'd think. For example, over the weekend, I built a little text adventure for Ludum Dare. It's about 350 LOC, and loads a game from a TOML file and plays it. Not a ton of features, but still: I wrote exactly zero 'a lifetime annotations in the entire thing. And I only have one place where I did some cloning that I may not have to. (source: )


I appreciate your thoughtful reply and patience with my lack of experience with Rust.

I'd also like to read about the use cases which I can only do best with Rust's memory ownership model, which can't also be solved with:

  1. GC to remove segfaults
  2. Single-threaded asynchronous programming to remove race conditions
  3. Functional programming such as Functor map to eliminate iterators (and their invalidation)

Obviously where we need more performance, #1 is not acceptable. I'd like to see other cases enumerated here or in some document. Not a demand, just stating what I'd like to learn.

Any resource which isn't memory. Files, sockets, anything like that. GC only really helps with memory, but not anything else.