Negative views on Rust (not mine!)

Cloud programming. It's almost the exact polar opposite of systems programming.

And they are built on top of hardware which is not reliable and fails constantly. And they use programs which may crash at any time. And all of that is tuned to redo the work if a failure happens. There are multiple levels of mitigations, but you don't try to avoid failures, you mitigate them. Simply because avoiding failures is impossible: hardware itself fails constantly.

A very, very different world from normal systems programming, where you try to avoid problems rather than develop ways to paper over crashes after they have already happened.

It's possible to write software for the cloud in Rust (and most cloud designs use the Linux kernel, which is extremely robust, not some fancy kernel written in C#). But all of that is optional. Once you have said “every node can stop responding at any time simply because a PSU decided to die”, you have entered an entirely different realm where the need for the software itself to work reliably is much less important: sure, your code (written in Go or Java) may start to consume memory like crazy and then die… it's not a big deal if that doesn't happen too often: the same mitigation measures designed to fight dead PSUs work for OOM, too. Deadlocks? No problem: just ensure the process is killed when they happen. And so on.

How? Their managers measure their efficiency by the time they need to close issues in Jira. The quicker they can close them, the bigger the bonus they may earn.

Rust makes that rather difficult, and the fact that there may be fewer Jira issues if you use Rust is rather hard to prove (even if it's true).


That is exactly what I am disagreeing with.

I find cloud and embedded systems to have very similar requirements. For both of them:

  1. They are required to run forever.

  2. They are required to run correctly.

  3. They run on unreliable hardware. All hardware is.

  4. When you talk to people concerned with keeping customers on web pages, there are even strict real-time limits involved.

I spent many years working on safety-critical avionics systems. Guess what:

  1. The hardware was assumed to be unreliable. Hence the use of multiple redundant systems and multiple communication channels between them.

  2. Software was also assumed to fail at times. Hence the use of hardware watchdogs.

To my mind the 4 computers and 16 processors of the multiple-redundant system of the Boeing 777 Primary Flight Computers (The ones that move all the control surfaces and react to pilot and auto pilot input) don't look much different from the cloud services built and run by our little company or the huge systems of the likes of Google.

A PFC processor, or power supply, or connectivity can fail at any time. The aircraft has to continue flying. Just like typical cloud services.

I will happily call all of that "systems programming". The requirements and the solutions have a lot in common. Including choice of programming language.

"Cloud" is not just some nerd knocking up a web page in PHP to run on a single instance on Digital Ocean or wherever, not much worried if it goes offline or gets hacked occasionally. There is much more going on in cloud systems.

Nope. Google doesn't even measure maximum latency. Only percentiles: the 90th percentile, 95th percentile, 99th percentile…

Well… this may sound similar to what happens in the cloud. And if true, it makes that particular subset of embedded software similar to cloud development.

But the majority of the hardware and software is the total opposite: hardware is considered infallible, and software must ensure it's infallible, too. One example: recently systemd got a new option, HandlePowerKeyLongPress, because, you know, arranging hardware support for that “4 sec = reboot” behavior is too costly for phone makers, TV makers, and smart lamp makers. They want to remove that thing (and they removed the ability to disconnect power and the reset button long ago, so if the kernel misbehaves there is literally no way to reboot the thing… you would need to visit a service center).

Does it look even remotely similar to cloud to you?

I got your point. Maybe. But that one is very much an exception. And Boeing spends tremendous sums to eliminate it and make it like all other embedded systems: cheap == right.

What I gather from your definition of "systems programming" is that:

  1. The software runs on a single core processor. Say a micro-controller.
  2. Said hardware is 100% reliable.
  3. The software has to be 100% correct and reliable.
  4. There are hard real-time requirements. Likely down in the milliseconds/microseconds.
  5. You have total control of all the software running on the machine.

I have to say that while that is one end of the spectrum and there are thousands of embedded systems engineers working on such things, I don't think that is what most people think "system programming" is.

Seems we are arguing about a matter of degree. The degree to which software has these properties and requirements or not.

I don't think it worth arguing back and forth any further about it.

Reminder, folks, that if you're replying back and forth many times in a few hours, it might be a good time to take a break.

Arguments about definitions rarely produce useful results.


Fair point.

Amending my statement from “Rust will likely be successful in those areas proportionally to how suitable it is for that definition of systems programming.” to “ Rust will likely be successful in those areas proportionally to how suitable it is perceived to be for that definition of systems programming.” would make it more accurate.

I fully agree. If performance didn't matter, everybody would be writing in Java and just marking every method as synchronized. You would never see any race conditions or pointer errors.

Designing easy-to-use and safe languages is easy if you don't mind runtime performance (latency requirements, memory requirements, and computing requirements). A microkernel OS (such as Hurd) sounds fine in theory, but once you try to implement it for real, the latency will kill your project.

Rust is designed for the same use case as C and C++: the absolute maximum performance the hardware can support. In addition, Rust can guarantee memory safety and avoid data races, too. To make that possible, it must limit the language compared to C/C++ and require you to learn to live with the borrow checker.
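
As an aside, a minimal sketch of the kind of restriction the borrow checker imposes (the commented-out line is the one it would reject):

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // A shared borrow freezes `v` against mutation for as long as it is used.
    let first = &v[0];
    // v.push(4);  // rejected: cannot mutate `v` while it is borrowed
    assert_eq!(*first, 1);

    // Once the shared borrow is no longer used, mutation is allowed again.
    v.push(4);
    assert_eq!(v, vec![1, 2, 3, 4]);
}
```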

As far as I can tell, any new language that leaned on garbage collection instead of lifetimes would be pretty pointless. We have dozens of them already.

The whole anti-aliasing/lifetime thing is the unique selling point of Rust. An entirely new concept that has not existed in any mainstream language before.

What other good points of Rust are there that do not exist elsewhere already?

I'm guessing they don't want to have to give up garbage collection or go pure-functional to get a standard library and ecosystem that grew up around the availability of monadic error handling, sum types, and no NULL.

(As opposed to things like Java, C#, and TypeScript which are trying to retrofit those features.)

That's certainly what I'd go for if I were in a situation where I wanted garbage collection. No exception-based error handling, no integer result codes where a proper Result or other data-bearing enum should be, and no null/nil/None.
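
A small sketch of what that style looks like in Rust (the `parse_port` helper and its error enum are made up for illustration):

```rust
// Data-bearing error handling: no exceptions, no sentinel integers, no null.
#[derive(Debug, PartialEq)]
enum PortError {
    Empty,
    NotANumber(String),
}

fn parse_port(input: &str) -> Result<u16, PortError> {
    if input.is_empty() {
        return Err(PortError::Empty);
    }
    // The error carries data: the offending input string travels with it.
    input
        .parse::<u16>()
        .map_err(|_| PortError::NotANumber(input.to_string()))
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert_eq!(parse_port(""), Err(PortError::Empty));
    assert_eq!(
        parse_port("http"),
        Err(PortError::NotANumber("http".to_string()))
    );
}
```

The caller is forced by the type system to acknowledge both cases; there is no way to silently ignore the failure path the way an unchecked exception or a null return allows.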

That sentence has me wondering…

Are there people who actually want garbage collection? I would have thought that what they want is a way to write programs and have the machine do what they ask.

I wonder because my first introduction to programming was with BASIC. Presumably that had garbage collection. Little did I know. I just wrote code, and if I got it right the machine did what I wanted.

Next up was Algol. Which I assume does not have garbage collection. Little did I know. I just wrote code, and if I got it right the machine did what I wanted. With the bonus that it was so much faster.

Then came working life. Programming in assembler. Certainly no garbage collection there. Little did I know. I just wrote code, and if I got it right the machine did what I wanted. With the bonus that it was faster still.

It was not until a decade or more later I started to hear about this magic thing called "garbage collection". I think that was with the arrival of Java. Given the world I worked in I never did see the point of that.

Anyway, I now argue that any language that uses garbage collection is necessarily a lot less efficient. Thus burning more energy. Thus putting more CO2 into the atmosphere. Thus causing more global warming.

Nobody wants that today. Do they? :slight_smile:

It depends on how you value the pros and cons you're trading off.

I have yet to see a situation where I want garbage collection in my own projects, but I don't program complex graph-based algorithms, I have a rickety old Athlon II X2 270, and I still grumble that I had to upgrade my RAM to 32GiB because of those damn web browsers when I still remember buying 16GiB so I could run three or four VirtualBox VMs for testing in parallel with the stuff I run now.

Not necessarily. At heavy allocation rates of short-lived objects, GC freeing being O(live) instead of the O(dead) of manual freeing can overcome the tracing overhead. And similarly, when you're usually not hitting memory pressure but have heavy sharing, avoiding the reference-count updates can make GC cheaper overall.


"Not necessarily" may well be true. There may be a GC based language that works really well that I have not heard of.

However, so far I have not heard of such a thing and cannot find evidence that such a thing exists.

See The Benchmark Game: Box plot charts | Computer Language Benchmarks Game

I would be interested to see evidence to the contrary.

Given the huge number of replies in this thread, I haven't read through the thread. But after seeing this thread popping up several times, I thought I'd read into the originally linked article:

I think the core advantage of Rust is missed in this post:

  • Proper declaration of mutability and ownership

Many languages will not allow you to create an API where it's clear whether a passed value is only read, will be mutated, will be rendered unusable, etc. In Rust this is very clear. We can pass values:

  • by moving them
  • through a shared reference
  • through a mutable reference

This doesn't only allow us to get rid of garbage collection; it also makes things more secure.
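
A minimal sketch of how those three modes look at the API level (the helper functions are made up for illustration, but the signatures alone tell the caller what may happen to the argument):

```rust
fn consume(s: String) -> usize { s.len() }   // by move: caller gives up the value
fn inspect(s: &str) -> usize { s.len() }     // shared reference: read-only access
fn grow(s: &mut String) { s.push('!'); }     // mutable reference: may be modified

fn main() {
    let mut s = String::from("hi");
    assert_eq!(inspect(&s), 2); // `s` is still usable afterwards
    grow(&mut s);               // `s` is now "hi!"
    assert_eq!(s, "hi!");
    let n = consume(s);         // `s` is moved; using it again would not compile
    assert_eq!(n, 3);
}
```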

I should add that Haskell has similarly strict semantics because it is purely functional. But in the case of Haskell, this comes at the price of garbage collection at runtime, if I'm not mistaken.

About Async-Rust: I feel unsure yet. I think it's justifiable that not everything is async because it introduces overhead. I also think it can make sense to have different async runtimes. But I know too little about the implications yet to give a good comment on that. So far, my experiences with async Rust (using Tokio) were quite good. Pin/Unpin can be confusing at times, but I do understand why they exist. The only thing that's really bugging me is problems with async Traits, and the issues of futures being Send and/or Sync and how to declare that (or not to declare that). I hope a good solution will be found soon.

Regarding "The 'friendly' community": I have had similar thoughts. Right now the Rust community is awesome. But once Rust takes a more mainstream position (which I believe will happen)… who knows if forums like this will be the same then. But that's not a reason not to use Rust. This forum helps a lot to overcome the missing bits and pieces in the documentation and specification, and I'm sure there will be more resources on Rust in the future. I'll enjoy this time of getting to know Rust while it still grows up and has a higher ratio of enthusiastic and intrinsically motivated people using it. :blush:


Given that you only have to do the O(n) tracing after every Θ(n) allocations, it amortizes well. And this gives you only 2x memory use overhead.

Thus, garbage collection is actually in theory asymptotically faster or better than standard allocators. You get amortized constant time allocation per byte, and constant factor memory overhead. This can't be done with any scheme that doesn't move objects around in memory.
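
The amortization argument above can be spelled out as a back-of-the-envelope calculation (assuming a trace costs time proportional to the live data, and the collector runs after at least that many fresh allocations):

```latex
% n_live = live data at collection time; collector runs after n >= n_live
% fresh allocations; tracing costs c * n_live.
\text{amortized cost per allocation}
  = \frac{c \cdot n_{\text{live}}}{n} \le c = O(1),
\qquad
\text{peak heap} = n_{\text{live}} + n = 2\,n_{\text{live}}
  \quad \text{when } n = n_{\text{live}}.
```

Choosing n = n_live recovers exactly the quoted 2× memory overhead together with amortized constant-time allocation.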


In reply to no particular one of the recent answers, but on the topic of the sentiment that one could "replace lifetimes with garbage collection" to create a language that takes some of the features from Rust, but not all,

note that AFAIK most of the unique features of Rust are also linked to lifetimes. I agree that Rust also has good error handling and algebraic data types, in a way that's not as first-class or fully embraced in other (not purely functional) languages; but if all that's missing in other existing languages is full support for such things in the standard library / ecosystem, then growing a good ecosystem including an alternative "standard library" in an existing language might still be more straightforward than creating a new language entirely.

Regarding my main point, to name some unique features of Rust: the ownership + borrowing model is clearly linked to lifetimes. Lifetimes are annotations to "help" the borrow checker, nothing more. But they're used for much more than just memory management, so the idea of "replacing lifetimes with garbage collection" is flawed to begin with: ultimately, ownership + borrowing is about resource management. Any kind of resource: it can be memory, but it can also be anything else, such as a file, an open connection, or unique access to a shared data structure.

This last point connects with another unique feature of Rust, its remarkably great support for (fearless) concurrency. Rust's story here, revolving around the traits Send + Sync, is, again, strongly related to ownership + borrowing, precisely because “unique access to a (shared) data structure” is a resource that you can own (or uniquely borrow). The interaction between Send and Sync is characterized by distinguishing between owned (or uniquely borrowed) and (shared) borrowed data, as evidenced by the T: Sync <-> &T: Send relation; and that interaction is what powers the compiler's ability to understand the most fundamental synchronization primitives like Mutex, with its T: Send <-> Mutex<T>: Sync + Send relation.
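
The Mutex relation shows up concretely in even the smallest threaded program: an Arc<Mutex<T>> may be shared across threads precisely because Mutex<T> is Sync (and Send) when T: Send, even though T itself need not be Sync. A tiny sketch:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // u32 is Send, so Mutex<u32> is Sync + Send, so Arc<Mutex<u32>> can be
    // handed to other threads.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Locking yields unique access: effectively a scoped &mut u32.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```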

Sure, in a Rust alternative without lifetimes, but with a (shared) Gc<T> reference type, you could still technically express some of these relations, e.g. T: Sync <-> Gc<T>: Send (or perhaps, if this Gc had Arc's capabilities like try_unwrap, then it'd be T: Sync + Send <-> Gc<T>: Send), but the value of doing so becomes much smaller if you never really own any data anymore. If Gc<T> replaces &T in APIs, then every kind of data of type T that's supposed to ever be shared (even locally within a thread) needs to be converted into Gc<T>, an almost irreversible process. This need for Gc<T> everywhere would be infectious: without the convenience that Rust's static analysis with lifetimes offers, I can hardly imagine how you'd realistically avoid replacing your whole code with Gc<T> everywhere; and all the fields that ever need to be mutated would need to become some Cell<Gc<T>>-like (but thread-safe; basically an AtomicGc<T>) type. Once that's happened, you would however have turned most or all mutability into "interior mutability", well hidden from static analysis, eliminating any true ownership, and crucially no longer allowing the compiler to prevent any race conditions / data corruption. Sure, there aren't any data races, everything is still memory safe, and you can still use Mutex<T> if you know you need the synchronization, but most types would be Sync by default, so nobody ever forces you to use a Mutex, unless API designers explicitly thought of thread-safety in their API design and explicitly opted out of a Sync implementation, even though their structs use AtomicGc<T> internally.

I feel like I'm describing Java by now (at least in terms of thread-safety). (The opt-out of Sync is like synchronized methods in an API, implying that callers will (implicitly) use something that's essentially a Mutex<T>-style unique access to values.)

To be clear, I'm not arguing against garbage collection here, I'm arguing in favor of lifetimes for applications besides memory management, and against the idea that garbage collection can "replace lifetimes".


It is not entirely unusual to make such a claim! I've been on teams whose definition of "critical" included a GUI to manage trivia questions for television shows. In the grand scheme, no lives are at stake if this software fails. I just believe that in the scope of this one weird product, it was critical for the success of the entire product that the production team could use the GUI to create their trivia questions.

Under this definition, it's entirely reasonable that even the most hilariously useless software could still benefit from Rust. Or at least any statically typed language that eliminates most runtime errors that plague these kinds of user-facing applications every day. At some point all software "has to work" or it isn't worth writing in the first place.

Anecdotally, I had a game crash on me yesterday with this error message and I was both annoyed and entertained by it. If the game didn't have a garbage collector, it wouldn't have crashed in this particular way. Perhaps it's just a poorly written game (it is) and it would have crashed in some other way instead, but we'll never know.


It's a condensed paraphrase. The original definition, as written in Systems Programming Languages (Bergeron et al. 1972), was:

A system program is an integrated set of subprograms, together forming a whole greater than the sum of its parts, and exceeding some threshold of size and/or complexity. Typical examples are systems for multiprogramming, translating, simulating, managing information, and time sharing. […] The following is a partial set of properties, some of which are found in non-systems, not all of which need be present in a given system.

  1. The problem to be solved is of a broad nature consisting of many, and usually quite varied, sub-problems.
  2. The system program is likely to be used to support other software and applications programs, but may also be a complete applications package itself.
  3. It is designed for continued “production” use rather than a one-shot solution to a single applications problem.
  4. It is likely to be continuously evolving in the number and types of features it supports.
  5. A system program requires a certain discipline or structure, both within and between modules (i.e., “communication”), and is usually designed and implemented by more than one person.

That's why Java, Go, and Rust are all designed with the intent to be systems languages. They're all designed with an eye toward managing the complexity which emerges from such use-cases at the expense of adding more boilerplate to the kind of quick one-offs and experiments you'd do in something like Bourne Shell, Perl, or Python.


I fully agree with this, and to expand on it: there are a bunch of functional languages out there which already feature many of these advantages, and, before Rust became this successful, they were the flagship languages for "safe programming". And, truth be told, the situation hasn't changed: if I saw some codebase using one of the main functional languages out there, I'd consider the app/library it supports more likely to be robust than if it were written, by similar programmers and with a similar person-hours investment, in a more imperative or object-oriented ("traditional") language.

Rust hasn't changed the equation there: (garbage-collected) functional languages continue to be great (I wish they were more widespread), and should not be deemed superseded by Rust, since Rust still lacks some of the nicer features of garbage-collected functional languages: fully nested pattern matching (no "oh, I need to add a nested match because there is an Arc in here"), HKTs and dependent typing, etc.

Rust is great in that it has a "static / compile-time" garbage collector, in the form of ownership and borrows, which, by virtue of happening at compile time, allows it to be potentially more performant than a language with a runtime garbage collector. But there are a bunch of apps out there which are unaffected by the small performance cost of a garbage collector (even if other apps are; an important factor seems to be whether you are trying to upper-bound the latency / reactivity of the app or library. A garbage collector can lead to "lag spikes", which are bad in some environments and harmless in others).

Now, back to what @steffahn mentioned: one very interesting thing about Rust is that its ownership-and-borrows model grew way beyond being "just" that "static garbage collector": it invented unique references, and made them the primary construct for mutable references (at the cost of "white lies" in the standard documentation and official books, but that's another topic), and from there it came to support a whole new level of resource management (as Steffahn put it :100:), with even a bunch of multi-threading-aware language idioms.

I personally find that to be bonkers, and to ultimately be the long-term advantage of Rust, which has thus outgrown its primary "static garbage collector" utility; at that point even the functional languages out there, despite state-of-the-art ADTs and runtime garbage collectors, do not seem to me (although I may be wrong) that good at handling some of these resource management aspects, or thread-safety aspects, without hindering the parts of the code that may not care about either (e.g., in Rust, you can, within a multi-threaded-sensitive codebase, start using some Cells[1] within certain single-threaded parts, with the reassuring knowledge that the compiler will tell you if you happen to mix the two).

  1. unsynchronized shared mutation, pervasive in the most traditional languages ↩︎
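
To illustrate that last point, a minimal sketch: Cell works fine within one thread, and the compiler refuses to let it cross a thread boundary (the commented-out line is the one it rejects):

```rust
use std::cell::Cell;

fn main() {
    // Cell gives cheap, unsynchronized interior mutability within one thread.
    let hits = Cell::new(0u32);
    for _ in 0..3 {
        hits.set(hits.get() + 1);
    }
    assert_eq!(hits.get(), 3);

    // Cell<T> is !Sync, so sharing `&hits` with another thread is rejected:
    // std::thread::scope(|s| { s.spawn(|| hits.set(0)); });  // does not compile
}
```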


On that aspect of the trade-offs between Rust and other designs, I'm less bothered by the lag spikes and more by the RAM requirements of leaving room for floating garbage.