Reminder, folks, that if you're replying back and forth many times in a few hours, it might be a good time to take a break.
Arguments about definitions rarely produce useful results.
Amending my statement from “Rust will likely be successful in those areas proportionally to how suitable it is for that definition of systems programming.” to “Rust will likely be successful in those areas proportionally to how suitable it is perceived to be for that definition of systems programming.” would make it more accurate.
I fully agree. If performance didn't matter, everybody would be writing Java and just marking every method as synchronized. You would never see any race conditions or pointer errors.
Designing easy-to-use, safe languages is easy if you don't mind runtime performance (latency requirements, memory requirements, and computing requirements). A microkernel OS (such as Hurd) sounds fine in theory, but once you try to implement it for real, the latency will kill your project.
Rust is designed for a similar use case as C and C++: the absolute maximum performance the hardware can support. In addition to that, Rust can guarantee memory safety and avoid data races, too. To make that possible, it must restrict the language compared to C/C++ and require you to learn to live with the borrow checker.
As far as I can tell, any new language that leans on garbage collection instead of lifetimes is pretty pointless. We have dozens of them already.
The whole aliasing/lifetime system is Rust's unique selling point. An entirely new concept that has not existed in any mainstream language before.
What other good points of Rust are there that do not exist elsewhere already?
I'm guessing they don't want to have to give up garbage collection or go pure-functional to get a standard library and ecosystem that grew up around the availability of monadic error handling, sum types, and no null.
(As opposed to things like Java, C#, and TypeScript which are trying to retrofit those features.)
That's certainly what I'd go for if I were in a situation where I wanted garbage collection. No exception-based error handling, no integer result codes where a proper Result or other data-bearing enum should be, and no None. ...no exceptions.
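As a concrete sketch of that style (a hypothetical config-parsing helper of mine, not from the original posts): errors are plain data carried in an enum, with no exceptions and no sentinel integers.

```rust
use std::num::ParseIntError;

// A data-bearing error enum instead of integer codes or exceptions.
#[derive(Debug, PartialEq)]
enum ConfigError {
    MissingField(&'static str),
    BadPort(ParseIntError),
}

// Returns a proper Result; the caller must handle both cases.
fn parse_port(raw: Option<&str>) -> Result<u16, ConfigError> {
    let s = raw.ok_or(ConfigError::MissingField("port"))?;
    s.parse().map_err(ConfigError::BadPort)
}

fn main() {
    assert_eq!(parse_port(Some("8080")), Ok(8080));
    assert!(matches!(parse_port(None), Err(ConfigError::MissingField(_))));
    assert!(matches!(parse_port(Some("nope")), Err(ConfigError::BadPort(_))));
}
```

The `?` operator keeps the happy path flat while still forcing the error cases into the type signature.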
That sentence has me wondering ....
Are there people who actually want garbage collection? I would have thought that what they want is a way to write programs and have the machine do what they ask.
I wonder because my first introduction to programming was with BASIC. Presumably that had garbage collection. Little did I know. I just wrote code, and if I got it right the machine did what I wanted.
Next up was Algol. Which I assume does not have garbage collection. Little did I know. I just wrote code, and if I got it right the machine did what I wanted. With the bonus that it was so much faster.
Then came working life. Programming in assembler. Certainly no garbage collection there. Little did I know. I just wrote code, and if I got it right the machine did what I wanted. With the bonus that it was faster still.
It was not until a decade or more later that I started to hear about this magic thing called "garbage collection". I think that was with the arrival of Java. Given the world I worked in, I never did see the point of it.
Anyway, I now argue that any language that uses garbage collection is necessarily a lot less efficient. Thus burning more energy. Thus putting more CO2 into the atmosphere. Thus causing more global warming.
Nobody wants that today. Do they?
It depends on how you value the pros and cons you're trading off.
I have yet to see a situation where I want garbage collection in my own projects, but I don't program complex graph-based algorithms, I have a rickety old Athlon II X2 270, and I still grumble that I had to upgrade my RAM to 32GiB because of those damn web browsers when I still remember buying 16GiB so I could run three or four VirtualBox VMs for testing in parallel with the stuff I run now.
Not necessarily. At heavy allocation rates of short-lived objects, GC freeing being O(live) instead of the O(dead) of Rc-style freeing can overcome tracing overhead. And similarly, when you're usually not hitting memory pressure but have heavy sharing, avoiding the reference-count updates can make GC overall cheaper.
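For illustration (my example, not the poster's): the reference-count traffic in question is what Rc/Arc perform on every clone and drop, writes that a tracing GC simply never does.

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&a), 1);

    // Each new handle writes to the shared count; a tracing GC avoids this write.
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);

    // Each drop writes again, and the last drop frees: work proportional to dead objects.
    drop(b);
    assert_eq!(Rc::strong_count(&a), 1);
}
```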
"Not necessarily" may well be true. There may be a GC based language that works really well that I have not heard of.
However, so far I have not heard of such a thing and cannot find evidence that such a thing exists.
See the box plot charts on the Computer Language Benchmarks Game site.
I would be interested to see evidence to the contrary.
Given the huge number of replies in this thread, I haven't read through the thread. But after seeing this thread popping up several times, I thought I'd read into the originally linked article:
I think the core advantage of Rust is missed in this post:
Many languages will not allow you to create an API where it's clear whether a passed value is only read, will be mutated, will be rendered unusable, etc. In Rust this is very clear. We can pass values:
- by shared reference (&T), so the callee can only read the value;
- by mutable reference (&mut T), so the callee may mutate it;
- by value (T), so the callee consumes it and it becomes unusable for the caller.
This not only allows us to get rid of garbage collection, it also makes things more secure.
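A minimal sketch of those three signatures (illustrative names of mine, not from the article):

```rust
// Shared borrow: the function can only read the buffer.
fn checksum(data: &[u8]) -> u64 {
    data.iter().map(|&b| b as u64).sum()
}

// Unique borrow: the function may mutate, but the caller keeps ownership.
fn scrub(data: &mut [u8]) {
    for b in data.iter_mut() {
        *b = 0;
    }
}

// By value: the function consumes the buffer; the caller can no longer use it.
fn into_boxed(data: Vec<u8>) -> Box<[u8]> {
    data.into_boxed_slice()
}

fn main() {
    let mut v = vec![1, 2, 3];
    assert_eq!(checksum(&v), 6); // v still usable afterwards
    scrub(&mut v);               // v mutated in place
    assert_eq!(v, vec![0, 0, 0]);
    let boxed = into_boxed(v);   // v moved; using v after this would not compile
    assert_eq!(boxed.len(), 3);
}
```

All three intents are visible in the signature alone, which is the point being made above.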
I should add that Haskell has similarly strict semantics because it is purely functional. But in Haskell's case, this comes at the price of garbage collection at runtime, if I'm not mistaken.
About async Rust: I still feel unsure. I think it's justifiable that not everything is async, because async introduces overhead. I also think it can make sense to have different async runtimes. But I know too little about the implications yet to give a good comment on that. So far, my experiences with async Rust (using Tokio) have been quite good.
Pin and Unpin can be confusing at times, but I do understand why they exist. The only things that are really bugging me are the problems with async traits, and the issue of futures being Send or Sync and how to declare that (or not declare that). I hope a good solution will be found soon.
Regarding "The 'friendly' community": I have had similar thoughts. Right now the Rust community is awesome. But once Rust takes a more mainstream position (which I believe will happen)… who knows if forums like this will be the same then. But that's not a reason to not use Rust. This forum helps a lot to overcome the missing bits and pieces in the documentation and specification, and I'm sure there will be more resources on Rust in the future. I'll enjoy this time of getting to know Rust while it still grows up and has a higher ratio of enthusiastic and intrinsically-motivated people using it.
Given that you only have to do the O(n) tracing after every Θ(n) allocations, it amortizes well. And this gives you only 2x memory-use overhead.
Thus garbage collection is, in theory, asymptotically as fast as or faster than standard allocators: you get amortized constant-time allocation per byte, and a constant-factor memory overhead. This can't be done with any scheme that doesn't move objects around in memory.
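A back-of-the-envelope version of that argument (my sketch, assuming a collector that traces the L live bytes each time another L bytes have been allocated, with tracing cost c per live byte):

```latex
\text{amortized cost per allocated byte} \;=\; \frac{c \cdot L}{L} \;=\; c \;=\; O(1),
\qquad
\text{peak memory} \;\le\; \underbrace{L}_{\text{live}} \;+\; \underbrace{L}_{\text{not yet collected}} \;=\; 2L.
```

So the 2x figure is exactly the constant-factor memory overhead you pay for amortized O(1) allocation.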
In reply to no particular one of the recent answers, but on the topic of the sentiment that one could "replace lifetimes with garbage collection" to create a language that takes some of the features from Rust, but not all,
note that AFAIK most of the unique features of Rust are also linked to lifetimes. I agree that Rust also has good error handling and algebraic data types, in a way that's not as first-class or fully embraced in other (not purely functional) languages; but if all that's missing in other existing languages is full support for such things in the standard library / ecosystem, then growing a good ecosystem, including an alternative "standard library", in an existing language might still be more straightforward than creating a new language entirely.
Regarding my main point, to name some unique features of Rust: The ownership + borrowing model is clearly linked to lifetimes. Lifetimes are annotations to "help" the borrow checker, nothing more. But they're used for much more than just memory management, so the idea to "replace lifetimes with garbage collection" is flawed to begin with: ultimately ownership+borrowing is about resource management. Any kind of resource; it can be memory, but it can also be anything else: a file, an open connection, or unique access to a shared data structure. This last point connects with another unique feature of Rust, the remarkably great support for (fearless) concurrency. Rust's story here, revolving around the traits Send and Sync, is, again, strongly related to ownership+borrowing, precisely because “unique access to a (shared) data structure” is a resource that you can own (or uniquely borrow). The interaction between Send and Sync is characterized by distinguishing between owned (or uniquely borrowed) vs. (shared) borrowed data, as evidenced by the T: Sync <-> &T: Send relation, and the interaction between Send and Sync is what powers the compiler's ability to understand the most fundamental synchronization primitives like Mutex, with its T: Send <-> Mutex<T>: Sync + Send relation.
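Those relations can be checked directly against the compiler (a small demo of mine, not from the post):

```rust
use std::cell::Cell;
use std::sync::Mutex;

// These helpers only compile for types with the named auto trait.
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    // T: Sync <-> &T: Send — sharing a reference across threads is Sync's job.
    assert_sync::<i32>();
    assert_send::<&i32>();

    // Cell<i32> is Send but not Sync: unsynchronized shared mutation is forbidden.
    assert_send::<Cell<i32>>();
    // assert_sync::<Cell<i32>>(); // would not compile

    // T: Send <-> Mutex<T>: Sync + Send — the lock restores shareability.
    assert_sync::<Mutex<Cell<i32>>>();
    assert_send::<Mutex<Cell<i32>>>();
}
```

If any of these claims were wrong, the program would simply fail to compile, which is the whole point of encoding them in traits.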
Sure, in a Rust (alternative) without lifetimes, but with a (shared) Gc<T> reference type, you could still technically express some of these relations, e.g. T: Sync <-> Gc<T>: Send (or perhaps, if this Gc<T> also supported Arc's capabilities like try_unwrap, then it'd be T: Sync + Send <-> Gc<T>: Send), but the value of doing so becomes much less useful if you never really own any data anymore. Of
course, if you cannot have &T in APIs, then every kind of data of type T that's supposed to ever be shared (even locally within a thread) needs to be converted into Gc<T>, an almost irreversible process. This need for Gc<T> would be infectious: without the convenience that Rust's static-analysis features with lifetimes offer, I can hardly imagine how you could realistically avoid replacing your whole code with Gc<T> everywhere; and all the fields that ever need to be mutated would need to become some Cell<Gc<T>>-like (but thread-safe; basically an
AtomicGc<T>) type. Once that's happened, however, you would have turned most or all mutability into "interior mutability", well hidden from static analysis, eliminating any true ownership, and crucially no longer allowing the compiler to prevent any race conditions / data corruption. Sure, there aren't any data races, everything is still memory safe, and you can still use Mutex<T> if you know you need the synchronization, but most types would be Sync by default, so nobody ever forces you to use a Mutex, unless API designers explicitly thought of thread-safety in their API design and explicitly opted out of a Sync implementation, even though their structs do use unsynchronized shared mutation (pervasive in the most traditional languages).
I feel like I'm describing Java by now (at least in terms of thread-safety). (The opt-out of Sync is like synchronized methods in an API, implying that callers will (implicitly) use something that's essentially a Mutex<T>-style unique access to values.)
To be clear, I'm not arguing against garbage collection here, I'm arguing in favor of lifetimes for applications besides memory management, and against the idea that garbage collection can "replace lifetimes".
It is not entirely unusual to make such a claim! I've been on teams whose definition of "critical" included a GUI to manage trivia questions for television shows. In the grand scheme, no lives are at stake if this software fails. I just believe that in the scope of this one weird product, it was critical for the success of the entire product that the production team could use the GUI to create their trivia questions.
Under this definition, it's entirely reasonable that even the most hilariously useless software could still benefit from Rust. Or at least any statically typed language that eliminates most runtime errors that plague these kinds of user-facing applications every day. At some point all software "has to work" or it isn't worth writing in the first place.
Anecdotally, I had a game crash on me yesterday with this error message and I was both annoyed and entertained by it. If the game didn't have a garbage collector, it wouldn't have crashed in this particular way. Perhaps it's just a poorly written game (it is) and it would have crashed in some other way instead, but we'll never know.
It's a condensed paraphrase. The original definition, as written in "Systems Programming Languages" (Bergeron et al., 1972), was:
A system program is an integrated set of subprograms, together forming a whole greater than the sum of its parts, and exceeding some threshold of size and/or complexity. Typical examples are systems for multiprogramming, translating, simulating, managing information, and time sharing. […] The following is a partial set of properties, some of which are found in non-systems, not all of which need be present in a given system.
- The problem to be solved is of a broad nature consisting of many, and usually quite varied, sub-problems.
- The system program is likely to be used to support other software and applications programs, but may also be a complete applications package itself.
- It is designed for continued “production” use rather than a one-shot solution to a single applications problem.
- It is likely to be continuously evolving in the number and types of features it supports.
- A system program requires a certain discipline or structure, both within and between modules (i.e., “communication”), and is usually designed and implemented by more than one person.
That's why Java, Go, and Rust are all designed with the intent to be systems languages. They're all designed with an eye toward managing the complexity which emerges from such use-cases at the expense of adding more boilerplate to the kind of quick one-offs and experiments you'd do in something like Bourne Shell, Perl, or Python.
I fully agree with this, and to expand on it: there are a bunch of functional languages out there which already feature many of these advantages, and, before Rust got this successful, they were the flagship languages for "safe programming". And, truth be told, the situation hasn't changed: if I saw some codebase using one of the main functional languages out there, I'd consider the app/library supported by it more likely to be robust than if it were written, by similar programmers and with a similar person-hours investment, in a more imperative or object-oriented ("traditional") language.
Rust hasn't changed the equation there: (garbage-collected) functional languages continue to be great (I wish they were more widespread), and should not be deemed superseded by Rust, since Rust does still lack some of the better niceties of garbage-collected functional languages: fully nested pattern-matching (no "oh, I need to add a nested match because there is an Arc in here" moments), HKTs and dependent typing, etc.
Rust is great in that it has a "static / compile-time" garbage collector, in the form of ownership and borrows, which, by virtue of happening at compile time, allows it to be potentially more performant than a language with a runtime garbage collector. But there are a bunch of apps out there which are unaffected by the small performance hindrance of a garbage collector (even if other apps can be; an important factor seems to be whether you are trying to upper-bound the latency / reactivity of said app or library's functionality: a garbage collector can lead to "lag spikes", which are bad in some environments and harmless in others).
Now, back to what @steffahn mentioned: one very interesting thing about Rust is that its ownership-and-borrows model grew way beyond the point of being "just" that "static garbage collector": it invented unique references, and made them the primary construct for mutable references (at the cost of "white lies" in the standard documentation and official books, but that's another topic), and from there it delivered a whole new level of resource management (as @steffahn put it), with even a bunch of multi-threading-aware language idioms.
I personally find that to be bonkers, and to ultimately be the long-term advantage of Rust, which has thus outgrown its primary "static garbage collector" utility; at that point even the functional languages out there, despite state-of-the-art ADTs and runtime garbage collectors, do not seem to me (although I may be wrong) that good at handling some of these resource-management or thread-safety aspects without hindering the parts of the code that may not care about either (e.g., in Rust, within a multi-threaded-sensitive codebase, you can start using some Cells within certain single-threaded parts, with the reassuring knowledge that the compiler will tell you if you happen to mix the two).
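A tiny illustration of that last point (my hypothetical example): the moment a Cell from a single-threaded part would cross a thread boundary, the compiler rejects it.

```rust
use std::cell::Cell;
use std::thread;

fn main() {
    // Fine in a single-threaded part: cheap, unsynchronized interior mutability.
    let hits = Cell::new(0u32);
    hits.set(hits.get() + 1);

    thread::scope(|s| {
        s.spawn(|| {
            // Uncommenting the next line fails to compile:
            // Cell<u32> is not Sync, so &hits cannot cross into the thread.
            // hits.set(hits.get() + 1);
            println!("worker thread runs without touching `hits`");
        });
    });

    assert_eq!(hits.get(), 1);
}
```

The mixing rule is enforced entirely at compile time; no runtime check is involved.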
On that aspect of the trade-offs between Rust and other designs, I'm less bothered by the lag spikes and more by the RAM requirements of leaving room for floating garbage.
Actually, I'm sure Rust has changed the situation radically, but in a C-style way. That is: decades from now people will see that easily, but for many years yet Rust will remain something on the fringe of the IT world.
The C language was invented 50 years ago, in 1972. But DOS was written in assembler and Microsoft Pascal, starting in 1981; the classic MacOS was written in Pascal in 1984; and RISC OS was written in assembler in 1987!
It took almost two decades for the industry to start that switch to C (and by that time C had gotten its own half-descendant, before it even got its first standard document).
And then, of course, C# and Java arrived, with tons of hype and promises to make code safer, on top of a decades-long push of GC as the one and only way to safety.
I will never understand why the industry was so hell-bent on GC; my guess would be that the success of the AS/400 made designers hopeful that it might be replicated in other areas, but somehow… the triumph of managed code never actually happened.
No one else was able to create another long-lived only-managed-code platform. One of the longest-lived was BlackBerry OS, I think; it lasted for a decade.
Rust showed the industry that no, you don't need GC for safety, and this changes everything: suddenly GC is no longer something you have to have but something you may use in certain areas when you want to. Suddenly it's not the central feature which underpins all the execution guarantees, but a sideshow.
As for “other functional languages”… sure, they have many interesting properties which are not present in Rust, and some which cannot even be easily added to Rust… but AFAICS none of these advanced features actually requires GC.
I think the true impact of Rust will be seen after the appearance of other, higher-level languages built on top of the ownership and borrow model. I don't think GC would ever be added to them, but HKTs and other such things could be. And because they wouldn't be GC-based, they could easily interoperate with Rust. But I don't expect them to appear any time soon; Rust has to become more popular first before people develop something like that.
P.S. If you want to see how far things like dependent typing can go away from GC-based functional languages, look at Google's Wuffs. It's not a general-purpose language and doesn't include many fancy things… yet it solves (and beautifully solves) certain real-world problems.
21 posts were split to a new topic: Negative views on Rust: language-based operating systems
<meta> Wow, this spawns threads like it's rayon </meta>
This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.