Haven't you argued that RC is inferior to Rust's lifetimes in your prior comment that I had replied to?
META: I want to make sure we are on the same wavelength in terms of not intending any animosity. I had to choose between ‘I’, ‘you’, ‘they’, or ‘we’, so I chose the word that seemed the best fit even though it isn't a perfect one. We can either blame the English language or just accept that I am (we are?) trying to reach some clarity on comparisons of strategies for resource lifetimes, especially memory but also other resources. I assume you are just clarifying and not intending animosity; likewise, I want you to know I wasn't trying to declare, without your participation, what you think, or that ‘you’ wouldn't have other points to make in response. I make the post, so you can agree or disagree and provide additional points. Afaics, that is the nature of discussion. Thanks.
Do you know of more recent benchmarks to offer?
I cited that blog post w.r.t. RC, not w.r.t. Rust's compile-time checked lifetimes. Are you introducing Rust's advantages as a rebuttal to my point about RC's disadvantages? Is your point that RC is better in Rust?
The cited benchmarks showed even single-threaded RC to be slower than GC.
Taken out of context, their point does seem silly, because GC also uses more memory to achieve the same performance as Rust's compile-time checked lifetimes. Also, I read that Apple's use of RC on 64-bit ARM puts the refcount in the upper, otherwise-unused 19 bits of the 64-bit object pointer. But I think perhaps their point is that RC and GC can both consume more memory than explicit memory management, yet according to their benchmarks RC isn't any faster than GC (and is egregiously slower in the multi-threaded case), and RC doesn't free circular references while GC does. Afaics, their goal was to compare RC and GC.
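For readers, here is a minimal sketch (my own illustration, not Apple's actual runtime code) of the general technique of stashing a small reference count in the otherwise-unused upper bits of a 64-bit pointer; the 48-bit address width and the retain/release behavior are assumptions for illustration only:

```rust
// Hypothetical illustration: pack a small refcount above a 48-bit address.
// A real runtime (e.g. Apple's) also handles overflow, side tables, atomics, etc.

const ADDR_BITS: u32 = 48;
const ADDR_MASK: u64 = (1u64 << ADDR_BITS) - 1;
const ONE_REF: u64 = 1u64 << ADDR_BITS;

#[derive(Copy, Clone)]
struct PackedRef(u64);

impl PackedRef {
    fn new(ptr: *const u8) -> Self {
        // Store the address in the low 48 bits and a refcount of 1 above it.
        PackedRef((ptr as u64 & ADDR_MASK) | ONE_REF)
    }
    fn ptr(self) -> *const u8 {
        (self.0 & ADDR_MASK) as *const u8
    }
    fn refcount(self) -> u64 {
        self.0 >> ADDR_BITS
    }
    fn retain(self) -> Self {
        PackedRef(self.0 + ONE_REF) // a real runtime would guard against overflow
    }
    fn release(self) -> Self {
        PackedRef(self.0 - ONE_REF) // and free the object when this reaches zero
    }
}

fn main() {
    let value = 42u8;
    let r = PackedRef::new(&value as *const u8).retain();
    assert_eq!(r.refcount(), 2);
    assert_eq!(unsafe { *r.ptr() }, 42);
    assert_eq!(r.release().refcount(), 1);
}
```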
META: Hey, all of us have had those moments where we look at something and think to ourselves, “wtf was this guy thinking, this doesn't make any sense”. I assume we both agree not to go there. We give each other more than one or two posts to understand what the other person is attempting to communicate. Thanks.
At that time, they had to choose between explicit memory management (EMM), RC, and GC, or some combination of them. EMM doesn't provide safety. RC doesn't free circular references. GC doesn't provide deterministic finalization.
It might be helpful for Rust to provide a table like the following on their website, and to note that the table's distinctions are mitigated when asymptotic memory consumption is primarily due to the semantic "memory leaks" which can't be tracked by any of the following strategies and which are ostensibly inherent in many non-mission-critical apps:
EMM | Rust | RC | GC
 .  |  x   | x  | x   safety {3}
 x  |  x   | .  | x   frees circular references
 x  |  x   | x  | .   deterministic finalization
 x  |  x   | x  | x   higher throughput even with higher memory consumption {1}
 x  |  x   | .  | .   higher throughput with lowest memory consumption
 x  |  x   | x  | x   lower latency pauses even with lower throughput
 x  |  x   | .  | .   lower latency pauses with highest throughput
 x  |  x   | x  | .   lower memory consumption at the same throughput {1}
 .  |  .   | .  | x   simpler (i.e. not as complex) to code {2}
{1} Only true for RC if single-threaded, even in the case of RC in Rust.
{2} RC not included because one needs to reason about circular references.
{3} Not including the checking of mutability conflicts, which Rust can do at compile time.
Note the above table also doesn't deal with the issue of efficiency and complexity w.r.t. temporary objects. I am planning to discuss that separately.
Looking at the above chart, I can see why one of my first comments on this forum was that the main reason for choosing Rust's compile-time checked lifetimes is performance. The item I forgot was deterministic finalization, which afaics applies to resources other than memory (which wasn't my focus originally). Note I've read that there has been some experimentation with making GCs aware of the other resource types they are collecting, so that they collect when those resources are nearing exhaustion.
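To illustrate for readers what I mean by deterministic finalization of a non-memory resource, here is a minimal Rust sketch (the file name is hypothetical): the OS file handle is released exactly when the variable goes out of scope, rather than whenever a collector happens to run finalizers.

```rust
use std::fs::File;
use std::io::Write;

// The file handle (a non-memory resource) is closed deterministically when
// `f` is dropped at the end of this function, not at some later GC cycle.
fn write_log(msg: &str) -> std::io::Result<()> {
    let mut f = File::create("example.log")?; // hypothetical file name
    f.write_all(msg.as_bytes())?;
    Ok(())
} // `f` dropped here; the handle is released immediately

fn main() -> std::io::Result<()> {
    write_log("hello")
}
```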
In your prior comment you wrote:
Okay, I think I was missing your point before. Notice that when I replied before, I didn't quote the last sentence, because I didn't see how it related to your comment about RC. It seems your point is that, given the ability to know that all references are either RC or statically checked, RC doesn't have to be used everywhere; it can still be used selectively.
Can you provide a compelling example where one would want to use RC and not Rust's static checking?
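To make concrete what I understand by using RC only selectively alongside static checking, here is my own minimal sketch (the Config and Worker names are hypothetical); I'm asking whether a case like this, where the last owner of some shared data isn't known statically, is actually compelling over borrowed references:

```rust
use std::rc::Rc;

// Hypothetical shared, immutable data whose last user isn't known statically.
struct Config {
    name: String,
}

// Only this part of the program opts into shared ownership via Rc.
struct Worker {
    config: Rc<Config>,
}

fn print_name(c: &Config) {
    println!("{}", c.name); // ordinary, statically checked borrow
}

fn main() {
    let config = Rc::new(Config { name: "example".to_string() });

    // Most code still passes plain &Config and is checked at compile time...
    print_name(&config);

    // ...while only the parts needing shared ownership clone the Rc.
    let workers: Vec<Worker> = (0..3)
        .map(|_| Worker { config: Rc::clone(&config) })
        .collect();

    assert_eq!(Rc::strong_count(&config), 4); // 1 original + 3 workers
    assert_eq!(workers[0].config.name, "example");
}
```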
I am not understanding why you mentioned these points.
Maybe that is so in the context of combining RC with Rust's compile-time checked resources, if you can make the single-threaded RC performance as fast as GC and you never need to use RC in the multi-threaded scenario.
Aren't you missing the case where we have shared immutable references in the multi-threaded scenario?
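A minimal sketch of the case I mean, assuming the shared data is immutable: crossing threads forces atomic reference counting (Arc in Rust; a plain Rc would not even compile here because it isn't Send), and those atomic increments are exactly where the multi-threaded RC overhead shows up.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Immutable data shared across threads via atomic reference counting.
    let data = Arc::new(vec![1, 2, 3, 4]);

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let data = Arc::clone(&data); // atomic refcount increment
            thread::spawn(move || data.iter().sum::<i32>() + i)
        })
        .collect();

    for h in handles {
        println!("{}", h.join().unwrap());
    }
}
```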
Readers note: I am registering in my mind that the asynchronous case (i.e. single-threaded with multiple code paths) is not a performance issue for RC.
For the Java research paper I had cited: double the memory cost roughly a 70% loss of throughput, triple the memory a 17% reduction in throughput, and about 5X the memory was needed for equivalent throughput. Note I am not sure if that factors in latency pauses, which, to keep them to a minimum, may require a further erosion of throughput for a concurrent GC.
But how much do programmer time, maintenance, readability of code, and slower time-to-market cost? Memory cost is declining exponentially, but we have very few methods of increasing programmer productivity. Thus I conclude the argument for prioritizing lower memory consumption by default is a losing one. Why else did even Raytheon invest in the research to produce a guaranteed low-latency GC algorithm, ostensibly for mission-critical applications?
The salient problem with memory allocation going forward is the semantic "memory leaks" that none of the above strategies can fix.
EMM and Rust can also attain low latency without incurring lower throughput. But again, my retort is that CPU performance is increasing exponentially, at least in the case of parallelism.
Sorry, I can be sort of an unintentional pita that way, the one who raises a hand to say, “but what happens when people move?” or “but Bitcoin's double hashing may be a back door vulnerable to a boomerang attack”.
Perhaps you can make a more defensible point w.r.t. resource types other than memory?
I think that might not be true with a generational GC if the language and design patterns are well optimized w.r.t. temporary objects? This is the topic I am soon going to get into when I write down my thoughts on temporary objects and inversion-of-control, and also present my ideas to replace iterators. I think we need to look at this holistically. Perhaps I am mistaken. We will get into it soon.
I am wondering if I won't end up concluding that we should only be using GC. Then finalization may be an orthogonal issue, and I am contemplating whether we should be thinking about finalization at a more high-level, semantic layer. My conceptualization is not yet fully formed. Hopefully I can come to a complete understanding asap. My intuition may be incorrect.
I have enough experience in my life to have learned that humans build tools because they don't like to dig canals with spoons. Masochism is not a virtue that most people want to emulate. “C is manly, but Python is for n00bs”. Hey, I was quite the masochist when I ran a 10K in under 35 minutes, but that is irrelevant.
I am still trying to discern the use cases for which I would need the tool of compile-time lifetimes. I suppose at this time, GC on popular VMs has horrible asymptotic latency (pauses), and mobile is currently RAM constrained. So those are current use cases in my realm. But I am also thinking about the future.
Shouldn't that be modeled high-level semantically and not by some low-level opaque mechanism?
Are you sure that is not due to the asymptotic memory consumption caused by the semantic "memory leaks" I've mentioned?
Again I am not sure I want to use any memory allocation other than GC. I am still trying to make that determination.