Blog Post: Lifetime Parameters in Rust

Thanks for clarifying this. You seemed to imply that it is a universally agreed fact that GC is superior to RC in all aspects. I just wanted to point out that I have a different opinion, thus excluding myself from the "we".

No, I don't have recent benchmarks. My point is not primarily about benchmarks but about the different preconditions now and then. You just cannot apply the same arguments to early .NET and today's Rust.

But I also think that the numbers could be different today. Optimizers are improving constantly, and I think RC has better optimization potential (elision of redundant inc/dec operations) than GC. But this is just speculation.

That's a good summary. And I suspect that GC performs worse in that case. At least in the edge case where you don't use RC/GC at all, there's still overhead for GC but none for RC. I don't know where the point of equal overhead is, though.

It's not always possible:

  • References imply no ownership
  • Box / value types imply unique ownership
  • Rc for shared single-threaded ownership
  • Arc for shared multi-threaded ownership

IME, their usage frequency descends in the order I listed them. I only use Rc/Arc where true shared ownership is semantically necessary, which is actually very seldom the case.
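A minimal sketch of the four ownership forms from the list above, roughly in the descending usage order described. The helper `arc_across_threads` is a hypothetical name chosen just for this illustration:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

// Shares `n` with a spawned thread via Arc and returns n + 1 computed there.
fn arc_across_threads(n: i32) -> i32 {
    let shared = Arc::new(n);
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || *shared + 1)
    };
    handle.join().unwrap()
}

fn main() {
    // 1. A plain reference: borrows the value, owns nothing.
    let s = String::from("hello");
    let r: &str = &s;
    assert_eq!(r.len(), 5);

    // 2. Box / plain values: unique ownership, freed deterministically at scope end.
    let boxed: Box<Vec<u8>> = Box::new(vec![1, 2, 3]);
    assert_eq!(boxed.len(), 3);

    // 3. Rc: shared single-threaded ownership; cloning only bumps a refcount.
    let shared = Rc::new(String::from("shared"));
    let alias = Rc::clone(&shared);
    assert_eq!(Rc::strong_count(&alias), 2);

    // 4. Arc: shared multi-threaded ownership (atomic refcount).
    assert_eq!(arc_across_threads(41), 42);
}
```

Note that only cases 3 and 4 involve any reference counting at all; the first two, which dominate in practice, have zero RC overhead.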

Just to show how different the preconditions are: Rust is not OO; it uses value types and moves extensively. All those points lead to lower overhead for RC, but not (necessarily) for GC.
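To illustrate that last point, here is a minimal sketch (the function name `consume` is made up for the example): ownership moves by default in Rust, so most values never touch a refcount, whereas in a GC'd OO language every such object would be a traced heap allocation.

```rust
// Takes ownership of the Vec; it is freed deterministically when this
// function returns. No refcount, no GC tracing involved.
fn consume(v: Vec<i32>) -> usize {
    v.len()
}

fn main() {
    let v = vec![1, 2, 3];
    let n = consume(v); // `v` is moved, not refcounted or deep-copied
    // println!("{:?}", v); // would not compile: value was moved
    assert_eq!(n, 3);
}
```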

Sure, that case exists. But I don't think it is needed very often (in contrast to purely functional languages), and if you use it, it is always explicit via Arc or something similar.

That's true, and I agree that using a GC is often convenient. But in the cases where you reach its limits, it produces more work than not using one.

With cloud computing, using more resources directly means higher cost.

The list of downsides of GC is not a theory, it's what caused me headaches in real world projects:

Unfortunately, patterns like IDisposable break down completely with true shared ownership. In one project, I had objects that represented temporary folders and files. Ownership was truly shared, so the only mechanism available to manage them was GC. Finalizing the file objects depends on the finalization of the folders. I had to resort to nasty hacks like "resurrecting" folder objects during finalization when their contained file objects were not finalized yet.
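For contrast, with RC the same folder/file relationship has a deterministic cleanup order and needs no resurrection hack. A minimal sketch (the `Folder`/`File` types and the log are made up for this illustration): each file keeps its folder alive via Arc, so the folder's Drop runs exactly when the last file is gone.

```rust
use std::sync::{Arc, Mutex};

struct Folder {
    name: &'static str,
    log: Arc<Mutex<Vec<String>>>,
}
impl Drop for Folder {
    fn drop(&mut self) {
        self.log.lock().unwrap().push(format!("folder {} removed", self.name));
    }
}

struct File {
    name: &'static str,
    folder: Arc<Folder>, // keeps the folder alive as long as this file exists
}
impl Drop for File {
    fn drop(&mut self) {
        self.folder.log.lock().unwrap().push(format!("file {} removed", self.name));
    }
}

// Creates a folder with two files, drops them, and returns the cleanup log.
fn drop_order_log() -> Vec<String> {
    let log = Arc::new(Mutex::new(Vec::new()));
    let folder = Arc::new(Folder { name: "tmp", log: Arc::clone(&log) });
    let a = File { name: "a", folder: Arc::clone(&folder) };
    let b = File { name: "b", folder: Arc::clone(&folder) };
    drop(folder); // our own handle; the files still keep the folder alive
    drop(a);
    drop(b); // last file gone -> folder is dropped immediately afterwards
    let result = log.lock().unwrap().clone();
    result
}

fn main() {
    println!("{:?}", drop_order_log());
}
```

The folder is guaranteed to be removed after its files, with no finalizer ordering problem in sight.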

Yes, I'm sure. I did extensive heap profiling, and the memory consumption was due to dead objects. The system worked well under low load but broke down under high load. I also thought that a generational GC could cope well with many short-lived objects, but apparently I was wrong.

The only way to make it work was avoiding allocations wherever possible. There are many hidden allocations in C#. For example, I had to use SortedList instead of Dictionary, because Dictionary allocates a node object for every stored entry. I could not even use the default comparison function for SortedList because its arguments are implicitly boxed. I could not use C# events because EventArgs is a class and thus heap-allocated. I had to "expand" parameter objects to pass their fields individually, and use struct instead of class wherever possible.

In the end, I brought the runtime down from several minutes (using all of my memory and still thrashing) to about 1 second, using almost no memory, just by avoiding temporary allocations at all cost. But the result is not pretty.

This was a project where the low-level, performance-critical work was done in C++ and the high-level coordination in C#. Still, I had to dive deep into profiling even for the high-level part to make it scale well.