Would a good optional GC (on a reference basis) be a good thing for Rust?

The link is already in this thread, but it's pretty vague and, to a large degree, contradicts the earlier ideas of the same author.

Most importantly, it's not even clear how the whole thing would work (is a vector of SQL database connections a data type or a resource type?), and while it's clear that something like that is possible to build, it's quite unclear who would want to use it, or why.

I'm not a Rust expert, but I have spent 20 years "fighting the GC", so "fighting the borrow checker" is just another food group to me. There is no free lunch.

A GC gives me lock-free compare-and-set tree data structures. It gives me pointers to other threads' data (if only Java supported truly read-only data types; reflection can bypass private final). GCs give me MEMORY COMPACTION (the single biggest thing I miss when I do C/C++/Rust/Python). They give me SoftReferences (caches tied to memory pressure).
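
To make the compare-and-set part concrete, here is a minimal Rust sketch of the retry loop such lock-free structures are built on (a toy shared counter, not a tree; the part a GC really buys you is safe reclamation of unlinked nodes, which this example sidesteps entirely):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let hits = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let hits = Arc::clone(&hits);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // Classic CAS loop: read, compute, try to swap, retry on contention.
                    let mut cur = hits.load(Ordering::Relaxed);
                    while let Err(actual) =
                        hits.compare_exchange(cur, cur + 1, Ordering::SeqCst, Ordering::Relaxed)
                    {
                        cur = actual;
                    }
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(hits.load(Ordering::SeqCst), 4_000);
}
```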

But those benefits meant constant hacks to keep the GC from impacting the customer. How to use 95% of system RAM without crashing was always a challenge (solved mostly with memory-mapped ByteBuffers, which meant purely reference-counted code :frowning:). How to not stall for 3 seconds at random (that has gotten better over the decades, but only by running lots of background threads constantly). How to hold 100 million constant/cache items without a 3-second GC pause (use sets of primitive arrays instead of discrete objects, or large ByteBuffers with C-struct-like mappings as contiguous logical arrays). How to fix slow startup times (demonstrated by the fact that raising the minimum heap size halves startup time). How to stay within the file-handle limit (again, ref counting, because we don't have RAII). How to work with JNI structures (especially ones designed with RAII in mind): ref counting, and the associated memory leaks.

With Rust, I get many of the cool MT algorithms found in a GC'd system (albeit many of them with copied vectors, mutexes, and ref counts), and that's more than I had with C++. The performance of the mutex-guarded BTree isn't bad compared to Java's skiplist. It STILL falls short, due to reader and writer contention (iterating readers hold the mutex and block everything else), but I suspect there are unsafe libraries I haven't come across yet.
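
For concreteness, a minimal sketch of the pattern I mean: std's BTreeMap behind an Arc<RwLock<...>> (a Mutex behaves the same way for this point). A long scan holds the read lock, and that is exactly where the contention shows up:

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let map = Arc::new(RwLock::new(BTreeMap::new()));

    let writer = {
        let map = Arc::clone(&map);
        thread::spawn(move || {
            for i in 0..1_000u64 {
                map.write().unwrap().insert(i, i * i); // exclusive lock per insert
            }
        })
    };

    let reader = {
        let map = Arc::clone(&map);
        thread::spawn(move || {
            let guard = map.read().unwrap(); // shared lock held for the whole scan
            guard.values().copied().sum::<u64>()
        })
    };

    writer.join().unwrap();
    println!("sum seen by reader: {}", reader.join().unwrap());
}
```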

I get RAII everywhere, which is just so happy-making. And while I AM back in fragmented-memory hell, I get SLICES. Ironically, this solves my GB-sized JSON parsing problems better than Java did (vs. Perl). Slice parsers don't need alloc/free and thus don't cause memory fragmentation. Coupled with a per-request arena (a single bulk allocation, and a single free when done), I am as compact as a GC and as fast as GC alloc/drop, but without a GC phase to deal with. Granted, this is limited in usefulness, but I can start building a bag of tricks instead of a bag of hacks.
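
A minimal sketch of the slice idea (the tokenizer and field names are made up): every field is a borrowed subslice of the one input buffer, so there is no per-token allocation and nothing to fragment or free afterwards. The per-request arena would be a separate piece (crates like bumpalo exist for that).

```rust
// Toy slice parser: returns borrowed subslices, never owned Strings.
fn fields<'a>(line: &'a str) -> impl Iterator<Item = &'a str> + 'a {
    line.split(',').map(str::trim)
}

fn main() {
    let input = String::from("  id , name , 42 "); // one allocation for the whole buffer
    let parsed: Vec<&str> = fields(&input).collect(); // a Vec of borrows, no copies of the text
    assert_eq!(parsed, ["id", "name", "42"]);
}
```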

SoftReference caches do still seem to be a missing piece. I suspect an arena system would be needed, or you just bite the bullet and segment cache memory from working memory, but at that point you have memcached/redis.

Rust really does solve the mutable MT data-structure problem much better than Java (which required EVERYTHING to be synchronized, or trusting that people know how to use Atomics safely). My last major foray into Java parallelism was CompletableFuture chains. In the simple case GC was fine, and Rust does equally well. The problem I had was sharing state between stages. Java forces final (immutability/const) or pushes you into sketchy territory with completely unsynchronized object fields (where you can't know, from stage to stage, which sibling fields were updated or are in an error state). Rust f'ing nails that use case. I think GC with carefree read/write data structures is BAD. AXM (aliasing XOR mutability, i.e. the borrow checker) with no uninitialized states is a decent ruleset for such complex, interlinked threaded execution chains. It is HARD to write, but I trust it will work once it compiles. In Java, I was always worried I needed an extra if-not-null or onException lambda. And all the new programmers cried when they saw my code (even though it was faster than any other code thanks to its on-demand, reactive pipeline nature).
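
To show what I mean by "no uninitialized states", here is a minimal sketch (all names invented): each stage takes ownership of the previous stage's output, so a later stage can only ever see a fully constructed value, and an error short-circuits instead of leaving half-updated sibling fields behind.

```rust
struct Fetched { body: String }
struct Parsed { items: Vec<u32> }

// Stage 1: produce a value or an error -- never a partially filled object.
fn fetch() -> Result<Fetched, String> {
    Ok(Fetched { body: "1,2,3".to_string() })
}

// Stage 2: consumes Fetched by value; there is no shared mutable state left to inspect.
fn parse(f: Fetched) -> Result<Parsed, String> {
    f.body
        .split(',')
        .map(|s| s.trim().parse::<u32>().map_err(|e| e.to_string()))
        .collect::<Result<Vec<_>, _>>()
        .map(|items| Parsed { items })
}

fn main() -> Result<(), String> {
    // Ownership moves stage to stage; `?` propagates errors instead of leaking bad state.
    let parsed = parse(fetch()?)?;
    println!("sum = {}", parsed.items.iter().sum::<u32>());
    Ok(())
}
```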

(Note: C# and Go may address many of my Java concerns, but I doubt all of them; stack-allocated slices shared across threads don't seem feasible in Go.)
