We all love Rust because it doesn't need a GC to guarantee memory safety, among other things.
One idea that keeps coming back to me: what if garbage collection could be done in a few picoseconds, using a hardware device that tracks object lifetimes, either separately (as in a client/server model) or by adding extra bits to each word to track them individually?
As the Rust forums are a place with a lot of professionals knowledgeable about hardware, I wanted to ask this here. I'm by no means a hardware person.
To answer your question: yes, in principle anything that can run on a Turing machine can have a direct hardware implementation.
However, it would be worse than pointless to do so. First off, the hardware likely wouldn't accelerate things all that much, since garbage-collection algorithms are superlinear, and hardware can do nothing to reduce asymptotic complexity.
Secondly, it likely wouldn't be economical to develop some kind of ASIC for garbage collection purposes: too large an investment, and not nearly enough return, both technically and economically.
Lastly, and most fundamentally, garbage collection is an incomplete solution to the actual problem: resource management.
Yes, RAM is a resource, but it is far from the only one. To pick some easy examples, think of sockets or file handles: garbage collection does nothing to properly manage those, and instead just invites language-level hacks like with-statements, which are unnecessarily restrictive, all things considered.
Rust's ownership and borrowing systems, on the other hand, do tackle the resource-management issue properly. You can get access to sockets or file handles, manipulate them, move them, store them in e.g. a struct, and when you're done, you just (implicitly) drop them and let the ownership/borrow system do the actual cleanup. No fuss, no muss.
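A minimal sketch of what that looks like in practice (the `Logger` struct and the `app.log` file name are just made up for illustration): the file handle lives inside a struct, and dropping the struct closes the file deterministically, with no finalizer or `with`-block in sight.

```rust
use std::fs::File;
use std::io::Write;

// A struct that owns a file handle. No explicit cleanup code is needed:
// dropping the struct drops the `File`, which closes the handle.
struct Logger {
    out: File,
}

impl Logger {
    fn log(&mut self, msg: &str) -> std::io::Result<()> {
        writeln!(self.out, "{msg}")
    }
}

fn main() -> std::io::Result<()> {
    {
        let mut logger = Logger { out: File::create("app.log")? };
        logger.log("hello")?;
        // The logger can be moved around or stored elsewhere like any value.
    } // <- `logger` goes out of scope here; the file is closed right away.

    println!("log written and file already closed");
    Ok(())
}
```

The same pattern works for sockets, locks, database connections, and so on: whatever the struct owns gets released exactly when the owner goes away.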
That is one of many crazy ideas that Intel tried. Over 40 years ago, in fact. It didn't work back then, and it wouldn't work today.
The real problem with GC is not where you think it is. The promise of GC, “don't think about memory allocations, we have your back”, is seductive. It's also wrong. I remember how we traced down unacceptable jitter in an Android app (back when Android was still using Apache Harmony) and found that one, single, printf("%d", x) in Java would allocate (and then garbage-collect) almost 300 objects. The C version would only allocate stack space.
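To show the same contrast in Rust terms (my own toy illustration, not taken from that Android investigation): formatting an integer doesn't have to touch the heap at all if you format into a fixed-size stack buffer.

```rust
use std::fmt::{self, Write};

/// A fixed-size buffer on the stack; formatting into it never touches the heap.
struct StackBuf {
    buf: [u8; 32],
    len: usize,
}

impl Write for StackBuf {
    fn write_str(&mut self, s: &str) -> fmt::Result {
        let bytes = s.as_bytes();
        if self.len + bytes.len() > self.buf.len() {
            return Err(fmt::Error);
        }
        self.buf[self.len..self.len + bytes.len()].copy_from_slice(bytes);
        self.len += bytes.len();
        Ok(())
    }
}

fn main() {
    let x = 12345;
    let mut out = StackBuf { buf: [0; 32], len: 0 };
    // `write!` goes through the `fmt::Write` impl above: no String, no Vec,
    // no heap allocation, just bytes written into the stack array.
    write!(out, "{}", x).unwrap();
    println!("{}", std::str::from_utf8(&out.buf[..out.len]).unwrap());
}
```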
You may use any tricks you can imagine, but if your program generates useless allocations on an industrial scale, it will always produce slowdowns. Worse: attempts to rein in the excessive memory use lead to crazy tricks in the implementation, which lead to strange bugs, which are then fixed with something like this:
Not sure what you mean by that. Standard compacting garbage collection algorithms run in amortized time proportional to the amount of memory being allocated.
Standard compacting garbage collection introduces pauses, and those could easily be eliminated entirely with help from hardware: introduce two RAM modes where data is stored either twice or once, which would give you the ability to create snapshots instantly. Then GC could run in software, but on an entirely separate CPU core, without affecting the program at all. It would be entirely separate and would spend zero time on the worker CPU cores… but as I've said, the real issue is not with GC, but with what GC enables: sloppy designs that create a lot of memory pressure.
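A toy software analogue of that "trace a snapshot on another core" idea (entirely my own sketch; `ToyHeap`, `mark`, and the cloned snapshot are stand-ins for what the hypothetical hardware copy would provide): the collector thread walks an immutable copy of the object graph while the mutator keeps running.

```rust
use std::collections::HashSet;
use std::thread;

// Toy heap: each object is just a list of the indices it points to.
#[derive(Clone)]
struct ToyHeap {
    objects: Vec<Vec<usize>>, // adjacency list: object -> referenced objects
    roots: Vec<usize>,
}

// Mark phase run against an immutable snapshot: returns the reachable set.
fn mark(snapshot: &ToyHeap) -> HashSet<usize> {
    let mut reachable = HashSet::new();
    let mut stack: Vec<usize> = snapshot.roots.clone();
    while let Some(idx) = stack.pop() {
        if reachable.insert(idx) {
            stack.extend(&snapshot.objects[idx]);
        }
    }
    reachable
}

fn main() {
    // 0 -> 1 -> 2 are live; 3 is unreachable garbage.
    let heap = ToyHeap {
        objects: vec![vec![1], vec![2], vec![], vec![]],
        roots: vec![0],
    };

    // "Snapshot" the heap (standing in for the instant hardware copy)
    // and trace it on another thread while the mutator keeps working.
    let snapshot = heap.clone();
    let collector = thread::spawn(move || mark(&snapshot));

    let live = collector.join().unwrap();
    let garbage: Vec<usize> =
        (0..heap.objects.len()).filter(|i| !live.contains(i)).collect();
    println!("live: {:?}, garbage: {:?}", live, garbage);
}
```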