HashMap<K, Weak<V>> that auto-drops the 0-strong-count Weak<V>s?

Is there something like HashMap<K, Weak<V>>, but which auto-drops the (K, Weak<V>) pairs when the Weak<V> hits a strong count of 0?

Does weak-table do what you want?

You could implement it yourself. I see 3 strategies:

  1. Wrap get, and only remove entries lazily, when upgrading the value fails.

  2. From time to time, run hashmap.retain and drop entries with Weak::strong_count(v) == 0. You may need to run hashmap.shrink_to_fit() too to free excess memory. This is going to be expensive, so run it every X inserts or every X minutes.

  3. Use Arc<Mutex<HashMap<Arc<K>, Weak<NotifyOnDrop<V>>>>> and wrap values in a newtype that will try to immediately remove entries from the hashmap when the value is dropped (Arc runs destructors even when Weak references exist). As you can see, injecting an active observer into this is quite messy.
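The first strategy can be sketched as a thin wrapper (the name WeakCache is mine, not an existing crate): lookups that fail to upgrade remove the stale entry on the spot.

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, Weak};

/// Hypothetical wrapper illustrating strategy 1: entries whose
/// value can no longer be upgraded are removed lazily on `get`.
struct WeakCache<K, V> {
    map: HashMap<K, Weak<V>>,
}

impl<K: Eq + Hash, V> WeakCache<K, V> {
    fn new() -> Self {
        Self { map: HashMap::new() }
    }

    fn insert(&mut self, key: K, value: &Arc<V>) {
        self.map.insert(key, Arc::downgrade(value));
    }

    /// Returns a strong reference if the value is still alive;
    /// otherwise drops the stale entry and returns None.
    fn get(&mut self, key: &K) -> Option<Arc<V>> {
        match self.map.get(key).and_then(Weak::upgrade) {
            Some(v) => Some(v),
            None => {
                self.map.remove(key);
                None
            }
        }
    }
}
```

Note that this only cleans up entries you actually look up again; keys that are never queried still leak their slot, which is why you may want to combine it with the periodic retain from strategy 2.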


It's almost what I want, except that cleaning out weak pointers with a 0 strong count seems to be O(n), which is rather expensive if called frequently.

The third option need not be O(n), because it can keep track of which keys should be removed (by sending them to a channel which is read by the cleanup operation, perhaps).

Also, if you run the sweep only when enough inserts have happened to double the length, it is amortized O(1), which may be good enough.
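That amortized sweep can be sketched as follows (the exact trigger threshold is a tuning choice, and the struct name is mine): count inserts since the last cleanup, and only run the O(n) retain once the new inserts could have doubled the map's length.

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, Weak};

/// Sketch of the amortized sweep: run `retain` only after enough
/// inserts have accumulated to roughly double the map's size, so
/// the O(n) cost amortizes to O(1) per insert.
struct SweepingCache<K, V> {
    map: HashMap<K, Weak<V>>,
    inserts_since_sweep: usize,
}

impl<K: Eq + Hash, V> SweepingCache<K, V> {
    fn new() -> Self {
        Self { map: HashMap::new(), inserts_since_sweep: 0 }
    }

    fn insert(&mut self, key: K, value: &Arc<V>) {
        self.map.insert(key, Arc::downgrade(value));
        self.inserts_since_sweep += 1;
        // Sweep once new inserts make up about half the map.
        if self.inserts_since_sweep * 2 >= self.map.len() {
            self.map.retain(|_, v| v.strong_count() > 0);
            self.map.shrink_to_fit();
            self.inserts_since_sweep = 0;
        }
    }
}
```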

I was replying to @user16251's weak-table crate, whose WeakValueHashMap cleanup operations (weak_table - Rust) are documented to be O(n). I was not replying to @kornel's three strategies.

I agree with you that the gist of "each obj keeps a Rc to the hashtable and removes self in destructor" can do it in O(1).
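A minimal sketch of that gist (the Tracked newtype and free-standing insert function are my invention): each stored value holds a Weak reference back to the shared map and removes its own key in its destructor. Arc runs the inner value's destructor as soon as the last strong reference goes away, even while Weak handles remain, so removal is O(1) with no sweep.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, Weak};

type Map<V> = Arc<Mutex<HashMap<String, Weak<Tracked<V>>>>>;

/// Newtype that removes its own entry from the shared map on drop.
struct Tracked<V> {
    key: String,
    map: Weak<Mutex<HashMap<String, Weak<Tracked<V>>>>>,
    value: V,
}

impl<V> Drop for Tracked<V> {
    fn drop(&mut self) {
        // O(1) removal; no periodic sweep needed.
        // Caveat: in this sketch, dropping the last Arc while the
        // map lock is held would deadlock, so don't do that.
        if let Some(map) = self.map.upgrade() {
            map.lock().unwrap().remove(&self.key);
        }
    }
}

fn insert<V>(map: &Map<V>, key: String, value: V) -> Arc<Tracked<V>> {
    let tracked = Arc::new(Tracked {
        key: key.clone(),
        map: Arc::downgrade(map),
        value,
    });
    map.lock().unwrap().insert(key, Arc::downgrade(&tracked));
    tracked
}
```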

I think that's what the weak-table crate does, based on my skimming of the source code.

There's also no need to run the entire sweep as an atomic operation. If you're implementing the hashtable yourself, you can maintain a sweep pointer over the underlying vector that you advance a few places on every insert.
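To illustrate the idea, here is a toy separate-chaining table (no resizing, no duplicate-key handling, naming all mine) where every insert also advances a sweep cursor over a couple of buckets, so no single operation pays the full O(n) cost:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::{Arc, Weak};

/// Toy table with an incremental sweep: each insert cleans the
/// next two buckets, spreading the sweep cost across inserts.
struct IncrementalTable<K, V> {
    buckets: Vec<Vec<(K, Weak<V>)>>,
    cursor: usize,
}

impl<K: Eq + Hash, V> IncrementalTable<K, V> {
    fn new(n: usize) -> Self {
        Self { buckets: (0..n).map(|_| Vec::new()).collect(), cursor: 0 }
    }

    fn bucket(&self, key: &K) -> usize {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        (h.finish() as usize) % self.buckets.len()
    }

    fn insert(&mut self, key: K, value: &Arc<V>) {
        let i = self.bucket(&key);
        self.buckets[i].push((key, Arc::downgrade(value)));
        // Advance the sweep pointer a few buckets per insert,
        // dropping entries whose value is already gone.
        for _ in 0..2 {
            let c = self.cursor;
            self.buckets[c].retain(|(_, w)| w.strong_count() > 0);
            self.cursor = (c + 1) % self.buckets.len();
        }
    }

    fn get(&self, key: &K) -> Option<Arc<V>> {
        self.buckets[self.bucket(key)]
            .iter()
            .find(|(k, _)| k == key)
            .and_then(|(_, w)| w.upgrade())
    }
}
```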


This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.