Unexpected drop behavior w/ Vec<Arc<T>>

Hi, I'm experiencing some unexpected behavior with a Vec<Arc<T>> and would really appreciate it if someone could help me fill in the gaps. When I allocate a Vec, iteratively push a large number of Arc elements into it, and then drop everything, the Arcs (apparently) reach a strong reference count of 0 but do not free their memory.

A minimal example is in this playground link. massif and heaptrack both show a heap size of (effectively) 0 after the mk_arcs() scope ends, but htop says the process is still consuming ~4% of 8 GB while it stalls at the end. Running the process repeatedly bricks a VM by exhausting all of its memory, so I think htop is correct. I'm not using any custom allocators, and removing the weak reference in the playground link doesn't change the behavior.
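
Roughly, the code looks like this (a simplified sketch; the playground link has the exact version):

```rust
use std::sync::{Arc, Weak};
use std::{thread, time::Duration};

// Build 10M Arcs in a Vec, keep a Weak to one of them as a canary,
// and let the whole Vec drop when the function returns.
fn mk_arcs() -> Weak<usize> {
    let mut v: Vec<Arc<usize>> = Vec::new();
    for i in 0..10_000_000 {
        v.push(Arc::new(i));
    }
    Arc::downgrade(&v[0])
    // `v` drops here, taking every Arc's strong count to 0
}

fn main() {
    let weak_canary = mk_arcs();
    assert_eq!(weak_canary.strong_count(), 0); // all strong refs gone
    // Stall so the process can be inspected with htop etc.
    thread::sleep(Duration::from_secs(60));
}
```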

My reading of the drop procedures for Vec and Arc is that the Vec should drop, iteratively calling drop on each of its elements, which would decrement the strong_count to 0 on each Arc, therefore dropping the Arc plus its contents and freeing the memory it was using. weak_canary in the playground link does indicate that the Arc elements have a strong_count of 0 by the time the mk_arcs() scope ends.

Thanks for any help

The process's allocator implementation does not immediately release pages back to the operating system when the allocations made in them are freed.
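
You can observe this without Arc at all. Here's a rough Linux-only sketch (it reads VmRSS from /proc/self/status, which is an OS-specific interface, not anything Rust guarantees):

```rust
use std::fs;

// Linux-only: report the VmRSS line from /proc/self/status.
fn rss_line() -> Option<String> {
    fs::read_to_string("/proc/self/status")
        .ok()?
        .lines()
        .find(|l| l.starts_with("VmRSS"))
        .map(str::to_owned)
}

fn main() {
    println!("start:      {:?}", rss_line());
    // Many small allocations, like the Arcs in the original example.
    let v: Vec<Box<u64>> = (0..10_000_000).map(Box::new).collect();
    println!("allocated:  {:?}", rss_line());
    drop(v); // every Box is freed back to the allocator here...
    // ...but RSS usually stays high: the allocator retains the pages
    // instead of returning them to the kernel.
    println!("after drop: {:?}", rss_line());
}
```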

An Arc<T> calls T::drop when the strong count reaches 0, but it can't release the memory associated with it until the weak count (also) reaches 0, because the strong and weak counts are stored inline with the T, in the same allocation.
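
To illustrate the distinction (a minimal sketch; Noisy is just an illustrative type with a Drop impl):

```rust
use std::sync::Arc;

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("Noisy::drop ran"); // the value is dropped here
    }
}

fn main() {
    let strong = Arc::new(Noisy);
    let weak = Arc::downgrade(&strong);

    drop(strong); // prints "Noisy::drop ran": strong count hit 0
    assert_eq!(weak.strong_count(), 0);
    assert!(weak.upgrade().is_none()); // the value is gone...
    // ...but the counts live in the same allocation as the value,
    // so the allocation itself is only freed when `weak` drops here.
}
```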

What your code does is create 10,000,000 Arc<usize>s and then drop and free 9,999,999 of them. The last one is dropped as well (or, well, would be dropped if usize had any drop glue; in reality nothing happens), but its memory is not freed immediately because you keep a Weak around. (The allocation will be freed eventually, when weak_canary goes out of scope at the end of main.)

sfackler's post explains why the freeing of those 9.9M allocations is not reflected by htop.

Thank you. Is there anything I can do to force the Rust process to release the pages back to the OS, other than killing it? Switching the global allocator doesn't seem to help.

You could search for a global allocator that behaves that way, but they're generally built under the assumption that if you've previously used some amount of memory, you're going to use that much again soon, so unmapping and remapping it just needlessly thrashes the process's virtual memory mapping.

You could maybe use jemalloc if you turn down the dirty_decay_ms delay pretty aggressively or tweak some other settings: see the JEMALLOC man page for the tunables.
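
A sketch of what that might look like with the tikv-jemallocator crate (the version number and decay values are illustrative, and the exact environment variable name depends on how the crate builds jemalloc):

```rust
// Cargo.toml: tikv-jemallocator = "0.5" (version is illustrative)
use tikv_jemallocator::Jemalloc;

// Route all heap allocations through jemalloc.
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // jemalloc's decay settings are usually configured at startup, e.g.
    //   MALLOC_CONF=dirty_decay_ms:0,muzzy_decay_ms:0 ./your-binary
    // (or _RJEM_MALLOC_CONF when jemalloc is built with the crate's
    // default symbol prefix) so that freed pages are purged promptly.
    let v: Vec<u8> = vec![1; 64 * 1024 * 1024];
    drop(v); // with aggressive decay, pages go back to the OS sooner
}
```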

Got it. I never would have thought to look at the page/OS thing as a possible culprit, so this was some really sage advice.
