Heap fragmentation

I was just thinking about this: in Rust, items allocated on the heap are not relocatable, so if a program runs for a long time and has an "unfortunate" pattern of heap allocation, in theory the heap could become fragmented and memory consumption could become excessive.

I am not suggesting this is a serious problem in practice, but is this something that should be considered when writing software that is intended to run for a long time? Have you encountered any problems with heap fragmentation?


What makes you say this? Rust items are, in general¹, designed to be relocatable due to all of the moves that the language semantics require. If you have ownership of a value, even if it's on the heap, you can move it wherever you want in memory and expect it to continue to work properly. Also, the standard collections (like Vec) periodically do move their heap allocations around when resizing.

¹ Aside from, e.g., Pin


Values can be moved by the owner, but not by the memory allocator if there is fragmentation. It is true that Vec does reallocate when resizing, and this could help to reduce fragmentation (if it is a problem). As above, I rather doubt it is a problem in practice, but I think you could write a pathological program to demonstrate the potential issue. Actually I now have a dim memory of someone having written a blog about this, but it could be a false memory.

Edit: Ah, I think this might be it:


In my 8 years of using Rust, I've only ever had to worry about heap fragmentation once.

At a previous company, we were using WebAssembly to package up the code for some data processing stuff, and part of the flow required allocating space for the host to write tensor data into. What I found was that we'd allocate a ~1MB tensor full of image data, and the WebAssembly instance's memory usage would grow linearly over time until it OOM'd.

It turned out we had a weird sequence of allocations that the allocator didn't handle very well:

  1. Allocate 1MB for the tensor
  2. Run the first operation
  3. Allocate something small
  4. Allocate a new 1MB tensor for the next iteration
  5. Free the first tensor
  6. Free something small
  7. Repeat
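The sequence above can be sketched in plain Rust, using Vec&lt;u8&gt; buffers as stand-ins for the tensors (the function name is mine, not from the original code). The point is that two 1MB buffers plus the small allocation are live at once, so a good allocator can reuse the freed block each iteration, while a poor one keeps requesting fresh memory:

```rust
/// Simulate the allocation sequence from the post with `Vec<u8>` buffers.
/// Returns the peak number of live heap bytes, which stays bounded even
/// though total allocation traffic grows with `iters`.
fn simulate_iterations(iters: usize) -> usize {
    let mut tensor = vec![0u8; 1 << 20]; // 1. allocate 1MB for the tensor
    let mut peak_live = tensor.len();
    for _ in 0..iters {
        // 2. run the operation (elided)
        let small = vec![0u8; 64]; // 3. allocate something small
        let next = vec![0u8; 1 << 20]; // 4. allocate a new 1MB tensor
        peak_live = peak_live.max(tensor.len() + small.len() + next.len());
        tensor = next; // 5. free the first tensor (dropped on assignment)
        drop(small); // 6. free the small allocation
    } // 7. repeat
    peak_live
}
```

Even after many iterations the peak live size is only ~2MB plus the small allocation; the linear growth described above came from the allocator, not the program.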

It was only after wrapping the global allocator and printing addresses on every alloc/free/realloc that I noticed the addresses increasing monotonically. That's when I realised I was seeing heap fragmentation: the allocator was forced to keep asking the host for more memory because it couldn't reuse the previous space, even though only ~2MB was technically allocated at any one time.
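A wrapper like the one described might look roughly like this (a sketch, not the original code). It delegates to the system allocator and counts calls; to actually use it you would install it with the #[global_allocator] attribute:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static ALLOCS: AtomicUsize = AtomicUsize::new(0);
static FREES: AtomicUsize = AtomicUsize::new(0);

/// Sketch of a counting wrapper around the system allocator.
struct CountingAlloc;

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ptr = unsafe { System.alloc(layout) };
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        // A real debugging session would record `ptr` and `layout.size()`
        // here; note that printing from inside `alloc` can itself allocate,
        // so a fixed-size ring buffer is safer than eprintln!.
        ptr
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        FREES.fetch_add(1, Ordering::Relaxed);
        unsafe { System.dealloc(ptr, layout) };
    }
}
```

With this installed globally, monotonically increasing addresses on a workload with bounded live memory are the telltale sign of fragmentation.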

Once I knew the problem, the fix was dead simple: stop using the wee_alloc crate, because it's a poor allocator and prone to these sorts of leaks.


Really? How?

If my program does:

  1. Allocate a big thing.
  2. Allocate a small thing.
  3. Allocate another big thing.
  4. Deallocate the small thing.

How do I then defragment memory by moving the second big thing into the now-free space of the small thing?


I never actually meant to imply anything about the allocator's capabilities in this regard, but only to push back on the idea that Rust objects aren't relocatable. They are, and it's baked into the language at a pretty fundamental level. It's one of the main reasons that self-references are so problematic.

I just forgot to mention the caveat that to successfully move an object from one location to another you have to somehow acquire an allocation for the new location. Assuming you have an appropriate allocator installed, the simplest way to do this in practice is to call realloc for the second big thing's allocation with the size it already has. That gives the allocator permission to move it somewhere else, which it can use to compact memory.
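In safe Rust, a practical stand-in for such a realloc call is Vec::shrink_to_fit: it performs a shrinking reallocation, and the allocator is free to move the buffer in the process. A small sketch (the function name is mine):

```rust
/// Over-allocate a Vec, then ask the allocator to shrink it.
/// `shrink_to_fit` may reallocate and move the heap buffer, which is exactly
/// the "permission to relocate" discussed above.
fn shrink_demo() -> (usize, usize) {
    let mut v = Vec::with_capacity(1024);
    v.extend_from_slice(&[1u8, 2, 3]);
    let before = v.capacity();
    v.shrink_to_fit(); // shrinking realloc; buffer may move
    (before, v.capacity())
}
```

The borrow checker guarantees no references into the old buffer survive the call, so the move is invisible to safe code.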


I don't understand that. Surely for the allocator to compact memory it would have to move things around in memory. Wouldn't that break all the references to things in said memory? One can't just move a vec around in memory on a whim.

Sure you can, and Vec does it all of the time. You're correct that it will break any extant references, but Rust has specialized machinery to ensure that none of those exist: the borrow checker.
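A small illustration of that machinery (my own sketch): while a shared reference into a Vec is alive, the Vec cannot be mutated, and therefore cannot reallocate and move its buffer out from under the reference.

```rust
/// The borrow checker forbids mutation (and hence reallocation) of a Vec
/// while a reference into it is still in use.
fn borrow_checker_demo() -> i32 {
    let mut v = vec![1, 2, 3];
    let first = &v[0];
    // v.push(4); // rejected at compile time: cannot borrow `v` as mutable
    //            // while `first` (an immutable borrow) is still in use
    let value = *first;
    v.push(4); // fine here: `first` is no longer used past this point
    value
}
```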


I still don't get it. I appreciate that a vec may be moved when I try to grow it. And I appreciate that the borrow checker will ensure no references get broken in the process.

However, I don't see how that helps if I have arrived at a fragmented heap situation. If I try to create or grow a large vec but the available memory is only scattered around in small packets, there is no way to compact all that to make the required space without moving other objects in my program around. And that can't be done, because there may be live references to them.

If a Vec can be mutated, it means the underlying memory is not shared in any way, which allows re-allocation (and allows the Vec to grow). However, for shared allocations, for example Arcs with a strong count > 1, the memory cannot be mutated or re-allocated. This is how Rust works.
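Arc::get_mut makes this distinction observable (a small sketch of my own): it hands out a mutable reference only while the strong count is exactly 1.

```rust
use std::sync::Arc;

/// Mutable access to an Arc's contents is only possible while it is the
/// sole owner; once the allocation is shared, mutation is refused.
fn arc_mutation_demo() -> (bool, bool) {
    let mut a = Arc::new(vec![1, 2, 3]);
    let sole = Arc::get_mut(&mut a).is_some(); // strong count == 1: Some(_)
    let b = Arc::clone(&a); // strong count is now 2
    let shared = Arc::get_mut(&mut a).is_some(); // shared: None
    drop(b);
    (sole, shared)
}
```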

I think in a long running program, if you have something like a cache, you can periodically clear the cache (freeing all the memory and allowing memory allocation to begin again). Or perhaps you can loop back to the very beginning of the program, meaning every value gets dropped (unless you are using some kind of static), and start again. Basically a "soft" restart. But with a good memory allocator, it is fairly unlikely you will have a problem anyway, I think.
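The periodic-clear idea might look like this (an illustrative sketch; the type and its policy are hypothetical). Replacing the map with a fresh one releases both the entries and the table's own backing storage:

```rust
use std::collections::HashMap;

/// Hypothetical cache that is "soft restarted" after `limit` insertions,
/// dropping every entry so the allocator can start over from a clean slate.
struct Cache {
    map: HashMap<u64, Vec<u8>>,
    inserts: usize,
    limit: usize,
}

impl Cache {
    fn new(limit: usize) -> Self {
        Cache { map: HashMap::new(), inserts: 0, limit }
    }

    fn insert(&mut self, key: u64, value: Vec<u8>) {
        if self.inserts >= self.limit {
            // Replace rather than `clear()`: this frees the table itself,
            // not just the entries.
            self.map = HashMap::new();
            self.inserts = 0;
        }
        self.map.insert(key, value);
        self.inserts += 1;
    }

    fn len(&self) -> usize {
        self.map.len()
    }
}
```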

With a long lived cache of variable sized items you can definitely have fragmentation problems, given enough time; servers can stay up for weeks or longer. This is a well known problem, and restarting servers periodically is sometimes used to manage it. With a cache of fixed sized items this is almost never a problem.

With modern allocators and 64-bit virtual memory (well, in practice it's usually 48 bits, but alas) I would say that heap fragmentation is not a big issue in practice. If an object is big enough, then it gets to live in its own memory mapping, while smaller objects live in regions managed by the allocator together with other objects of similar size. In a bad case of fragmentation you will get a bunch of regions populated with only one object. Yes, it results in a bunch of "wasted" RAM, but those regions can be reused for new objects, so it's better to view this memory as over-reservation of RAM by the allocator to improve efficiency of future allocations.


So how about the good old embedded systems technique of grabbing all the memory you need up front and keeping hold of it?

For example, rather than allocating and deallocating vecs, just keep them around after you have used them. v.clear() will empty them but hang on to the memory.
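That behaviour is easy to check (a small sketch of my own): clear() drops the elements but leaves the capacity, so the buffer can be refilled without touching the allocator.

```rust
/// `Vec::clear` empties the vector but keeps its allocation, so the buffer
/// can be reused across iterations without going back to the allocator.
fn reuse_buffer() -> (usize, usize) {
    let mut v: Vec<u8> = Vec::with_capacity(4096);
    v.extend(std::iter::repeat(0u8).take(4096));
    v.clear(); // length goes to 0; capacity is retained
    (v.len(), v.capacity())
}
```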

You might find the new allocator API interesting, then. It allows you to implement stuff like bump allocators, which can more or less do the same thing.
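The core idea of a bump allocator can be sketched without the (still unstable) Allocator trait. This toy version hands out offsets into a fixed arena rather than raw pointers, and only frees everything at once; names and design are mine, for illustration:

```rust
/// Minimal bump-allocator sketch: allocations advance an offset through a
/// fixed arena; individual frees are a no-op, and `reset` frees everything.
struct Bump {
    arena: Vec<u8>,
    offset: usize,
}

impl Bump {
    fn new(size: usize) -> Self {
        Bump { arena: vec![0; size], offset: 0 }
    }

    /// Returns the offset of a `size`-byte allocation aligned to `align`
    /// (which must be a nonzero power of two), or None if the arena is full.
    fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
        let start = (self.offset + align - 1) & !(align - 1); // round up
        let end = start.checked_add(size)?;
        if end > self.arena.len() {
            return None; // arena exhausted
        }
        self.offset = end;
        Some(start)
    }

    /// Free everything at once, as when a program phase ends.
    fn reset(&mut self) {
        self.offset = 0;
    }
}
```

Because nothing is ever freed individually, a bump allocator cannot fragment; the trade-off is that memory is only reclaimed in bulk.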

