Freeing unused/cached physical pages in the heap?

Hi, I am using Rust to write a VPN client on iOS, and iOS has a 15 MB hard memory limit for VPN clients (the packet tunnel extension). I have flows coming in and going away, and in the process they do some heap allocation/freeing - but I want to be sure that the memory manager does not hold onto freed physical pages beyond a limit, and instead gives them back to the OS - especially when I know I'm reaching the 15 MB limit, at which point I want to call something like 'give back as many physical pages as possible'. Is there any way to do that?
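To make it concrete, this is roughly the hook I wish I had - a hypothetical wrapper around the system allocator that tracks live heap bytes so I know when I'm near the limit; the names here are made up, and the 'release pages' part is exactly what I'm missing:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical wrapper around the system allocator that keeps a running
/// total of live heap bytes, so the tunnel code can tell when it is getting
/// close to the 15 MB limit.
struct CountingAlloc;

static LIVE_BYTES: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let p = System.alloc(layout);
        if !p.is_null() {
            LIVE_BYTES.fetch_add(layout.size(), Ordering::Relaxed);
        }
        p
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        LIVE_BYTES.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout);
    }
}

#[global_allocator]
static A: CountingAlloc = CountingAlloc;

fn maybe_release_pages() {
    // ~12 MB soft threshold, leaving headroom under the 15 MB hard limit.
    if LIVE_BYTES.load(Ordering::Relaxed) > 12 * 1024 * 1024 {
        // This is the missing piece: "give back as many physical pages
        // as possible" to the OS.
    }
}

fn main() {
    let _buf = vec![0u8; 1024 * 1024];
    maybe_release_pages();
}
```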

Modulo memory leaks, memory will be deallocated when the variables which own it go out of scope. There is no runtime garbage collection system at the language level in Rust. Your program and the libraries you depend on may hold on to memory explicitly for long periods; if this happens to such an extent that it's a problem, finding a solution is probably a case-by-case situation.

Using async involves choosing an executor, which is a runtime system, and which can build up resources as your program runs. If you're using async, you'll need some sort of backpressure to constrain it. Others in this forum can address the async scenario much better than I can.
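I can at least sketch what "backpressure" usually amounts to in practice: bound your queues. A minimal sketch with a bounded tokio channel (assuming tokio is the executor, which the original poster hasn't said):

```rust
// Cargo.toml (sketch): tokio = { version = "1", features = ["full"] }
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // A bounded channel: once 64 packets are queued, `send` waits instead of
    // buffering without limit, so memory use stays capped.
    let (tx, mut rx) = mpsc::channel::<Vec<u8>>(64);

    tokio::spawn(async move {
        while let Some(pkt) = rx.recv().await {
            // process the packet...
            drop(pkt);
        }
    });

    for i in 0u8..255 {
        // `send` applies backpressure: it suspends when the queue is full.
        if tx.send(vec![i; 1500]).await.is_err() {
            break;
        }
    }
}
```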

This will depend on which allocator you are using. The standard allocator does not support it, but jemalloc may.
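If you do try jemalloc from Rust, the usual route is the tikv-jemallocator crate. A minimal sketch (I have not verified that jemalloc builds and links inside an iOS packet tunnel extension):

```rust
// Cargo.toml (sketch):
// [dependencies]
// tikv-jemallocator = "0.5"

#[global_allocator]
static ALLOC: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;

fn main() {
    // With jemalloc in place, how aggressively freed pages are returned to
    // the OS is governed by its decay settings (e.g. dirty_decay_ms /
    // muzzy_decay_ms), configured through jemalloc's MALLOC_CONF mechanism
    // at startup.
    let v: Vec<u8> = vec![0; 4 * 1024 * 1024];
    drop(v); // jemalloc decides when the backing pages go back to the OS
}
```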

I have sometimes wondered about this.

Typically our computers have hundreds of processes running, all allocating and freeing memory ferociously.

Is it certain that when my process does a "free" the OS immediately makes that memory available to other processes? Or does it do some sneaky optimisation like keeping that memory aside for my process, assuming that I'm likely to want it back again pretty soon, and only handing it over to someone else if it really has to?

I have no idea. Just pondering...

Hi alice, would you know what option in jemalloc will allow me to instruct jemalloc to return memory back to the OS ? Basically something like the glibc.malloc.trim_threshold tunable - Memory Allocation Tunables (The GNU C Library)

It depends on the definition of the "OS". If the allocator is from GNU libc and running on the Linux kernel, it keeps free()d memory and won't return it to the kernel immediately [1]. I don't know about other combinations, but many general-purpose operating systems do this kind of optimization.
[1]: MallocInternals - glibc wiki

To reinforce what @yashi is saying:

Most OSes do not expose memory allocation primitives of arbitrary size, but rather only expose allocation primitives for some multiple of the OS's page size. For Linux, the primary high-level interface for system memory allocation these days is mmap, which sits on top of a more complicated combination of system calls. mmap can give you a whole page, but it can't give you eight bytes. The same is true on Windows, more or less, although the details differ. In both cases, memory can only be freed back to the OS in whole allocations, as well.
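To illustrate that page granularity, here is roughly what a direct mmap/munmap round trip looks like from Rust through the libc crate (a Linux-flavoured sketch; you get whole pages, never eight bytes):

```rust
// Cargo.toml (sketch): libc = "0.2"
use std::ptr;

fn main() {
    // Ask the kernel for one anonymous page (typically 4 KiB).
    let len: usize = 4096;
    let p = unsafe {
        libc::mmap(
            ptr::null_mut(),
            len,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
            -1,
            0,
        )
    };
    assert_ne!(p, libc::MAP_FAILED);

    unsafe {
        // Touch it so a physical page actually gets backed in.
        *(p as *mut u8) = 1;
        // Freeing works in whole mappings/pages too.
        libc::munmap(p, len);
    }
}
```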

malloc and free come from the C standard library, and are required by the C spec to support small allocations. Furthermore, those allocations must be independent - making two small allocations and then freeing the first one must not invalidate the second one. In principle, the way the malloc interface is specced allows it to behave like mmap and friends do and to return a large allocation in response to a small request, but in practice, developers tend to complain when address space runs out way before they've asked for that much memory.

In practice, this usually means that libc makes large allocations from the OS on your program's behalf, and manages those allocations to satisfy malloc calls. When you free an allocation from malloc, libc likely will not release the underlying allocation back to the OS, either because it contains other malloc allocations you haven't freed yet, or because in practice if you allocated something once you'll probably need to do it again, or both.
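glibc does expose a knob to undo some of that caching: malloc_trim. A hedged sketch through the libc crate (glibc/Linux only, and assuming your libc crate version exposes the binding; Apple's libmalloc on iOS has no direct equivalent that I know of):

```rust
// Cargo.toml (sketch): libc = "0.2"
fn main() {
    // Allocate and free a few megabytes through the normal Rust allocator,
    // which on Linux/glibc goes through malloc/free.
    let buffers: Vec<Vec<u8>> = (0..8).map(|_| vec![0u8; 1024 * 1024]).collect();
    drop(buffers);

    // Ask glibc to return as much cached free memory to the kernel as it can.
    // `0` means "keep no extra padding at the top of the heap".
    unsafe {
        libc::malloc_trim(0);
    }
}
```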

This also glosses over a whole other complicating factor, which is that not all of your program's allocated space is in physical RAM at any one time, except by coincidence. Your OS will take pages that are allocated and written, but which haven't been touched in a while, and swap them out if needed to make space for other pages which are being used. So your large allocation might not actually be using any physical memory until you need it.

I just meant anything that is not my Rust code and not the Rust standard allocator - so libc, or whatever, and downward.

Yes, that is what I was just starting to think must be the case. It would be crazy for the kernel to be fiddling around with billions of tiny allocations.

You remind me. When we rolled out our first embedded Linux product almost 20 years ago, we made all the application's memory allocations on startup and touched every page, so that we knew the memory we wanted was actually available - not just some "virtual promise" that would bomb out at random when the memory was first actually accessed.
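That pre-touch trick looks roughly like this in Rust (just a sketch; it assumes a 4 KiB page size instead of querying it):

```rust
fn main() {
    const PAGE: usize = 4096; // assumed page size; query it properly in real code
    const RESERVE: usize = 64 * 1024 * 1024;

    // Allocate the whole working set up front...
    let mut arena = vec![0u8; RESERVE];

    // ...and write to one byte per page so every page is actually backed by
    // physical memory now, instead of faulting in (or failing) later.
    let mut i = 0;
    while i < arena.len() {
        arena[i] = 1;
        i += PAGE;
    }

    // Keep the compiler from optimising the touches away; in the real product
    // `arena` would be the pool the application works out of for its lifetime.
    std::hint::black_box(&arena);
}
```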

iOS does not have swap, so that won't happen on iOS. The resident (RSS) pages will just sit there belonging to the process that allocated them unless the process itself gives them back, which is why I was keen on ensuring that the standard library allocator doesn't sit around with pages that it doesn't need.
