I am trying to understand how Rust's memory allocation works, so I am experimenting with creating a Vec on a small Linux VM:
let size_as_u8: usize = (size_definition*1024*1024).try_into().unwrap();
let mut hog = vec![0u8; size_as_u8];
This creates an allocation of size_as_u8 bytes, but does not page in any memory (except for one page).
VM total memory is 2G, swap is 2G.
If I set size_definition to 3000 (approx 3G) this succeeds.
If I set size_definition to 4000, this results in:
memory allocation of 4194304000 bytes failed
Aborted (core dumped)
The memory allocation failing is consistent with Linux's memory limits (allocating more than the available memory plus swap should fail; overcommit_memory is set to 0).
What I am trying to figure out is whether Rust stopped this allocation itself, without the OS telling it to, or whether it did request the memory, got ENOMEM from the OS, and aborted execution as a result.
If it did get ENOMEM, how can I catch ENOMEM (in gdb), and where is the code that handles this?
You are right. I realised I could verify my hypothesis by setting Linux to always overcommit (echo 1 > /proc/sys/vm/overcommit_memory); with that, the allocation succeeds.
If I enable core dumps and look at the backtrace of the core file, I see Rust calling abort because the allocation failed with ENOMEM (an errno value, not a signal), and aborting on out-of-memory seems to be the Rust convention. But I would like to find the part of Rust that captures the ENOMEM from the OS.