use std::{io::Write, time::Duration};
use std::net::TcpStream;

fn main() {
    // at this point resident memory is ~5 MB
    std::thread::sleep(Duration::new(5, 0));
    {
        for _ in 1..10 {
            // connect to any Redis server (or any TCP endpoint)
            let mut tcp = TcpStream::connect("127.0.0.1:6379").unwrap();
            // ~26 MB buffer, larger than the socket send buffer
            let data: Vec<u8> = vec![49; 26748528];
            let _ = tcp.write_all(&data);
            std::thread::sleep(Duration::new(5, 0));
        }
    }
    // now resident memory is ~27 MB
    std::thread::sleep(Duration::new(500, 0));
}
When I run this and use TcpStream to write a vector bigger than the socket send buffer, the process's memory usage increases and never drops back. Is this some Linux feature?
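(For context, the resident figures quoted above can be read from inside the process; a small sketch, assuming Linux with /proc mounted:)

    use std::fs;

    // Print the VmRSS line from /proc/self/status, i.e. the resident memory
    // figure that the "now resident memory is ..." comments refer to.
    fn print_rss() {
        if let Ok(status) = fs::read_to_string("/proc/self/status") {
            if let Some(line) = status.lines().find(|l| l.starts_with("VmRSS")) {
                println!("{}", line);
            }
        }
    }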
No, there is no memory leak. The buffer is, semantically, allocated within the loop and de-allocated at the end of each iteration. The optimizer may change this.
In any case, there is no memory leak in your code.
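If the repeated ~26 MB allocations are themselves a concern, the buffer can be hoisted out of the loop and reused; a minimal sketch based on the code above:

    use std::{io::Write, net::TcpStream, time::Duration};

    fn main() -> std::io::Result<()> {
        // Allocate the ~26 MB buffer once and reuse it for every write,
        // avoiding an allocate/free cycle per iteration.
        let data: Vec<u8> = vec![49; 26748528];
        for _ in 1..10 {
            let mut tcp = TcpStream::connect("127.0.0.1:6379")?;
            tcp.write_all(&data)?;
            std::thread::sleep(Duration::new(5, 0));
        }
        Ok(())
    }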
But on Linux (Debian 11.2, AlmaLinux), memory usage grows to 4 GB when I write a lot of big data in several threads. On Windows and macOS, the memory drops back to normal once the writes finish.
Does this memory usage increase when you increase the number of iterations? If not, that's definitely not a leak; more likely the allocator simply doesn't eagerly return the freed memory to the OS, since it expects that memory to be needed again soon.
More iterations do not increase the memory. I used dhat to analyse it: the memory blocks are already released by Rust, but I find the process's memory stays at 27.1 MB, and a long time later it is still around 27 MB... Is it a kernel setting?
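(For reference, a dhat check like the one described here can be set up roughly as follows; a sketch assuming the dhat crate, whose report shows which blocks were freed and which were still live at exit:)

    // Requires `dhat = "0.3"` in Cargo.toml. On exit the profiler writes
    // dhat-heap.json, which records every heap block and whether it was freed.
    #[global_allocator]
    static ALLOC: dhat::Alloc = dhat::Alloc;

    fn main() {
        let _profiler = dhat::Profiler::new_heap();
        // ... run the write loop from the question here ...
        let data: Vec<u8> = vec![49; 26748528];
        drop(data); // dhat records this block as allocated and freed
    }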
Yes, that's most likely exactly what I said: the program returned this memory to the allocator, but the allocator hasn't had a reason (such as another free) to give it back to the OS.
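On glibc-based Linux you can ask the allocator to release its free pages explicitly; a minimal sketch, assuming the libc crate and a glibc target (this is not needed for correctness, only to make the resident figure drop):

    // Sketch: explicitly return free heap pages to the kernel.
    // malloc_trim is glibc-specific, hence the cfg gate.
    fn trim_heap() {
        #[cfg(all(target_os = "linux", target_env = "gnu"))]
        unsafe {
            // 0 = keep no extra padding; returns 1 if memory was released.
            libc::malloc_trim(0);
        }
    }

Calling this after the loop should bring the resident figure back toward the ~5 MB baseline; how much is actually released depends on heap fragmentation.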