Does TcpStream leak memory on Debian/AlmaLinux?

Here is my code:

use std::{io::Write, time::Duration};
use std::net::TcpStream;

fn main() {
    // memory usage is about 5 MB here
    std::thread::sleep(Duration::new(5, 0));
    {
        for _ in 1..10 {
            // 127.0.0.1:6379 is a local Redis server; any TCP peer would do
            let mut tcp = TcpStream::connect("127.0.0.1:6379").unwrap();
            let data: Vec<u8> = vec![49; 26748528];
            let _ = tcp.write_all(&data);
            std::thread::sleep(Duration::new(5, 0));
        }
    }
    // memory usage is about 27 MB here
    std::thread::sleep(Duration::new(500, 0));
}

When I write a vector bigger than the socket send buffer to the TcpStream, memory usage increases and stays high. Is this some Linux feature?

This is Rust 1.65.0, built for x86_64-unknown-linux-gnu.

No, there is no memory leak. The buffer is, semantically, allocated within the loop and deallocated at the end of each iteration. The optimizer may change this, but at any rate, there is no memory leak in your code.
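To make the drop timing concrete, here is a minimal sketch (with a hypothetical Noisy stand-in type, not from the original post) showing that a value created in the loop body is dropped at the end of every iteration:

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("buffer dropped");
    }
}

fn main() {
    for i in 0..3 {
        let _buf = Noisy; // stands in for the Vec allocated in the loop
        println!("iteration {i}");
    } // _buf is dropped here, at the end of each iteration
}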


But on Linux (Debian 11.2, AlmaLinux), memory grows to 4 GB when I write a lot of big data across several threads. On Windows or macOS, once I finish writing, memory drops back to normal.

Does this memory usage grow when you increase the number of iterations? If not, that's definitely not a leak - more likely the allocator simply doesn't eagerly return the freed memory to the OS (since it expects the memory will be needed again soon).
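One way to observe this directly is to compare the process's resident set size before and after dropping a large buffer. Here is a Linux-only sketch, assuming glibc malloc (the default on Debian/AlmaLinux); the vm_rss_kb helper is a hypothetical convenience that just parses /proc/self/status:

use std::fs;

// Read the VmRSS line from /proc/self/status (Linux-only).
fn vm_rss_kb() -> Option<u64> {
    let status = fs::read_to_string("/proc/self/status").ok()?;
    status
        .lines()
        .find(|line| line.starts_with("VmRSS:"))?
        .split_whitespace()
        .nth(1)?
        .parse()
        .ok()
}

fn main() {
    println!("before alloc: {:?} kB", vm_rss_kb());
    let data: Vec<u8> = vec![49; 26748528];
    println!("after alloc:  {:?} kB", vm_rss_kb());
    drop(data);
    // On glibc, RSS may stay elevated here even though the Vec is freed:
    // the allocator keeps the pages around, expecting to reuse them.
    println!("after drop:   {:?} kB", vm_rss_kb());
}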


Across iterations it doesn't increase memory. I used dhat to analyse it: the memory blocks are already released by Rust, but the process's memory still sits at 27.1 MB, and after a long time it is still at 27 MB... Is it a kernel setting?
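For reference, heap profiling with the dhat crate is usually wired in like this (a sketch, assuming dhat as a dependency). Note that it reports what the program hands back to the allocator, not what the OS sees as resident:

use dhat::{Alloc, Profiler};

// Route all heap allocations through dhat's instrumenting allocator.
#[global_allocator]
static ALLOC: Alloc = Alloc;

fn main() {
    // The heap report is written when _profiler is dropped at the end of main.
    let _profiler = Profiler::new_heap();
    // ... run the workload under test here ...
}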

Yes, that's most likely exactly what I said: the program returned this memory to the allocator, but the allocator hasn't had a reason (such as another free) to give it back to the OS.
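If you want to force the issue on glibc, malloc_trim asks the allocator to return freed memory to the OS immediately. A sketch, assuming the libc crate as a dependency and glibc malloc as the system allocator (under other allocators the call is unavailable or does nothing useful):

// Ask glibc to hand as much freed memory as possible back to the OS.
fn release_freed_memory() {
    // SAFETY: malloc_trim only touches the allocator's own bookkeeping.
    unsafe {
        libc::malloc_trim(0);
    }
}

Calling this after dropping the big buffers should make the RSS drop on Debian/AlmaLinux.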


But in production the process's memory grows to 7 GB and causes out-of-memory errors. I don't want it to cache that much memory. How can I optimize this?

Has anyone else had this problem? My memory usage is out of control now.
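A common fix is to swap the global allocator for jemalloc, which returns unused memory to the OS far more eagerly than glibc malloc. A sketch, assuming the tikv-jemallocator crate as a dependency:

// In Cargo.toml: tikv-jemallocator = "0.5"
use tikv_jemallocator::Jemalloc;

// Route all Rust heap allocations through jemalloc instead of glibc malloc.
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // Same workload as before; jemalloc releases freed pages back to the
    // OS on its decay schedule instead of retaining them indefinitely.
}

Alternatively, staying on glibc, you can limit arena growth with the MALLOC_ARENA_MAX environment variable, or call malloc_trim periodically as shown above.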