Understanding a memory leak with parking_lot and HashMap

I was testing a program with the crossbeam SkipList. I tried parking_lot's RwLock with a HashMap as well, but it looks like memory is not being released. I tried jemalloc, but the issue is the same. Calling add on the same set of keys just keeps increasing memory.

I know that not all freed memory is returned to the OS by the allocator, but if I am clearing and reinserting the same keys, it should not consume more memory.

Code

use std::collections::HashMap;
use actix_web::{HttpServer, web, get, App};


use parking_lot::RwLock;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .app_data(web::Data::new(AppState {
                data: RwLock::new(HashMap::new())
            }))
            .service(add)
            .service(clear)
    })
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}

struct AppState {
    data: RwLock<HashMap<String, String>>,
}


#[get("/add")]
async fn add(data: web::Data<AppState>) -> String {
    let max_entries = 100663296 as u64;

    let mut m = data.data.write();
    for i in 0..max_entries / 10 {
        m.insert(format!("str-{i}"), format!("str-{i}-{i}"));
    }

    let size = m.len();

    format!("count {size}! from thread {:?}\n", std::thread::current().id())
}

#[get("/clear")]
async fn clear(data: web::Data<AppState>) -> String {
    let mut m = data.data.write();
    m.clear();
    let size = m.len();

    format!("countt {size}! from thread {:?}\n", std::thread::current().id())
}

Cargo.toml

[package]
name = "skiptest"
version = "0.1.0"
edition = "2021"


[dependencies]
actix-web = "4"

parking_lot = "0.12.1"

Any help would be greatly appreciated.

Calling clear on a hash map does not actually release the memory. Try calling HashMap::capacity after clearing it.

You could then use a method like HashMap::shrink_to_fit or HashMap::shrink_to to reduce the capacity, freeing part of the allocated memory.
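
A minimal sketch of what to expect here (the exact capacities are implementation details of the standard HashMap, so the numbers are illustrative):

use std::collections::HashMap;

fn main() {
    let mut m: HashMap<String, String> = HashMap::new();
    for i in 0..1_000_000 {
        m.insert(format!("str-{i}"), format!("str-{i}-{i}"));
    }
    let cap_before = m.capacity();

    m.clear(); // drops all entries, but keeps the table allocated for reuse
    assert_eq!(m.len(), 0);
    assert_eq!(m.capacity(), cap_before); // capacity is unchanged

    m.shrink_to_fit(); // releases the now-unneeded table allocation
    assert!(m.capacity() < cap_before);
}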

Two things:

  1. If I insert the same elements again and again, memory doubles each time.
  2. Even if I replace my clear function's code with the following, memory is still not released:

let mut m = data.data.write();
*m = HashMap::with_capacity(20);

From the actix_web docs for app_data

HttpServer::new accepts an application factory rather than an application instance; the factory closure is called on each worker thread independently. Therefore, if you want to share a data object between different workers, a shareable object needs to be created first, outside the HttpServer::new closure and cloned into it. Data<T> is an example of such a sharable object.

Modifying your main function to comply with that gets the behavior you'd expect:

let data = web::Data::new(AppState {
    data: RwLock::new(HashMap::new()),
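    // `sys` backs the /stats endpoint (exercised via curl further down),
    // which reports process memory using the sysinfo crate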
    sys: Mutex::new(System::new_with_specifics(
        RefreshKind::new().with_processes(ProcessRefreshKind::new()),
    )),
});

HttpServer::new(move || {
    App::new()
        .app_data(data.clone())
        .service(add)
        .service(clear)
        .service(stats)
})

You were calling clear on a different HashMap than the one that had been added to, because different threads were servicing the requests and each thread was creating its own private AppState.
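
To see the factory behavior directly, here is a small (hypothetical) counter sketch showing the closure running once per worker:

use std::sync::atomic::{AtomicUsize, Ordering};

use actix_web::{App, HttpServer};

static FACTORY_CALLS: AtomicUsize = AtomicUsize::new(0);

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        // Called once per worker thread; any state built here is per-worker.
        let n = FACTORY_CALLS.fetch_add(1, Ordering::SeqCst) + 1;
        println!("factory call #{n} on {:?}", std::thread::current().id());
        App::new()
    })
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}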

@semicoleon I tried the following code, but it still has the same behavior. No memory is released.

use std::collections::HashMap;
use actix_web::{HttpServer, web, get, App};


use parking_lot::RwLock;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let data = web::Data::new(AppState {
        data: RwLock::new(HashMap::new()),
    });
    HttpServer::new(move || {
        App::new()
            .app_data(data.clone())
            .service(add)
            .service(clear)
    })
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}

struct AppState {
    data: RwLock<HashMap<String, String>>,
}


#[get("/add")]
async fn add(data: web::Data<AppState>) -> String {
    let max_entries = 100663296 as u64;

    let mut m = data.data.write();
    for i in 0..max_entries / 2 {
        m.insert(format!("str-{i}"), format!("str-{i}-{i}"));
    }

    let size = m.len();

    format!("count {size}! from thread {:?}\n", std::thread::current().id())
}

#[get("/clear")]
async fn clear(data: web::Data<AppState>) -> String {
    let mut m = data.data.write();
    m.clear();
    let size = m.len();

    format!("countt {size}! from thread {:?}\n", std::thread::current().id())
}

When I run that code on Windows 11, checking Task Manager for the memory values:

State     Memory
Started   2.0 MB
Added     6,849.5 MB
Cleared   3,160.1 MB

Replacing m.clear(); with *m = HashMap::new(); gives:

State     Memory
Started   2.0 MB
Added     6,849.4 MB
Cleared   24.3 MB

How are you checking the memory usage? It seems to be working as expected to me

Hi @semicoleon, I am checking memory via htop. I performed two runs; the values follow.

Using map.clear()

Started - 4 KB
Added   - 6828 MB
Clear   - 6828 MB

Replacing the map with a new one (*m = HashMap::new();)

Started   - 4 KB
Added     - 6828 MB
Replace   - 3692 MB
Added     - 10.3 GB
Replace   - 7.4 GB

Looks like only half of the memory is being freed.

I am running the binary in release mode on an AWS EC2 instance running CentOS.

Even if I change the add code to the following, memory still increases by 3 GB per add call.

#[get("/add")]
async fn add(data: web::Data<AppState>) -> String {
    let max_entries = 100663296 as u64;
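    // NOTE: `m` here is a fresh local map, not the shared state; it is
    // dropped when the handler returns, yet resident memory still grew.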
    let mut m = HashMap::new();
    for i in 0..max_entries / 2 {
        m.insert(format!("str-{i}"), format!("str-{i}-{i}"));
    }
    let size = m.len();
    format!("count00 {size}! from thread {:?}\n", std::thread::current().id())
}

Hmm can you try this version that just reads from stdin and see how it behaves?

use std::{collections::HashMap, time::Duration};

use bytesize::ByteSize;
use parking_lot::RwLock;
use sysinfo::{get_current_pid, ProcessExt, ProcessRefreshKind, RefreshKind, System, SystemExt};

fn main() {
    let lock = RwLock::<HashMap<String, String>>::new(HashMap::new());
    let mut buffer = String::new();
    let mut sys =
        System::new_with_specifics(RefreshKind::new().with_processes(ProcessRefreshKind::new()));

    let pid = get_current_pid().unwrap();
    loop {
        sys.refresh_process(pid);
        {
            let map = lock.read();
            let proc = sys.process(pid).unwrap();
            println!(
                "Length: {}, Capacity: {}, Memory: {}, Virtual: {}",
                map.len(),
                map.capacity(),
                ByteSize::b(proc.memory()).to_string_as(true),
                ByteSize::b(proc.virtual_memory()).to_string_as(true)
            );
        }

        println!("1: Add");
        println!("2: Clear");

        buffer.clear();
        std::io::stdin().read_line(&mut buffer).unwrap();

        match buffer.trim() {
            "1" => add(&lock),
            "2" => clear(&lock),
            other => println!("Unrecognized option: {other}"),
        }
    }
}

fn add(lock: &RwLock<HashMap<String, String>>) {
    let max_entries = 100663296 as u64;

    let mut m = lock.write();
    for i in 0..max_entries / 10 {
        m.insert(format!("str-{i}"), format!("str-{i}-{i}"));
    }

    let size = m.len();

    println!(
        "count {size}! from thread {:?}",
        std::thread::current().id()
    );
}

fn clear(lock: &RwLock<HashMap<String, String>>) {
    let mut m = lock.write();
    m.clear();
    let size = m.len();

    println!(
        "countt {size}! from thread {:?}",
        std::thread::current().id()
    )
}

I'm using sysinfo = "0.28.4" and bytesize = "1.1.0" there to make reporting the basic stats easier.
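
For reference, the dependency section for that CLI version would look like this (versions as stated above):

[dependencies]
parking_lot = "0.12.1"
sysinfo = "0.28.4"
bytesize = "1.1.0"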

Hi @semicoleon, here is the output of 3 runs that I did.

Run-1

- for i in 0..max_entries / 10 
- m.clear()


Length: 0, Capacity: 0, Memory: 3.3 MiB, Virtual: 1.0 GiB

1: Add
2: Clear
1
Length: 10066329, Capacity: 14680064, Memory: 1.4 GiB, Virtual: 2.4 GiB

1: Add
2: Clear
2
Length: 0, Capacity: 14680064, Memory: 1.4 GiB, Virtual: 2.4 GiB  
(No memory released after clearing the map's elements)

Run-2

- for i in 0..max_entries / 10 
- *m = HashMap::new();


Length: 0, Capacity: 0, Memory: 3.2 MiB, Virtual: 1.0 GiB

1: Add
2: Clear
1

Length: 10066329, Capacity: 14680064, Memory: 1.4 GiB, Virtual: 2.4 GiB

1: Add
2: Clear
2

Length: 0, Capacity: 0, Memory: 618.9 MiB, Virtual: 1.6 GiB

Run-3

- for i in 0..max_entries / 2 
- *m = HashMap::new();


Length: 0, Capacity: 0, Memory: 3.3 MiB, Virtual: 1.0 GiB

1: Add
2: Clear
1
Length: 50331648, Capacity: 58720256, Memory: 6.7 GiB, Virtual: 7.7 GiB

1: Add
2: Clear
2
Length: 0, Capacity: 0, Memory: 3.6 GiB, Virtual: 4.6 GiB

I can recommend the dhat crate. It lets you replace the global allocator and gives you a detailed allocation report that you can view in a web viewer. The setup is very minimal.

I'm not exactly sure it applies to your problem, but it shows you where allocations happen, with stack traces.
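
A minimal setup sketch, assuming dhat = "0.3" in Cargo.toml; the dhat-heap.json report is written when the profiler is dropped and can be loaded in the online DHAT viewer:

// Route all allocations through dhat's wrapping allocator.
#[global_allocator]
static ALLOC: dhat::Alloc = dhat::Alloc;

fn main() {
    // Heap profiling runs until `_profiler` is dropped at the end of main.
    let _profiler = dhat::Profiler::new_heap();

    // ... run the add/clear workload here ...
}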

You can try calling malloc_trim after you clear/reset the map to see if that gets you more memory back.
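
For example, on Linux with glibc, a standalone sketch assuming libc = "0.2" is added as a dependency (malloc_trim is glibc-specific):

use std::collections::HashMap;

fn main() {
    let mut m: HashMap<String, String> = HashMap::new();
    for i in 0..10_000_000u64 {
        m.insert(format!("str-{i}"), format!("str-{i}-{i}"));
    }

    m = HashMap::new(); // drop the old map and its table entirely

    // Ask glibc to return freed heap memory to the OS: trims the top of
    // the heap and releases unused pages elsewhere via madvise.
    unsafe {
        libc::malloc_trim(0);
    }

    println!("len after reset: {}", m.len());
}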

Another thing to try would be modifying the CLI version to remove anything that could possibly allocate between when you clear the map and check the memory.


This worked, @semicoleon. Thanks a lot.

In the actix-web version, without trim, here is the memory usage on a 30 GB machine. It eventually stopped allocating more memory after reaching 17.5 GiB for the same repeated runs.

 curl --location --request GET '127.0.0.1:8080/add'
count 50331648! from thread ThreadId(18)
 curl --location --request GET '127.0.0.1:8080/stats'
Length: 50331648, Capacity: 58720256, Memory: 6.7 GiB, Virtual: 8.3 GiB
 curl --location --request GET '127.0.0.1:8080/clear'
countt 0! from thread ThreadId(20)
 curl --location --request GET '127.0.0.1:8080/stats'
Length: 0, Capacity: 58720256, Memory: 6.7 GiB, Virtual: 8.3 GiB
 curl --location --request GET '127.0.0.1:8080/add'
count 50331648! from thread ThreadId(22)
 curl --location --request GET '127.0.0.1:8080/stats'
Length: 50331648, Capacity: 58720256, Memory: 10.3 GiB, Virtual: 11.8 GiB
 curl --location --request GET '127.0.0.1:8080/clear'
countt 0! from thread ThreadId(24)
 curl --location --request GET '127.0.0.1:8080/stats'
Length: 0, Capacity: 58720256, Memory: 10.3 GiB, Virtual: 11.8 GiB
 curl --location --request GET '127.0.0.1:8080/add'
count 50331648! from thread ThreadId(20)
 curl --location --request GET '127.0.0.1:8080/stats'
Length: 50331648, Capacity: 58720256, Memory: 13.9 GiB, Virtual: 15.4 GiB
 curl --location --request GET '127.0.0.1:8080/clear'
countt 0! from thread ThreadId(23)
 curl --location --request GET '127.0.0.1:8080/stats'
Length: 0, Capacity: 58720256, Memory: 13.9 GiB, Virtual: 15.4 GiB


curl --location --request GET '127.0.0.1:8080/stats'
Length: 50331648, Capacity: 58720256, Memory: 17.5 GiB, Virtual: 18.9 GiB

After 17.5 GiB it stopped allocating more memory.

I will go back to my original problem statement and try running the same thing with the crossbeam SkipMap.
