Durable concurrent hashmap optimized for read- or write-heavy workloads

Hi. For a personal project I needed in-memory storage with a durability option,
so I wrote one and released it.

durableMap comes in two flavors:

RConcurrentStorage: optimized for read-heavy concurrent workloads
WConcurrentStorage: optimized for write-heavy concurrent workloads
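The post doesn't show the crate's internals, so the following is only a minimal sketch of a common way to realize these two trade-offs: a single `RwLock<HashMap>` lets many readers proceed in parallel (good for read-heavy loads), while sharding the map across several locks lets writers to different shards avoid blocking each other (good for write-heavy loads). The names and structure here are illustrative, not the crate's actual types:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::RwLock;

// Read-heavy flavor: one lock over the whole map.
// Many readers run in parallel; writers serialize on the single lock.
struct ReadOptimized {
    table: RwLock<HashMap<String, String>>,
}

impl ReadOptimized {
    fn new() -> Self {
        Self { table: RwLock::new(HashMap::new()) }
    }
    fn insert(&self, key: String, val: String) {
        self.table.write().unwrap().insert(key, val);
    }
    fn get(&self, key: &str) -> Option<String> {
        self.table.read().unwrap().get(key).cloned()
    }
}

// Write-heavy flavor: shard the map across several locks so concurrent
// writers only contend when they hash to the same shard.
struct WriteOptimized {
    shards: Vec<RwLock<HashMap<String, String>>>,
}

impl WriteOptimized {
    fn new(n: usize) -> Self {
        Self { shards: (0..n).map(|_| RwLock::new(HashMap::new())).collect() }
    }
    fn shard_for(&self, key: &str) -> &RwLock<HashMap<String, String>> {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        &self.shards[(h.finish() as usize) % self.shards.len()]
    }
    fn insert(&self, key: String, val: String) {
        self.shard_for(&key).write().unwrap().insert(key, val);
    }
    fn get(&self, key: &str) -> Option<String> {
        self.shard_for(key).read().unwrap().get(key).cloned()
    }
}
```

Sharding trades a slightly more expensive lookup (hash the key twice, once to pick the shard) for much lower write contention.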

It also uses a non-blocking WAL (write-ahead log) engine to persist data to disk, and after a restart it reloads everything from the log.
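This is not the crate's actual WAL format, but the replay idea can be sketched in a few lines: append one record per operation, then on startup read the log back and re-apply each record to rebuild the in-memory map. A real non-blocking engine would hand records to a background writer instead of writing synchronously as this sketch does:

```rust
use std::collections::HashMap;
use std::fs::File;
use std::io::{BufRead, BufReader, Write};
use std::path::Path;

// Append an insert record: "I<tab>key<tab>value" (keys/values without tabs).
fn log_insert(wal: &mut File, key: &str, val: &str) -> std::io::Result<()> {
    writeln!(wal, "I\t{}\t{}", key, val)
}

// Append a remove record: "R<tab>key".
fn log_remove(wal: &mut File, key: &str) -> std::io::Result<()> {
    writeln!(wal, "R\t{}", key)
}

// On startup, replay the log in order to rebuild the map.
fn replay(path: &Path) -> std::io::Result<HashMap<String, String>> {
    let mut map = HashMap::new();
    if !path.exists() {
        return Ok(map); // first run: nothing to reload
    }
    for line in BufReader::new(File::open(path)?).lines() {
        let line = line?;
        let mut parts = line.splitn(3, '\t');
        match (parts.next(), parts.next(), parts.next()) {
            (Some("I"), Some(k), Some(v)) => {
                map.insert(k.to_string(), v.to_string());
            }
            (Some("R"), Some(k), _) => {
                map.remove(k);
            }
            _ => {} // skip a malformed or partially written trailing record
        }
    }
    Ok(map)
}
```

Because later records override earlier ones, the log can grow without bound; real engines periodically compact it into a snapshot.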

use serde::{Deserialize, Serialize};
// RConcurrentStorage / WConcurrentStorage are imported from the crate
// (see its docs for the exact path).

#[tokio::main]
async fn main() {
    // Optimized for write-heavy concurrency (the API is the same):
    // let dl = WConcurrentStorage::<Document>::open("tester".to_owned(), 1000).await;

    // Optimized for read-heavy concurrency:
    let dl = RConcurrentStorage::<Document>::open("tester".to_owned(), 1000).await;

    // Insert / update
    let _ = dl.insert_entry("102xa".to_owned(), Document::new()).await;

    // Remove by key
    dl.remove_entry(&"102xa".to_owned()).await;

    // Iterate over the current contents
    dl.table.read().await
        .iter()
        .for_each(|(key, doc)| {
            println!("==> {} -> {}", key, &doc.funame)
        });
}


#[derive(Serialize, Deserialize, Clone)]
struct Document {
    pub funame: String,
    pub age: i32,
}

impl Document {
    pub fn new() -> Self {
        Document {
            funame: String::from("DanyalMh"),
            age: 24,
        }
    }
}
