I have a situation where a service function handling an HTTP request needs to call an API that reads from a cache (a BTreeMap underneath) and may or may not write to it.
Does calling lock().unwrap() on the Arc early, and then running some other long operations before the data managed by the Arc is actually modified, effectively cripple requests-per-second performance?
Or is it more efficient to call lock().unwrap() only right before the cache is actually used? (The code looks a bit uglier that way.)
Here is my simplified usage scenario.
use iron::prelude::*;        // Request, Response, IronResult
use std::sync::{Arc, Mutex};

// will be called by concurrent http requests
fn http_get_list(req: &mut Request) -> IronResult<Response> {
    let arc: Arc<Mutex<Cache>> = Cache::extract_from_request(req);
    let mut cache = arc.lock().unwrap(); // get the underlying object here
    // ... some other operations and function calls ...
    let result = api_get_list(&mut cache, param); // param comes from the request
    // ...
}
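To make the comparison concrete, the "lock only right before use" variant I have in mind looks roughly like this (the _late_lock name is only for illustration, and param again stands for whatever was parsed from the request):

// alternative: keep the Arc around and lock only right before the cache is touched
fn http_get_list_late_lock(req: &mut Request) -> IronResult<Response> {
    let arc: Arc<Mutex<Cache>> = Cache::extract_from_request(req);
    // ... some other operations and function calls, no lock held yet ...
    let result = {
        let mut cache = arc.lock().unwrap(); // guard lives only inside this block
        api_get_list(&mut cache, param)
    }; // guard dropped here, lock released
    // ...
}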
// may or may not write to cache, and may execute longer
fn api_get_list(cache: &mut Cache, param: &str) -> String {
    let in_cache = cache.contains_value(param);
    if use_cache && in_cache {
        cache.get_value(param)
    } else {
        let value = get_value_from_db(param); // get a copy from db (expensive operation)
        cache.push(param, value.clone());     // save to cache
        value
    }
}
// or should I pass the Arc<Mutex<Cache>> and take the lock in here instead?
fn api_get_list(arc_cache: Arc<Mutex<Cache>>, param: &str) -> String {
    // ...
}
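Roughly something like this is what I mean (just a sketch reusing the same Cache methods from above; whether it is a good idea to hold the lock across the db call is part of my question):

fn api_get_list(arc_cache: Arc<Mutex<Cache>>, param: &str) -> String {
    let mut cache = arc_cache.lock().unwrap(); // lock is taken inside the API function
    if use_cache && cache.contains_value(param) {
        cache.get_value(param)
    } else {
        let value = get_value_from_db(param); // expensive db call, lock still held here
        cache.push(param, value.clone());     // save to cache
        value
    }
}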
I'm using the Iron framework.