When is it efficient to lock the Arc<Mutex<_>> in the context of an HTTP request

This is for a situation where a service function handling an HTTP request needs to call an API that reads from a cache (a BTreeMap underneath) and may or may not write to it.

Does calling lock().unwrap() on the Arc early, and then running some other long-running operations before actually modifying the data managed by the Arc, effectively cripple requests-per-second performance?

Or is it more efficient to call lock().unwrap() only right before the data is actually used? (The code looks a bit uglier that way.)

Here is my simplified usage scenario.

// will be called by concurrent http requests
fn http_get_list(req: &mut Request) -> IronResult<Response> {

    let arc: Arc<Mutex<Cache>> = Cache::extract_from_request(req);
    let mut cache = arc.lock().unwrap(); // get the underlying object here
    // ... some other operations and function calls ...
    let result = api_get_list(&mut cache, param);
    // ...
}

// may or may not write to the cache, and may take longer to execute
fn api_get_list(cache: &mut Cache, param: &str) -> String {
    let in_cache = cache.contains_value(param);
    if use_cache && in_cache {
        cache.get_value(param)
    } else {
        let value = get_value_from_db(param); // get a copy from the db (expensive operation)
        cache.push(param, value.clone());     // save it to the cache
        value
    }
}

// should I pass the Arc<Mutex<Cache>> and take the lock in here instead?
fn api_get_list(arc_cache: Arc<Mutex<Cache>>, param: &str) -> String {
    // ...
}

I'm using the Iron framework.

I would keep the time it's locked as short as possible. Especially since you are using a Mutex and not an RwLock. An RwLock allows several readers at the same time, so it allows more parallel use, while a Mutex requires exclusive use.
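For the cache in your question, a minimal sketch of that idea could look like the following (Cache is just a BTreeMap here, and get_value_from_db is the placeholder from your snippet): check the cache under a short read lock, let the guard drop before the expensive database call, then take a short write lock only to insert the result.

use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

type Cache = BTreeMap<String, String>;

// placeholder for the expensive operation in the original snippet
fn get_value_from_db(param: &str) -> String {
    format!("value for {}", param)
}

fn api_get_list(cache: &Arc<RwLock<Cache>>, param: &str) -> String {
    // short read lock: many requests can check the cache in parallel
    if let Some(value) = cache.read().unwrap().get(param) {
        return value.clone();
    } // read guard is dropped here

    // the expensive DB call runs with no lock held at all
    let value = get_value_from_db(param);

    // short write lock only to insert the result; two threads may both miss
    // and both fetch, in which case the second insert simply overwrites,
    // which is usually acceptable for a cache
    cache.write().unwrap().insert(param.to_string(), value.clone());
    value
}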


Thanks, I've changed it to Arc<RwLock<Cache>> and I'm now passing it around as Arc<RwLock<Cache>>, so I can take the lock as close as possible to where the data is actually used.

I also found that the Iron framework already supports this RwLock pattern through persistent::State.
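For reference, the wiring looks roughly like this. CacheKey and the handler are made-up names, and I'm going from memory of the persistent docs, so treat the exact calls as an assumption; the point is that State<K> hands each request an Arc<RwLock<K::Value>>.

use iron::prelude::*;
use iron::status;
use iron::typemap::Key;
use persistent::State;
use std::collections::BTreeMap;

#[derive(Copy, Clone)]
struct CacheKey;
impl Key for CacheKey {
    type Value = BTreeMap<String, String>;
}

fn handler(req: &mut Request) -> IronResult<Response> {
    // State<K> gives out an Arc<RwLock<K::Value>> shared across requests
    let arc = req.get::<State<CacheKey>>().unwrap();
    let cache = arc.read().unwrap(); // shared read access, lock held briefly
    Ok(Response::with((status::Ok, format!("{} entries cached", cache.len()))))
}

fn main() {
    let mut chain = Chain::new(handler);
    // install the shared state as before/after middleware
    chain.link(State::<CacheKey>::both(BTreeMap::new()));
    Iron::new(chain).http("localhost:3000").unwrap();
}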

Note that writing to an RwLock still requires exclusive access. It works the same way as &T vs. &mut T.
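A quick standalone way to see the analogy with std's RwLock:

use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(0_u32);

    {
        // several read guards may coexist; each behaves like a &u32
        let a = lock.read().unwrap();
        let b = lock.read().unwrap();
        println!("readers see {} and {}", *a, *b);
    } // both read guards dropped here

    // a write guard behaves like &mut u32: it waits until it is the
    // only guard, just as &mut T excludes every other borrow
    *lock.write().unwrap() += 1;
    println!("after write: {}", *lock.read().unwrap());
}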

I like the RwLock because of the concurrent read access, but you really, really, really want to keep the time the lock is held down, otherwise you WILL bottleneck there. Your design should be such that you can keep the locks short, even if that means acquiring the lock multiple times while processing a request.

This is especially true if the delay is because of IO. What will end up happening is that your CPU sits idle while all the threads wait on the network/disk to do whatever it needs to do. If the delay comes from intensive CPU operations, this is probably less of a problem, since the CPU is a limited resource and the threads would back up behind it whether you held the lock or not.
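To make the IO point concrete, here is a small standalone simulation (not Iron code; thread::sleep stands in for the network/disk wait): four threads each hold the write lock across a 100 ms "database call", so they run one after another instead of in parallel.

use std::sync::{Arc, RwLock};
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let cache = Arc::new(RwLock::new(0_u32));
    let start = Instant::now();

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                let mut guard = cache.write().unwrap();
                thread::sleep(Duration::from_millis(100)); // IO while holding the lock
                *guard += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    // prints roughly 400 ms: the lock serialized the waiting threads,
    // even though the CPU was idle the whole time
    println!("elapsed with IO under the lock: {:?}", start.elapsed());
}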