Using the try_lock API for shared concurrency

I am trying to implement a basic caching system, which requires me to have a data field wrapped inside Arc<Mutex<HashMap<City, Temp>>>.
Let's say I have two REST API endpoints, encapsulated in the fetch_all and subscribe methods respectively. The first call returns a very heavy HashMap of city data, and the second returns a stream that periodically fetches the latest updates from the server and inserts them into the cache.
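For context, here is a minimal, dependency-free sketch of the shared state described above. `City`, `Temp`, and `new_cache` are hypothetical stand-ins for the real types and setup, and the inserts stand in for what fetch_all and subscribe would do through their own Arc clones:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical stand-ins for the real domain types in the question.
type City = String;
type Temp = f64;

// Build the shared cache; each task gets its own Arc clone of the handle.
fn new_cache() -> Arc<Mutex<HashMap<City, Temp>>> {
    Arc::new(Mutex::new(HashMap::new()))
}

fn main() {
    let cache = new_cache();
    // fetch_all would seed the map with the heavy initial payload:
    cache.lock().unwrap().insert("Berlin".to_string(), 21.5);
    // subscribe would keep inserting updates through its own clone:
    let handle = Arc::clone(&cache);
    assert_eq!(handle.lock().unwrap().get("Berlin"), Some(&21.5));
}
```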

The problem I am facing with this design: after spawning a tokio::task::spawn for each method, how do I ensure that neither task locks the other out completely, i.e. that there is some fairness?

```rust
let task1 = tokio::task::spawn(async move {
    for value in response {
        match shared_hm.try_lock() {
            Ok(mut guard) => { guard.insert(value.city, value.temp); } // short critical section
            Err(_) => {} // lock busy: skip this update?
        }
    }
});
let task2 = ...;
```

Is this approach to acquiring write access good? Is using try_lock fine, or are there issues with it?

Using a Mutex in an async task is fine as long as the operation performed with the lock held is very short and non-blocking, and the lock is not held across an await. If that is the case, then I would just use lock. Using try_lock won't improve fairness and will increase the chances of starvation.
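To illustrate, here is a sketch of the recommended pattern. It uses std::sync::Mutex and threads so it stays dependency-free; the same shape applies to the tokio tasks in the question (with tokio::sync::Mutex and `lock().await` if the guard ever needs to live across an await). The types and loop bodies are assumptions, not the original code:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// A writer and a reader share the cache; each critical section is short and
// the guard is dropped immediately, so plain lock() stays fair enough.
fn run_cache_demo() -> usize {
    let cache: Arc<Mutex<HashMap<String, f64>>> = Arc::new(Mutex::new(HashMap::new()));

    let writer = {
        let cache = Arc::clone(&cache);
        thread::spawn(move || {
            for i in 0..100 {
                // lock() blocks until it is our turn; no try_lock, no lost updates.
                cache.lock().unwrap().insert(format!("city{i}"), i as f64);
            }
        })
    };
    let reader = {
        let cache = Arc::clone(&cache);
        thread::spawn(move || {
            for _ in 0..100 {
                let _len = cache.lock().unwrap().len(); // short read; guard dropped at ';'
            }
        })
    };

    writer.join().unwrap();
    reader.join().unwrap();
    let len = cache.lock().unwrap().len();
    len
}

fn main() {
    assert_eq!(run_cache_demo(), 100);
}
```

With try_lock in the writer, any contended iteration would silently drop an update; with lock(), every update lands and neither side can permanently starve the other as long as the guard is released promptly.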
