Say we want to implement a thread-safe in-memory cache in Rust. I came up with the following code:
```rust
// `Weak` was unused, and `Mutex` (used below) was missing from the imports.
use std::{
    collections::HashMap,
    sync::{Arc, Mutex},
};

pub struct CacheManager {
    storage: HashMap<u8, String>,
}

#[derive(Debug)]
pub enum QueryResult {
    Hit(String),
    None,
}

impl CacheManager {
    pub fn new() -> Self {
        CacheManager {
            storage: HashMap::new(),
        }
    }

    pub fn query(&self, key: u8) -> QueryResult {
        match self.storage.get(&key) {
            Some(val) => QueryResult::Hit(val.clone()),
            None => QueryResult::None,
        }
    }

    pub fn insert(&mut self, key: u8, val: String) -> bool {
        if self.storage.contains_key(&key) {
            return false;
        }
        let old = self.storage.insert(key, val);
        assert!(old.is_none(), "key unexpectedly present");
        true
    }
}
```
And the user would use this cache like:

```rust
let cache_manager = Arc::new(Mutex::new(CacheManager::new()));
// ...
```

This seems to work fine.
But my question is: could we encapsulate this further so that our users don't need to call `Arc::new(Mutex::new(...))` on their own? (Since we're providing a thread-safe cache.) Rather, they would just call something like:

```rust
let c_m = ThreadSafeCache::new();
c_m.aquery(&15); // atomic query
c_m.ainsert(&15, String::from("nasd")); // atomic insert
// ...
```

I think this level of encapsulation is necessary for the sake of safety and convenience.
Bkkgbkjb:

> could we encapsulate this further so that our users don't need to call `Arc::new(Mutex::new(...))` on their own? (Since we're providing a thread-safe cache.) Rather, they would just call something like:
>
> ```rust
> let c_m = ThreadSafeCache::new();
> c_m.aquery(&15); // atomic query
> c_m.ainsert(&15, String::from("nasd")); // atomic insert
> // ...
> ```

To make this work, you just need to write a definition for `ThreadSafeCache`:
```rust
pub struct ThreadSafeCache(Arc<Mutex<CacheManager>>);

impl ThreadSafeCache {
    pub fn new() -> Self {
        ThreadSafeCache(Arc::new(Mutex::new(CacheManager::new())))
    }

    pub fn aquery(&self, key: u8) -> QueryResult {
        self.0.lock().expect("Mutex poisoned!").query(key)
    }

    pub fn ainsert(&self, key: u8, val: String) -> bool {
        self.0.lock().expect("Mutex poisoned!").insert(key, val)
    }
}
```
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=73d0829c12782131daff64d7dc913f6b
NB: I changed your API slightly by replacing some `&u8`s with `u8`.

You should use `lock` instead of `try_lock`. Otherwise you'll get spurious panics when `aquery` or `ainsert` races with themselves or each other.
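To see why, here's a minimal sketch (independent of the cache code, using a plain `Mutex<u8>`) showing that `try_lock` fails immediately when another thread holds the lock, rather than waiting for it:

```rust
use std::sync::{Arc, Mutex, TryLockError};
use std::thread;

fn main() {
    let m = Arc::new(Mutex::new(0u8));
    let m2 = Arc::clone(&m);

    // Hold the lock on the main thread for the whole test.
    let _guard = m.lock().unwrap();

    let contended = thread::spawn(move || {
        // `try_lock` does not block: it returns `WouldBlock` right away.
        // This is exactly where an `expect`-based `aquery`/`ainsert`
        // built on `try_lock` would spuriously panic under contention.
        matches!(m2.try_lock(), Err(TryLockError::WouldBlock))
    })
    .join()
    .unwrap();

    assert!(contended);
    println!("try_lock returned WouldBlock under contention");
}
```

`lock`, by contrast, simply blocks the calling thread until the lock is free, which is the behavior a caller of `aquery`/`ainsert` expects.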
2e71828 · October 9, 2021, 10:12am · #4
Thanks; I've corrected my post above.
Thanks for the reply! Sorry to say I was probably overly frightened by the `thread::spawn` stuff. At the time, I was just wondering: are there any traits I should impl to make the compiler happy?
@2e71828 Sorry to say, after playing with this `ThreadSafeCache` a little bit, I've found an inconvenience. Originally, users would write code like:
```rust
fn main() {
    let cache_manager = Arc::new(Mutex::new(CacheManager::new()));
    cache_manager.lock().unwrap().insert(13, String::from("13"));
    let mut threads = vec![];
    for i in 1..=12 {
        // !! natural & classical way to do this !!
        let cm = Arc::clone(&cache_manager);
        let handle = thread::spawn(move || {
            println!("in thread {}, query: {:?}", i, cm.lock().unwrap().query(15))
        });
        threads.push(handle);
    }
    threads.into_iter().for_each(|h| h.join().unwrap());
}
```
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=345c9469974fcdfd29cb9e0d9dbdd82b
This is fine. But after changing to `ThreadSafeCache`, this way doesn't work:
```rust
fn main() {
    let cache_manager = ThreadSafeCache::new();
    cache_manager.ainsert(13, String::from("13"));
    let mut threads = vec![];
    for i in 1..=12 {
        // !! We cannot call Arc::clone here !!
        let cm = Arc::clone(&cache_manager);
        let handle = thread::spawn(move || {
            println!("in thread {}, query: {:?}", i, cm.aquery(15))
        });
        threads.push(handle);
    }
    threads.into_iter().for_each(|h| h.join().unwrap());
}
```
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=d9a7e46db1757f27cc85062c7f5584c9
Would you mind further explaining how you expect users to use this `ThreadSafeCache` when sticking to `std::thread`? (In my last reply, I mistakenly tried with `crossbeam::thread::scope` and `let cm = &cache_manager`, and it passed the compiler.)
To explain a little bit: I think we'd better find a way to use `ThreadSafeCache` the same old way as when we had an explicit `Arc<Mutex<_>>` around it. Specifically, ownership gets passed into different threads just like what happens when we use `Arc::clone`. Ideally the user-land API stays the same, i.e. `Arc::clone(&thread_safe_cache_manager)`, but a compromise is also understandable.
If you make `ThreadSafeCache` derive `Clone`:

```rust
#[derive(Clone)]
pub struct ThreadSafeCache(Arc<Mutex<CacheManager>>);
```

then you can do this, which will clone the internal `Arc`:

```rust
let cm = cache_manager.clone();
```
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=343245ac222489129cedde947bcf44e5
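To illustrate the effect, here's a hedged, self-contained sketch (using a plain `HashMap` in place of `CacheManager`, and an `aquery` returning `Option<String>` as my simplification): the derived `Clone` duplicates only the `Arc` handle, so both values share one underlying cache.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Simplified stand-in for the real cache, to keep the sketch self-contained.
#[derive(Clone)]
pub struct ThreadSafeCache(Arc<Mutex<HashMap<u8, String>>>);

impl ThreadSafeCache {
    pub fn new() -> Self {
        ThreadSafeCache(Arc::new(Mutex::new(HashMap::new())))
    }
    pub fn ainsert(&self, key: u8, val: String) -> bool {
        self.0.lock().unwrap().insert(key, val).is_none()
    }
    pub fn aquery(&self, key: u8) -> Option<String> {
        self.0.lock().unwrap().get(&key).cloned()
    }
}

fn main() {
    let a = ThreadSafeCache::new();
    let b = a.clone(); // clones the Arc handle, not the cache contents
    a.ainsert(13, String::from("13"));
    // Both handles see the same underlying storage.
    assert_eq!(b.aquery(13), Some(String::from("13")));
    println!("clone shares the same cache");
}
```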
Thanks for the reply! Actually, last night after posting the above two questions and going to bed, I stumbled on this approach too (in bed, yes!). But after some further consideration, I decided to implement a plain `rc_clone` method instead, which looks like:
```rust
impl ThreadSafeCache {
    // ...
    pub fn rc_clone(&self) -> Self {
        Self(Arc::clone(&self.0))
    }
    // ...
}
```
Reasons:

- When impl'ing `std::clone::Clone`, we signal that the object gets a full, independent clone (two caches in this case!). This may confuse users.
- It would stop us from impl'ing a real `Clone` in the future.
- By impl'ing an `rc_clone` method, we put `rc` explicitly in the method name, which I think clears up users' confusion somewhat.
system closed · January 8, 2022, 1:57am · #10
This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.