Unsafe cache: Sanity check

I want to cache a huge number of items very quickly and space efficiently.
So I wrote a cache that consists of a Box<Vec<u8>> for storing the state in compressed form and a
HashMap<StateKey, StateInfo>, where the StateKey holds indices into the vec and StateInfo is just the part of the State I am interested in storing.
This is self-referential, and therefore I needed some unsafe in the implementation of StateKey.
I thought about it for quite some time and I think this should be safe, but I am no unsafe guru, so the question is: is this actually safe?

I would also be interested in safe alternatives that don't result in losing a lot of perf.
I wish I could use bumpalo, but I need the ability to drop the most recent allocation.
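Roughly, this is the shape of it (a simplified sketch, not my real code; the field names and the StateInfo contents here are made up):

```rust
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hypothetical stand-in for "the part of the State I am interested in".
struct StateInfo {
    bound: u32,
}

// Self-referential key: a raw pointer back into the cache's own storage
// plus a range, so Hash/Eq can read the compressed bytes without owning them.
struct StateKey {
    data: *const Vec<u8>,
    start: usize,
    len: usize,
}

impl StateKey {
    // SAFETY: only valid while the owning StateCache (and its Box<Vec<u8>>)
    // is still alive -- this is exactly the invariant in question.
    fn bytes(&self) -> &[u8] {
        unsafe { &(*self.data)[self.start..self.start + self.len] }
    }
}

impl PartialEq for StateKey {
    fn eq(&self, other: &Self) -> bool {
        self.bytes() == other.bytes()
    }
}
impl Eq for StateKey {}

impl Hash for StateKey {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.bytes().hash(state);
    }
}

struct StateCache {
    // Boxed so the Vec header itself keeps a stable address even if the
    // StateCache moves; the buffer behind it can still reallocate on extend.
    data: Box<Vec<u8>>,
    map: HashMap<StateKey, StateInfo>,
}
```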

I've only skimmed your code, but I'm wondering why you couldn't use an Rc for your storage to avoid the unsafe?

I would have to use Arc since the cache needs to be Send. Sadly, that adds both space and runtime overhead. I benchmarked this before but cannot find the numbers anymore.

Also, if I could live with adding an allocation for every key in the hashmap, I could simply use a Vec<u8> or Box<[u8]> as a key and be done with it.
I just realized that's not true.

Given this is used with multiple threads, your unsafe code is definitely dangerous, since you could access the vec's old allocation after it's been freed by an extend.

get_info takes &mut self, so this should not be a problem.

But the StateKeys don't have a lifetime limit, so a StateKey could still be hashed while get_info is running.

Why not just check on calculate_key if a key is already present, so the range will uniquely define a key. Then you don't need to have storage be part of key, and you don't need unsafe code. The range itself would uniquely define a key for purposes of hashing and equality checking.


I don't understand how you imagine this check working. I need to check using the HashMap, and to do that I need the key.

Your code is wrong; this "safe" code has UB:

```rust
let mut sc = StateCache::new();
let key = sc.calculate_key(&state);
drop(sc);
assert_eq!(key, key);
```

Try running it with Miri, which is under Tools in the playground. Miri detects many forms of UB, so if it fails under Miri, it is definitely wrong. Note: passing Miri does not make it correct, as Miri doesn't catch all UB.


I was imagining a brute force search through the vec, but if that was prohibitively expensive you could create a map from a hash of the compressed (or uncompressed) data to a set of ranges in your vec.
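As a hypothetical sketch (names made up), that second variant could look like interning: key the map by a hash of the compressed bytes, keep candidate ranges per hash, and compare bytes in the vec only on collision. No unsafe and no self-reference:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

struct StateCache {
    // All compressed states, back to back.
    data: Vec<u8>,
    // Hash of the bytes -> candidate (start, len) ranges into `data`.
    map: HashMap<u64, Vec<(usize, usize)>>,
}

fn hash_bytes(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

impl StateCache {
    fn new() -> Self {
        StateCache { data: Vec::new(), map: HashMap::new() }
    }

    // Returns the range of an existing equal entry, or appends the bytes
    // and returns the new range. The range then uniquely identifies the key.
    fn intern(&mut self, bytes: &[u8]) -> (usize, usize) {
        let h = hash_bytes(bytes);
        let candidates = self.map.entry(h).or_default();
        for &(start, len) in candidates.iter() {
            if &self.data[start..start + len] == bytes {
                return (start, len); // already cached
            }
        }
        let start = self.data.len();
        self.data.extend_from_slice(bytes);
        candidates.push((start, bytes.len()));
        (start, bytes.len())
    }
}
```

The per-hash Vec of ranges is only walked on a hash collision, so the common case is one map lookup.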

I see. I am afraid that is not feasible then. Thanks for taking the time to think about my problem.

You are right, this is definitely a problem. But luckily it is only a problem in the playground code. In the real code the cache is in its own module, and since calculate_key is not pub, nobody can actually write this code. In fact, no StateKey can ever leave its cache, which makes me think it is safe. I could of course make calculate_key unsafe to signal the extra invariant you have to check, but I am not sure how much that buys me. Maybe I should also make sure that the cache field gets dropped before data, although I don't see why the hashmap should call hash or eq on drop. Also, thanks for your time.

If you are using this in a threaded environment, it is definitely unsafe. E.g. if you add something to the list while reading from it, things might go very badly if the vector needs to reallocate.

There's also the thing where the &mut self in either method on StateCache would alias with the raw pointer.

If you are single-threaded, just use an Rc<RefCell<Vec<u8>>>. The fact that these types are single-threaded can make them very very fast. If you are multi-threaded, you should find some entirely different solution.
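A sketch of what I mean, assuming a key type like yours (names made up): each key holds a cheap Rc clone of the shared storage, so there are no raw pointers and no unsafe:

```rust
use std::cell::RefCell;
use std::hash::{Hash, Hasher};
use std::rc::Rc;

struct StateKey {
    // Shared handle to the cache's storage; cloning the Rc is just a
    // refcount bump, no allocation.
    data: Rc<RefCell<Vec<u8>>>,
    start: usize,
    len: usize,
}

impl StateKey {
    // Run `f` on this key's slice of the shared storage. RefCell permits
    // many simultaneous immutable borrows, so nested calls are fine.
    fn with_bytes<R>(&self, f: impl FnOnce(&[u8]) -> R) -> R {
        let data = self.data.borrow();
        f(&data[self.start..self.start + self.len])
    }
}

impl PartialEq for StateKey {
    fn eq(&self, other: &Self) -> bool {
        self.with_bytes(|a| other.with_bytes(|b| a == b))
    }
}
impl Eq for StateKey {}

impl Hash for StateKey {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.with_bytes(|b| b.hash(state));
    }
}
```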

I mean, anyway: what problem are you actually trying to solve here? Why can't your cache just be a hash map without the fancy byte stuff? If you are multi-threaded you can use one of the thread-safe hashmaps such as chashmap or evmap.

The Rc<RefCell<Vec<u8>>> approach is what I started out with, since the cache is used only on a single thread. But I still would like to send the cache to a different thread, which makes the RefCell problematic.

The cache is used during a branch & bound algorithm to cut redundant branches. This cache is the hottest part of my code according to the profiler. I already changed the hash function to ahash, which gave a nice speedup.

So a simple HashMap<State, StateInfo> is bad because State is too big and I would need to Clone it.
I could use HashMap<StateKey, StateInfo> with struct StateKey(Vec<u8>) or struct StateKey(Box<[u8]>), but this requires an allocation, which is too slow.
I could make struct StateKey([u8; MAX_SIZE]), but having to choose MAX_SIZE at compile time is difficult, and whatever I choose I will always waste some memory.

I am currently investigating the last option.

Looks like I am going to go with the last option. Perf is good, the code is safe, and I guess I can live with a panic when MAX_SIZE was too small. Thank you all!
