Lazy_static without Mutexes


#1

tl;dr I want to have a global array of objects I have defined. I can do it with lazy_static but that (seems to) need a mutex. Yuk!

What I would like to have is simple. Just a Vec<Object> stored at the same scope as main. That does not compile.

or…

struct Store {
    list: Vec<Object>,
}
static STORE: Store = Store { list: vec![] };

That does not compile.

So digging up some old code…

use std::sync::Mutex;
use lazy_static::lazy_static;

struct Store {
    list: Vec<Object>,
}

lazy_static! {
    static ref STORE: Mutex<Store> = {
        let store = Store { list: vec![] };
        Mutex::new(store)
    };
}

Do I need to wrap it in a Mutex? This is all single-threaded. Why do I need the overhead of a Mutex and the risk of deadlocks?


#2

It’s because globals are, well, global in scope. Global variables don’t know how many threads of execution your program will have, so there needs to be some kind of scheme in place to make sure that access to them is safe. A Mutex is one such construct that will do the trick, but you could also rig up some other type that’s lighter in weight and unsafe impl Sync for it. Then it would be on you to make sure that your impl is actually safe.
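For illustration, a minimal sketch of that second option (the SingleThreadCell name and the Vec<Object> payload are made up for the example); it is only sound if the program really never touches STORE from a second thread:

use std::cell::RefCell;

struct Object {
    name: String,
}

// A wrapper that promises Sync so it can sit in a plain static.
// SAFETY: nothing here synchronises anything; this is only sound if the
// program never touches STORE from more than one thread.
struct SingleThreadCell<T>(RefCell<T>);

unsafe impl<T> Sync for SingleThreadCell<T> {}

static STORE: SingleThreadCell<Vec<Object>> = SingleThreadCell(RefCell::new(Vec::new()));

fn main() {
    STORE.0.borrow_mut().push(Object { name: "first".to_string() });
    println!("{} object(s) stored", STORE.0.borrow().len());
}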


#3

But I know how many threads I will have. One.

Sync is not much better. There will only ever be one thread, and the data is not actually safe to share between threads, far from it.


#4

It’s the trait bound that things need to satisfy in order to be placed in a lazy_static: https://doc.rust-lang.org/std/marker/trait.Sync.html


#5

I need global access to a data structure from one thread. There will only ever be one thread.

Is that possible, without the overhead of structures designed to facilitate multi-threading?


#6

Is lazy_static the correct approach?
It works, but is it designed for multi-threading (and therefore inappropriate in my case)? Is there another way to have a global structure?


#7

Maybe you’d like thread_local! better? You can think of this like a global in a single-threaded app, and it doesn’t have a Sync requirement on the type. You still need something like RefCell if you want mutability though.
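A minimal sketch of that suggestion, assuming an Object type like the one in the first post:

use std::cell::RefCell;

struct Object {
    name: String,
}

thread_local! {
    // One instance per thread; with a single thread this behaves like a global.
    static STORE: RefCell<Vec<Object>> = RefCell::new(Vec::new());
}

fn main() {
    STORE.with(|store| {
        store.borrow_mut().push(Object { name: "first".to_string() });
    });
    STORE.with(|store| {
        for obj in store.borrow().iter() {
            println!("stored: {}", obj.name);
        }
    });
}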


#8

How many millions of times per second are you going to be accessing this structure? An uncontended mutex can be locked in ~20 nanoseconds.


#9

The true overhead is deadlocks.

This is in a recursive function, so locking it is very problematic. I can change the design so the locks do not collide (I think), but that is a lot of cognitive overhead.
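A sketch (not from the thread) of the trap being described here, using a global counter as a stand-in for the real store: with std's non-reentrant Mutex, a recursive function that recurses while still holding the guard never comes back.

use std::sync::Mutex;
use lazy_static::lazy_static;

lazy_static! {
    static ref DEPTH: Mutex<u32> = Mutex::new(0);
}

fn recurse(n: u32) {
    // The guard from this lock is still alive when we recurse…
    let mut depth = DEPTH.lock().unwrap();
    *depth += 1;
    if n > 0 {
        // …so this nested lock() deadlocks (or panics): the one thread
        // is waiting on a Mutex it already holds.
        recurse(n - 1);
    }
}

fn main() {
    recurse(1); // hangs or panics, even though there is only one thread
}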


#10

Is there a reason not to use thread_local! in this case, as sfackler mentioned? It’ll never lock (the RefCell inside only ever panics on a conflicting borrow, and it allows multiple concurrent reads), and it has less overhead because it isn’t a lock.

If you are sure you’re single-threaded, then thread_local! will also live as long as you need it to.


#11

Did you consider giving up on global state and passing it via an argument instead?


#12

If you’re trying to mutably access the value from multiple places at once (even within a single thread), Rust will not allow you to do that. Even if you use a RefCell instead of a Mutex, it will panic (or, with try_borrow, return an error) when you try to borrow the value mutably without releasing the previous borrow. So if this is the case, you’ll have to rethink your design and avoid multiple simultaneous borrows.

Note that making immutable simultaneous borrows is fine. RwLock will allow you to do it in a multi-threaded context (so it will work in a static variable), and a thread-local RefCell will also work.
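A small sketch of those borrow rules in isolation (the variable names are made up):

use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(vec![1, 2, 3]);

    // Any number of simultaneous immutable borrows is fine.
    let a = cell.borrow();
    let b = cell.borrow();
    println!("{} {}", a.len(), b.len());

    // A mutable borrow while `a` and `b` are still alive is rejected:
    // borrow_mut() would panic, try_borrow_mut() reports the conflict.
    assert!(cell.try_borrow_mut().is_err());

    drop(a);
    drop(b);

    // Once the earlier borrows are released, mutation works.
    cell.borrow_mut().push(4);
    assert_eq!(cell.borrow().len(), 4);
}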


#13

Global state belongs in global variables.
I am using one thread.
thread_local! is working well. There is a bit of fluff…
VAR.with(|f| do_some_thing_with(&*f.borrow())) instead of do_some_thing_with(&VAR).
Passing a value around would touch every function signature, in some cases quite tangentially to what the function actually does.
What I am doing here is storing objects in a global array (a Vec) and describing the relationships between them with usize indexes into the array. This means I can do all sorts of things by copying the index that the borrow checker would not let me do with the objects themselves.
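For concreteness, a minimal sketch of that index-based pattern with a made-up Node type: objects live in a thread-local Vec and refer to each other by usize index rather than by reference.

use std::cell::RefCell;

struct Node {
    name: String,
    // Relationships are plain indexes into STORE, not references,
    // so they can be copied freely without borrow-checker fights.
    children: Vec<usize>,
}

thread_local! {
    static STORE: RefCell<Vec<Node>> = RefCell::new(Vec::new());
}

fn add_node(name: &str) -> usize {
    STORE.with(|s| {
        let mut list = s.borrow_mut();
        list.push(Node { name: name.to_string(), children: Vec::new() });
        list.len() - 1
    })
}

fn link(parent: usize, child: usize) {
    STORE.with(|s| s.borrow_mut()[parent].children.push(child));
}

fn main() {
    let root = add_node("root");
    let leaf = add_node("leaf");
    link(root, leaf);
    STORE.with(|s| {
        let list = s.borrow();
        println!("{} -> {}", list[root].name, list[list[root].children[0]].name);
    });
}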


#14

Precisely. Mostly I read from the global variable. So locking it is even more perverse!


#15

Agreed about the fluff. I usually wrap access to thread locals like this with a pair of functions if the thread local is common enough. Should at least make it a bit better?

fn with_xxx<O, F>(func: F) -> O
where
    F: FnOnce(&Xxx) -> O,
{
    XXX.with(|refcell| func(&refcell.borrow()))
}

fn with_xxx_mut<O, F>(func: F) -> O
where
    F: FnOnce(&mut Xxx) -> O,
{
    XXX.with(|refcell| func(&mut refcell.borrow_mut()))
}

Then at least it’s with_xxx(|f| do_some_thing_with(f)) or with_xxx(do_some_thing_with) rather than the whole incantation every time.

But I think that’s about as good as global state gets in Rust. You could make a macro to wrap the with_xxx(|f| …) call for you, but that would be a bit overdoing it and would obscure what’s actually happening.
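For completeness, a sketch of the declarations the wrappers above assume (the Xxx struct and its field are placeholders); it compiles when combined with the with_xxx / with_xxx_mut functions above:

use std::cell::RefCell;

#[derive(Default)]
struct Xxx {
    list: Vec<String>,
}

thread_local! {
    // The thread-local that the wrapper functions close over.
    static XXX: RefCell<Xxx> = RefCell::new(Xxx::default());
}

fn main() {
    with_xxx_mut(|xxx| xxx.list.push("hello".to_string()));
    println!("{} item(s)", with_xxx(|xxx| xxx.list.len()));
}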