How is state held outside a given function in Rust? Static mut? How do I alter this function code to allow external control?

I am unfamiliar with Rust but comfortable with languages like C++, C#, and JavaScript.

SAMPLE CODE

Rapier is a physics simulator you can use in Rust. An example of setting up a simulation is given at:

https://www.rapier.rs/docs/user_guides/rust/getting_started

use rapier3d::prelude::*;

fn main() {
    let mut rigid_body_set = RigidBodySet::new();
    let mut collider_set = ColliderSet::new();

    /* Create the ground. */
    let collider = ColliderBuilder::cuboid(100.0, 0.1, 100.0).build();
    collider_set.insert(collider);

    /* Create the bounding ball. */
    let rigid_body = RigidBodyBuilder::dynamic()
        .translation(vector![0.0, 10.0, 0.0])
        .build();
    let collider = ColliderBuilder::ball(0.5).restitution(0.7).build();
    let ball_body_handle = rigid_body_set.insert(rigid_body);
    collider_set.insert_with_parent(collider, ball_body_handle, &mut rigid_body_set);

    /* Create other structures necessary for the simulation. */
    let gravity = vector![0.0, -9.81, 0.0];
    let integration_parameters = IntegrationParameters::default();
    let mut physics_pipeline = PhysicsPipeline::new();
    let mut island_manager = IslandManager::new();
    let mut broad_phase = DefaultBroadPhase::new();
    let mut narrow_phase = NarrowPhase::new();
    let mut impulse_joint_set = ImpulseJointSet::new();
    let mut multibody_joint_set = MultibodyJointSet::new();
    let mut ccd_solver = CCDSolver::new();
    let mut query_pipeline = QueryPipeline::new();
    let physics_hooks = ();
    let event_handler = ();

    /* Run the game loop, stepping the simulation once per frame. */
    for _ in 0..200 {
      physics_pipeline.step(
          &gravity,
          &integration_parameters,
          &mut island_manager,
          &mut broad_phase,
          &mut narrow_phase,
          &mut rigid_body_set,
          &mut collider_set,
          &mut impulse_joint_set,
          &mut multibody_joint_set,
          &mut ccd_solver,
          Some(&mut query_pipeline),
          &physics_hooks,
          &event_handler,
      );

      let ball_body = &rigid_body_set[ball_body_handle];
      println!("Ball altitude: {}", ball_body.translation().y);
    }
}

So basically, as you can see, we create a set of "objects" there, including the main one, physics_pipeline, which is then stepped forward in a for loop for a couple of hundred steps.

GOAL

But what if we actually want to run this continuously at a certain time rate (not just in a for loop like that)? And either advance it or turn it on/off via an external trigger?

1) CREATE PHYSICS OBJECTS AS STATIC MUT

Intuitively, I would want to rewrite this as:

use rapier3d::prelude::*;

static mut rigid_body_set: RigidBodySet;
static mut collider_set: ColliderSet;
static mut integration_parameters: IntegrationParameters;
static mut physics_pipeline: PhysicsPipeline;
static mut island_manager: IslandManager;
static mut broad_phase: DefaultBroadPhase;
static mut narrow_phase: NarrowPhase;
static mut impulse_joint_set: ImpulseJointSet;
static mut multibody_joint_set: MultibodyJointSet;
static mut ccd_solver: CCDSolver;
static mut query_pipeline: QueryPipeline;

fn initialize_simulation() {
unsafe {
    rigid_body_set = RigidBodySet::new();
    collider_set = ColliderSet::new();

    /* Create the ground. */
    let collider = ColliderBuilder::cuboid(100.0, 0.1, 100.0).build();
    collider_set.insert(collider);

    /* Create the bounding ball. */
    let rigid_body = RigidBodyBuilder::dynamic()
        .translation(vector![0.0, 10.0, 0.0])
        .build();
    let collider = ColliderBuilder::ball(0.5).restitution(0.7).build();
    let ball_body_handle = rigid_body_set.insert(rigid_body);
    collider_set.insert_with_parent(collider, ball_body_handle, &mut rigid_body_set);

    /* Create other structures necessary for the simulation. */
    integration_parameters = IntegrationParameters::default();
    physics_pipeline = PhysicsPipeline::new();
    island_manager = IslandManager::new();
    broad_phase = DefaultBroadPhase::new();
    narrow_phase = NarrowPhase::new();
    impulse_joint_set = ImpulseJointSet::new();
    multibody_joint_set = MultibodyJointSet::new();
    ccd_solver = CCDSolver::new();
    query_pipeline = QueryPipeline::new();
}
}

fn advance_simulation(){
unsafe {
    let gravity = vector![0.0, -9.81, 0.0];
    let physics_hooks = ();
    let event_handler = ();

    /* Run the game loop, stepping the simulation once per frame. */
        physics_pipeline.step(
            &gravity,
            &integration_parameters,
            &mut island_manager,
            &mut broad_phase,
            &mut narrow_phase,
            &mut rigid_body_set,
            &mut collider_set,
            &mut impulse_joint_set,
            &mut multibody_joint_set,
            &mut ccd_solver,
            Some(&mut query_pipeline),
            &physics_hooks,
            &event_handler,
        );
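        // (note: for this sketch to work, ball_body_handle below would also need to be stored somewhere, e.g. another static)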

        let ball_body = &rigid_body_set[ball_body_handle];
        println!("Ball altitude: {}", ball_body.translation().y);
}
}

The idea is that I want to be able to keep state and invoke the updates externally, since I will be triggering this from elsewhere. Is this how it is meant to be done in Rust?

I.e. if I want to change the demo code at the top to have separate initialize and advance functions like this, which can then be externally controlled, is this how it must be done?

In the Rust Playground I get warnings on any code that modifies even a simple static mut magic_int: i32 = 5;. I have read that static mut is terrifyingly unsafe.

But Rapier is used as the physics engine for Bevy, a game engine. How can it run such a physics simulation in a game engine without holding state the way I describe in my modified code?

2) SWITCH TO WHILE LOOP WITH WAITING AND BOOL ON/OFF

An alternative approach would be to turn the original for loop into a while loop and let it run indefinitely (unless interrupted). We could set a static mutable atomic bool, should_run_sim, which can be turned on/off. We would also need a wait at the end of each iteration, since the loop now controls its own timing.

Then the for loop from the original demo code becomes (pseudo-code):

while (1<2){ //always true
    if (should_run_sim){
        //run the simulation code from the top for loop here
    }
    //delay before looping again
    let delay = time::Duration::from_secs_f64(1.0 / 60.0);
    thread::sleep(delay);
}

And then we can switch the sim on and off externally by changing the bool.

The benefit here is that we only need one static mut (the bool). The trade-off is that the loop must run its own "wait" so it advances at the desired time rate.
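In real code I imagine the switch would be something like this (a sketch, untested - from what I read an AtomicBool can live in a plain static, no mut needed):

use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

// The on/off switch; a plain static works because atomics have interior mutability.
static SHOULD_RUN_SIM: AtomicBool = AtomicBool::new(false);

fn set_on_off(on: bool) {
    SHOULD_RUN_SIM.store(on, Ordering::Relaxed);
}

fn run_sim_loop() {
    loop {
        if SHOULD_RUN_SIM.load(Ordering::Relaxed) {
            // run the simulation step from the original for loop here
        }
        // delay before looping again (roughly 60 Hz)
        thread::sleep(Duration::from_secs_f64(1.0 / 60.0));
    }
}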

QUESTION

What is the correct solution? Are both these approaches valid? Or is there something else I'm not thinking of?

Edit: I think I must keep the static muts to a minimum and wrap them in Arc<Mutex<_>>s.

So maybe my best approach is to use the second option where I am letting it loop but controlling it on/off with a Mutex bool.


Please avoid using static mut; it is too easy to shoot yourself in the foot with that. Create the object you'd put into the static mut early in your program and pass it by mutable reference to the functions that need mutable access to said object.

Rust has the loop {} expression for this.

Note that by putting Arc<Mutex<_>> inside the static slot, you don't need the static to be mutable, because with Arc<Mutex<_>> you get multithreaded interior mutability.
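A minimal, self-contained sketch of that shape (Simulation here is just a hypothetical stand-in for the Rapier sets and pipeline from your example):

use std::thread;
use std::time::Duration;

// Stand-in for RigidBodySet, ColliderSet, PhysicsPipeline, etc.
struct Simulation {
    step_count: u64,
}

// Mutable access comes in through the reference; no static needed.
fn advance_simulation(sim: &mut Simulation) {
    sim.step_count += 1;
}

fn main() {
    let mut sim = Simulation { step_count: 0 };
    loop {
        advance_simulation(&mut sim);
        thread::sleep(Duration::from_secs_f64(1.0 / 60.0));
    }
}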


Thanks @jofas. But that only goes so far. One could perhaps do as you say if the Rust code were completely self-contained and everything just passed references into everything else inside.

But I have an external program that is controlling this Rust program; it must push data into the simulation and retrieve data from it. Which means I need some way of getting state into the Rust functions and back out of them.

It appears option #2 for my code example is best, as I only need (in that simple case) one bool to turn the simulation on/off. This can be improved by using an Arc<Mutex<_>>, I think?

I can presumably make this a static mut Arc Mutex bool called on_off_switch (however you create such a thing?).

Ie. I can have something like this:

static mut Arc Mutex String sim_state; //to write state and read from it for simulation (say json)
static mut Arc Mutex bool on_off_switch; //however you declare this thing. how? initialize it here also?

fn set_on_off(on_or_off : bool){
    let mut bool_switch = on_off_switch.lock().unwrap();
    bool_switch = on_or_off; //updates the switch I think
}

fn get_room_state(){
    let sim_state_str = sim_state.lock().unwrap();
    return sim_state_str; //to get out the room state out from this system
}

And for the running loop:

while (1<2){ //always true
    let mut bool_switch = on_off_switch.lock().unwrap(); //lock it while simulating if needed
    if (bool_switch){
        //run the simulation code from the top for loop here

        //write new room_state at end
        let sim_state_str = sim_state.lock().unwrap(); //lock it to write to it 
        sim_state_str = new_sim_state; //update it now here

    }
    //delay before looping again
    let delay = time::Duration::from_secs_f64(1.0 / 60.0);
    thread::sleep(delay);
}

I suppose that is the best solution. In this case I am maintaining thread safety, right? While still allowing data to be written into and read out of the program under external control.

What would be the objects I am describing above in real code?

ie.

static mut Arc Mutex String sim_state; 
static mut Arc Mutex bool on_off_switch;

I am referring to the types described here, but I don't know the correct syntax to declare them in a form like this (i.e. as static mut, I presume?). I could initialize them on the first run of a start function if needed, or ideally right at the global declaration?

Thanks for any help.

I don't see how mutable references, or *mut pointers across FFI boundaries, don't apply in this situation, or why they would be inferior to setting a global.

What you probably want to declare is something like this:

use std::sync::Mutex;

static SIM_STATE: Mutex<String> = Mutex::new(String::new());
static ON_OFF_SWITCH: Mutex<bool> = Mutex::new(false);
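Your earlier function sketches would then look roughly like this (untested; note the * to write through the lock guard, and the clone to move the String out):

fn set_on_off(on_or_off: bool) {
    // Lock the mutex and overwrite the value it guards.
    let mut bool_switch = ON_OFF_SWITCH.lock().unwrap();
    *bool_switch = on_or_off;
}

fn get_room_state() -> String {
    // Clone the contents out so the lock is released immediately.
    SIM_STATE.lock().unwrap().clone()
}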

Thanks again @jofas. You have been very helpful. I appreciate it. Do I not need to wrap them in Arc? That is what is shown here:

If so, would that become then:

static SIM_STATE: Arc<Mutex<String>> = Arc::new(Mutex::new(String::new()));
static ON_OFF_SWITCH: Arc<Mutex<bool>> = Arc::new(Mutex::new(false));

Either way, I presume I no longer need mut in this case, because I am not changing the Mutex itself after that, only what it contains (i.e. the bool or String inside)?

Arc is needed for shared ownership (i.e. when you need to pass something somewhere that requires it to be 'static), which you don't need when you have a static global already satisfying 'static.

Arc::new is not const (it involves a heap allocation), so you can't put it in a static directly (which can only be initialised by a constant expression), you'd need to wrap the Arc in a LazyLock or similar.
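If you did want the Arc anyway, a minimal sketch of that wrapping would be:

use std::sync::{Arc, LazyLock, Mutex};

// The closure runs once, on first access, so the non-const Arc::new is fine here.
static SIM_STATE: LazyLock<Arc<Mutex<String>>> =
    LazyLock::new(|| Arc::new(Mutex::new(String::new())));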

Yes, that is what is known as interior mutability in Rust.


The approach that I would take is to start by writing a struct that encapsulates all of these different parameter types and exposes a simplified interface tailored to your specific application without any of the background handling:

pub struct Physics {
    rigid_body_set: RigidBodySet,
    collider_set: ColliderSet,
    // etc…
}

impl Physics {
    pub fn new(state: &str) -> Self {}
    pub fn step(&mut self) {}
    pub fn get_state(&self) -> String {}
    // Anything else you need…
}

Once you have this, you can write whatever tests you need to verify that it’s working properly and solve the problem of running it in the background separately, whether that’s by calling step on a static PHYSICS: Mutex<Physics> during the main game loop or starting up a background thread that you communicate with via atomics, Mutexes, or something like mpsc channels.
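For the background-thread flavour, a rough sketch (assuming the Physics struct above, with step and get_state as sketched) could look like:

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Commands the rest of the program can send to the physics thread.
enum Command {
    SetRunning(bool),
    GetState(mpsc::Sender<String>),
    Shutdown,
}

fn spawn_physics_thread(mut physics: Physics) -> mpsc::Sender<Command> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut running = false;
        loop {
            // Handle any pending commands without blocking.
            while let Ok(cmd) = rx.try_recv() {
                match cmd {
                    Command::SetRunning(on) => running = on,
                    Command::GetState(reply) => {
                        let _ = reply.send(physics.get_state());
                    }
                    Command::Shutdown => return,
                }
            }
            if running {
                physics.step();
            }
            thread::sleep(Duration::from_secs_f64(1.0 / 60.0));
        }
    });
    tx
}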


Thank you @2e71828 and again @jofas ! I am starting to see how this could be done. @2e71828 I think you just taught me how to make a Rust "class" :slight_smile:

Okay, so your suggestion @2e71828 would allow the locking and unlocking of the whole Physics struct (which would be the most general way to ensure safety on it).

To extend this concept: what if, instead of just having one static PHYSICS: Mutex<Physics>, I want to have a dictionary of them? I actually need to run hundreds or thousands of these. The dictionary could be Mutexed as well, since adding or removing simulations from it needs to be safe too.

The idea would be that permanent state is then held in:

STORAGE STRUCTURE:
//=Mutex for Dictionary (static)
//====Dictionary
//========Mutex for Physics 1
//============Physics 1
//========Mutex for Physics 2
//============Physics 2
//========Mutex for Physics 3
//============Physics 3

This might be then declared as:

static PHYSICS_DICT: Mutex<Dict<Mutex<Physics>>> = 
     Mutex::new(Dict::<Mutex<Physics>>::new());

And in usage we could have my Physics "class" as you suggested:

//PHYSICS "CLASS"
pub struct Physics {
    rigid_body_set: RigidBodySet,
    collider_set: ColliderSet,
    // etc…
}
impl Physics {
    pub fn new(state: &str)->Self {}
    pub fn step(&mut self) {}
    pub fn get_state(&self)->String {}
    pub fn destroy(&self) {}
    // Anything else you need…
}

And in terms of my global functions to get things in/out and manipulate all this:

//ADD NEW SIMULATION
fn add_new_sim(sim_id: String) -> () {
    let new_physics_mutex = Mutex::new(Physics::new(sim_id)); //pass in sim_id perhaps 
    let dict_unlocked = PHYSICS_DICT.lock().unwrap();
    dict_unlocked.add (sim_id.to_string(), new_physics_mutex); //add to dictionary
}
//GET SIMULATION STATE
fn get_sim_state(sim_id: String) -> String{
    //no need to unlock dictionary mutex, read only, but I think I must:
    let dict_unlocked = PHYSICS_DICT.lock().unwrap();
    let physics_mutex = dict_unlocked.get(sim_id.to_string()).unwrap();
    let physics_struct = physics_mutex.lock().unwrap(); //get the actual physics sim now
    return (physics_struct.sim_state).to_string(); //hypothetically if stored as string inside struct
}
//ADVANCE A SIMULATION
fn advance_simulation(sim_id: String, delta_time: f64) -> (){
    //again need to unlock twice
    let dict_unlocked = PHYSICS_DICT.lock().unwrap();
    let physics_mutex = dict_unlocked.get(sim_id.to_string()).unwrap();
    let physics_struct = physics_mutex.lock().unwrap();
    physics_struct.advance_sim(delta_time);    //advance now
}

Is that a possible structure? That is, I lock the dictionary Mutex when adding/removing Physics structs from the dictionary, and lock each individual Physics sim when manipulating or reading that simulation.

The biggest problem with such a structure, as far as I can see, is that the dictionary Mutex MUST be locked in order to get at or use anything inside. So this would rapidly create a bottleneck and force all the simulations into basically single-threaded operation (when they don't need to be). I really only need to lock the dictionary when I add or remove elements (and even then perhaps I can find a reasonably safe unsafe way to do this, though I'm not sure how...). Thoughts?

I will need to do some testing. But I would just like to make sure this would even hypothetically work ...

Thanks again. :slight_smile:

Another idea I just thought of that would allow me to avoid the Mutex on the global dictionary (vector) would be to initialize it with a certain number of Mutex<Physics> objects inside and then simply not add or remove any after that. Then I don't need a Mutex on the vec/dict at all.

If possible, I could do something like this to initialize 100 Mutex objects in a static vector:

static PHYSICS_VEC: Vec<Mutex<Physics>> = vec![<Mutex<Physics>>::new(); 100];

Or if that's not possible (to prefill the Mutex objects on declaration), I could do:

static mut init_done_bool : bool = false;
static PHYSICS_VEC: Vec<Mutex<Physics>> = Vec::<Mutex<Physics>>::new();
fn init_physics() {

    unsafe{
        for i in 0..100 {
            let new_physics = Mutex::new(Physics::new()); 
            PHYSICS_VEC.add(new_physics);
        }
        init_done_bool  = true;
    }
}

Then I just need to wait until init_done_bool == true before running anything, and access the Physics structs the same way as above, but based on their index i in the vector. That would be the only unsafe piece.

This should then allow safe multithreading, as the vector is completely read-only from here on out. I suspect, though, that I need to make it:

static PHYSICS_VEC: Vec<Arc<Mutex<Physics>>> 

Since there will now potentially be multiple threads accessing each Mutex<Physics>, and there is no static directly on that Mutex.

I think this is what I will do. Thoughts?

Yes; this is a problem with Mutex<Collection<Mutex<…>>>, but there are a couple of similar alternative strategies that can work, such as:

  • RwLock<Collection<Mutex<…>>> will let you lock the inner Mutexes with only a read lock, which allows concurrent access. Adding or removing simulations will require everything to stop, though, so that you can get the necessary write lock.
  • Mutex<Collection<Arc<Mutex<…>>>> will let you ‘break out’ the inner Mutexes by copying their Arcs; a rough sketch of this one follows the list. This will let them run completely independently, but you’ll need to find some other way to communicate that they should stop; the corresponding service thread will be able to keep things going indefinitely regardless of what happens to the index.
  • You could also set up a system that transfers ownership of each simulation to a worker thread and set up a pair of channels for control/result information without sharing the simulation itself. That’s significantly more involved to set up but is likely the most flexible in terms of performance characteristics.
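For the second option, a minimal sketch (untested; it assumes a HashMap as the dictionary, a LazyLock so the map can be built at first use, and the advance_sim method from your sketch):

use std::collections::HashMap;
use std::sync::{Arc, LazyLock, Mutex};

static PHYSICS_DICT: LazyLock<Mutex<HashMap<String, Arc<Mutex<Physics>>>>> =
    LazyLock::new(|| Mutex::new(HashMap::new()));

fn advance_simulation(sim_id: &str, delta_time: f64) {
    // Hold the outer lock only long enough to clone the Arc out…
    let physics = PHYSICS_DICT.lock().unwrap().get(sim_id).cloned();
    // …then lock just this one simulation; the dictionary is free again for other threads.
    if let Some(physics) = physics {
        physics.lock().unwrap().advance_sim(delta_time);
    }
}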

OnceCell/OnceLock implement this unsafe pattern for you, so that there’s no chance you can trigger UB from messing it up: They let you use a shared & reference once to initialize the value, and then will happily give out any number of shared references to their content afterwards.
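For the init-once vector you sketched, that would be roughly (untested; Physics::new() here is assumed to take no arguments, and get_unit is just a hypothetical accessor):

use std::sync::{Arc, Mutex, OnceLock};

// The vector is created exactly once; afterwards get() hands out shared references.
static PHYSICS_VEC: OnceLock<Vec<Arc<Mutex<Physics>>>> = OnceLock::new();

fn init_physics() {
    PHYSICS_VEC.get_or_init(|| {
        (0..100)
            .map(|_| Arc::new(Mutex::new(Physics::new())))
            .collect()
    });
}

fn get_unit(i: usize) -> Arc<Mutex<Physics>> {
    Arc::clone(&PHYSICS_VEC.get().expect("init_physics() not called yet")[i])
}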


Can you elaborate on this? Do you simply mean using this code as a library, or some sort of IPC or API? While sometimes using a static can simplify an API, I can't think of a context that would require them other than having to meet an existing API.


I am calling into this Rust from an external program (the Erlang side), so I am limited in what I can do. Async and threading are not good options. But simple Rust code is easy and effective.

It looks like the OnceLock options require thread spawning, which at this point I'm not sure I can do safely. I tried to implement some Rust async, but the outer program does not know when the async work finishes inside Rust when it invokes the functions, so that is not great. It wouldn't work immediately, at least. I don't think there is any implementation for that.

I can try the OnceLock, but realistically it doesn't make any practical difference. This unsafe is so simple it's impossible for anything to actually screw up.

Thinking about it further, I don't even need to lock the global initialization function with the unsafe bool. Since I am invoking this externally, and it is a synchronous function, I can just be sure it is run first and then I am safe.

So I think my best (simple) solution is this one for initialization:

static PHYSICS_VEC: Vec<Arc<Mutex<Physics>>> 
fn init_physics() {
        for i in 0..100 {
            let new_physics = Arc::new(Mutex::new(Physics::new())); 
            PHYSICS_VEC.add(new_physics);
        }
}

Then I would intentionally make my new function inside Physics impl empty (so it is essentially cost free). To actually initialize each Physics unit (as needed), I would then add something like unit_initialized = false and a separate real function for initialize_unit() inside each Physics struct/impl.

That way this global initialization costs me virtually nothing as it is just filled with structs of empty references and cheap primitives. No major work is then done on global initialization.

When I access each unit of the vector (lock any given Arc<Mutex<_>> to do something to it), I can put in a check to see if unit_initialized == true inside, and if not, run the true unit initialization (which will take up some more memory, so I only want to do it on the units I am actually using).

The Physics.destroy() function would destroy all the objects that were created inside the struct by true unit initialization and set this unit_initialized bool back to false for future checks.

I think this is actually a pretty easy solution. If each Physics struct has say 12 members, at ~4 bytes per reference or primitive this is only 52 bytes or so (including the outer Physics reference), which means 52 KB for 1000 of them and 520 KB for 10,000. I strongly doubt I can run more than 10,000 on one system, and this is negligible CPU/memory cost to hardcode the global vector like this, so it seems gratuitous then to do anything more complicated.

Thanks for all your help everyone. I think I will try this approach and see what happens. :slight_smile:

(I haven't been following this thread closely, and you edited the post while I wrote this, but anyway...)

The init_done_bool was there because other threads are going to be checking it to see if initialization is done, I take it? When you set it to true, that's a data race, which is UB.

You could use AtomicBool... but really, just use OnceLock. That's what it's there for.

That was my original instinct, but like I said, I realized I don't need it if I am now no longer changing the vector after creating it.

I am starting the whole system from a single-threaded process outside of Rust entirely. As long as my init_physics() that adds the Physics elements is a synchronous call (which it is), I can just run it first, and once it is done I know I can freely use the vector and all the other Rust functions I am building.

Nothing will be touching that global vector until I run that init_physics() function first, so there is nothing to worry about there. I just need the locking on each Arc<Mutex<Physics>> member of the vector as described. The vector will not change once initialized so there should be nothing more to do with it.

I will only be reading from the global vector, to, say, get Arc<Mutex<_>> number i and then work on that at any given moment. Given the vector is 100% read-only from that point forward, there should be no problems, right? I do not need protection against multiple threads trying to read the vector, do I?

My understanding is that multiple threads can read something safely; it is only unsafe when multiple threads must write to an object, as per "Can multiple threads access a vector at different places?" (a C++ question) on Stack Overflow.

Simple and easy. Should work fine I think. :slight_smile:

I think you're looking for resource objects, judging by the docs. There's normally something like that in FFI situations, and this one looks decently done.


You don't need multiple writes for a data race. One write and one read will also be enough.


It sounds like you shouldn't need a central index on the Rust side at all, and instead can let the Erlang side handle keeping track of the various simulations that are in use using the resource mechanism that @simonbuchan pointed out. After taking a quick look at one of the examples, you might want to consider organizing things like this:

struct Physics {
    // as before
}

impl Physics {
    // pure Rust API for a single simulation
}

struct PhysicsResource(Mutex<Physics>);

#[rustler::resource_impl]
impl Resource for PhysicsResource {}

#[rustler::nif]
fn new(env: Env, ...) -> ResourceArc<PhysicsResource> {
    let physics = Physics::new(...);
    ResourceArc::new(PhysicsResource(Mutex::new(physics)))
}

#[rustler::nif]
fn step(env: Env, physics: ResourceArc<PhysicsResource>, dt: f64) {
    let mut inner = physics.0.lock().unwrap();
    inner.step(dt);
}

// etc...

This would be more idiomatically done in Rust by storing Option<Physics> inside the mutex and initializing it to None. This will prevent you from accidentally forgetting to check the initialization flag and thereby accessing uninitialized values.
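Roughly like this, sketching it against the 100-slot vector idea (untested; Physics::new() and the helper names are assumptions):

use std::sync::{LazyLock, Mutex};

// Every slot starts out empty and is filled in on first use.
static PHYSICS_VEC: LazyLock<Vec<Mutex<Option<Physics>>>> =
    LazyLock::new(|| (0..100).map(|_| Mutex::new(None)).collect());

fn with_unit<R>(i: usize, f: impl FnOnce(&mut Physics) -> R) -> R {
    let mut slot = PHYSICS_VEC[i].lock().unwrap();
    // get_or_insert_with replaces the manual `unit_initialized` flag.
    let physics = slot.get_or_insert_with(|| Physics::new());
    f(physics)
}

// Destroying a unit is then just putting the None back.
fn destroy_unit(i: usize) {
    *PHYSICS_VEC[i].lock().unwrap() = None;
}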

Yeah but again, nothing is writing after it is made.

Oh, very good. I will consider that - using Option<Physics>. Yes, I have seen that is how Rust does nulls, with Option set to None. That will perhaps be easier over time.

Try 1: Static Mut Vec

Of interest, my first try was to create the static vector as:

static mut PHYSICS_VEC: Vec<Arc<Mutex<Physics>>> = Vec::new();

fn init_physics_vec(target_size: i64) -> String {
    unsafe {
        while (PHYSICS_VEC.len() as i64) < target_size {
            let i: i64 = PHYSICS_VEC.len() as i64;
            let new_physics = Physics::new(i);
            let wrapped_physics = Arc::new(Mutex::new(new_physics));
            PHYSICS_VEC.push(wrapped_physics);
        }
        let final_size = PHYSICS_VEC.len();
        return format!("physics vec built, size: {final_size}");
    }
}

This seemed to work. Again, I would never access the vector until this is COMPLETE and would not change it afterwards, so I know this would be safe. The problem is that it was already getting annoying: any time I tried to access the vector, Rust demanded unsafe on that code, and I would have to ugly up all my code with unsafe blocks for every access, due to the static mut.

Rust is the most paranoid language I have ever seen. I bet the creators had OCD and every pencil is always lined up perfectly straight on their desks. :smiley:

On the other hand, I can avoid this if I hardcode the creation to a certain size. As noted, this is fine with me (and I can initialize to None with Option, so there is no significant cost to making it potentially "too big" on creation).

Try 2: Static Vec (failed)

So I tried:

static PHYSICS_VEC: Vec<Arc<Mutex<Physics>>> = vec![Arc::new(Mutex::new(Physics::new(0))); 10]; 

However this doesn't work. It tells me:

note: calls in statics are limited to constant functions, tuple structs and tuple variants
= note: consider wrapping this expression in std::sync::LazyLock::new(|| ...)
= note: this error originates in the macro vec (in Nightly builds, run with -Z macro-backtrace for more info)

Try 3: LazyLock (success)

So I tried using ChatGPT to figure out the LazyLock solution (as that was mentioned above by @jofas), and shockingly ChatGPT was very good. It's the first time I've used it like this. After a prompt and some editing on my part to put in i, I ended up with:

static PHYSICS_VEC: LazyLock<Vec<Arc<Mutex<Physics>>>> = LazyLock::new(|| {
    (0..100) 
        .map(|i| Arc::new(Mutex::new(Physics::new(i))))
        .collect()
});

And this does seem to work with no unsafe warnings. The LazyLock also doesn't seem to actually be part of the final type, right? I mean, the end result still seems to be Vec<Arc<Mutex<Physics>>>. I can access it like this without any modifications, so it must be (or the LazyLock is otherwise transparent after initialization in some manner):

    let index_unit: Arc<Mutex<Physics>> = Arc::clone(&PHYSICS_VEC[i_index as usize]); 
    let unit_unlocked = index_unit.lock().unwrap();
    return unit_unlocked.get_unit_i();

Is that correct for how to get the elements from the vec and run functions on them? Seems to work ...

Explaining LazyLock with (Creepy) ChatGPT

Then I wanted to know more, so I asked ChatGPT: "what is the meaning of || in lazylock creation in rust?"

And it told me:

In Rust, the || syntax is used to define a closure. A closure is an anonymous function that can capture its environment. In the context of creating a LazyLock, the || indicates that the closure takes no parameters.
LazyLock::new: This function expects a closure as its argument. The closure will be called to initialize the value the LazyLock will store.
|| { ... }: This is the closure itself. The empty parentheses || indicate that it takes no parameters. Inside the braces { ... }, you write the code to initialize the Vec<Arc<Mutex<Physics>>>.

Then I asked: Does writing (0..100) map to i in rust?

And it told me:

in Rust, when you write (0..100).map(|_| ...), the |_| syntax is defining a closure that takes one argument, which is represented by the underscore _. This means that the argument is ignored in the body of the closure.
In this case, i will take on each value from the range 0 to 99 as the map function iterates through it.

That was fucking weird man. :smiley: I feel helped but also... My skin is crawling a bit. :grin:

LazyLock with for loop? Possible?

I am still a bit curious though. When I tried:

static PHYSICS_VEC: LazyLock<Vec<Arc<Mutex<Physics>>>> = LazyLock::new(|| {
    for i in (0..100) {
        .map(|i| Arc::new(Mutex::new(Physics::new(i))))
        .collect()
    }
});

This obviously doesn't work. But I would prefer such a structure, if possible, just for a more language-generic looking style. Is there a way to write the same LazyLock with a for loop? Just out of interest? When I asked, ChatGPT wanted me to use:

[dependencies]
lazy_static = "1.4"
tokio = { version = "1", features = ["full"] }

use std::sync::{Arc, Mutex};
use lazy_static::lazy_static;

struct Physics {
    value: usize,
}

lazy_static! {
    static ref PHYSICS_VEC: Vec<Arc<Mutex<Physics>>> = {
        let mut vec = Vec::new();
        for i in 0..100 {
            vec.push(Arc::new(Mutex::new(Physics { value: i })));
        }
        vec
    };
}

But I am curious if the existing code can be modified without those dependencies to use a for loop. Or is the existing LazyLock code the only way to do this? Fine if so. Just curious for learning.
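I imagine the answer might be something like this - the same LazyLock, just building the Vec with a for loop inside the closure (untested):

static PHYSICS_VEC: LazyLock<Vec<Arc<Mutex<Physics>>>> = LazyLock::new(|| {
    let mut vec = Vec::with_capacity(100);
    for i in 0..100 {
        vec.push(Arc::new(Mutex::new(Physics::new(i))));
    }
    vec
});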

Thanks for any thoughts.