Moving context into several closures?

I have found a way to move context into several closures, but it looks ugly. I do it with the help of Rc, cloning each variable I need for each closure. In particular, I don't like having to clone every variable for every closure I want to use:

  let mut context = Rc::new( Context { a : 13 } );
  ..

  let context_clone_1 = Rc::clone( &context );
  engine.on_event1( Box::new( move ||
  { 
    println!( "on_event1 : {}", context_clone_1.a );
    ...

  let context_clone_2 = Rc::clone( &context );
  engine.on_event2( Box::new( move ||
  {
    println!( "on_event2 : {}", context_clone_2.a );
    ...

This approach is verbose, and I feel there must be a better way to do it.
Also, uncommenting the line // context_clone_1.a += 1; breaks the program.
What is the proper way of solving problems like this in Rust?

Here is a playground with minimal code.

If you want to avoid this kind of stuff, you need to stop using callbacks. If you must use callbacks, Rc is more or less unavoidable. That said, I've seen some people build macros that can insert the clones for you.

To enable modification of the shared value, you can use Cell or RefCell either around the entire struct or the fields of the struct.

1 Like

Alice, thank you for the clarification that I am on the right track. I can't stop using callbacks for the mini-project I have. Which macros?

You could pass the context as an argument to the closure; then the engine would be responsible for passing it to the closure.

2 Likes

First, for the playground you have, it's relatively easy to avoid callbacks. For example:

fn main() {
    let mut context = Context { a: 13 };
    let mut engine = Engine::new();

    loop {
        match engine.next_event() {
            EventType::Event1 => {
                context.a += 1;
                println!("on_event1 : {}", context.a);
            },
            EventType::Event2 => {
                println!("on_event2 : {}", context.a);
            },
        }
    }
}

pub struct Context {
    a: i32,
}

pub enum EventType {
    Event1,
    Event2,
}

pub struct Engine {
}

impl Engine {
    pub fn new() -> Engine {
        Engine {}
    }
    
    pub fn next_event(&mut self) -> EventType {
        todo!()
    }
}

As for how to build the macro, here's an example:

macro_rules! with_clones {
    ($($clone_me : ident),* ; $body:block) => {
        {
            $(
            let $clone_me = Rc::clone(&$clone_me);
            )*
            Box::new(move || {
                $body
            })
        }
    }
}

fn main() {
    let mut context = Rc::new(Context { a: 13 });
    let mut engine = Engine::new();

    engine.on_event1(with_clones!(context; {
        // context.a += 1;
        println!("on_event1 : {}", context.a);
    }));

    engine.on_event2(with_clones!(context; {
        println!("on_event2 : {}", context.a);
    }));

    engine.listen();
}

This macro lets you list the variables to clone, separated by commas, followed by a semicolon and the body of the closure.

3 Likes

Fantastic, Alice! Thank you for the macro =)

Also, interesting idea with the loop!

In my opinion, a loop with a match on an event type often produces nicer code in Rust, because it integrates better with Rust's ownership model. You don't need to mess with shared values the way callbacks force you to.

2 Likes

@alice isn't it suboptimal to have a match for each instance of an event? Let's say I have a thousand types of events and only a few of them have registered callbacks. Isn't it suboptimal to put every event in the queue and run it through the match?

It's not much different from registering a thousand event listeners just above the loop, is it? If you only need to handle a few events, you can simply add an _ => {} case to the match.

Registering events happens once, during setup. Passing through a match with 1000 arms 100 times a second is costly.

No more costly than dispatching over registered event listeners. Probably even less costly, if the match can be optimized at compile time.

1 Like

Without IO, cache misses are the most common source of slowness, and Box can easily trigger them, especially if you have lots of boxes.

But never trust the human brain on performance; we're awful at guessing it. Always measure on a real machine, with proper tools like criterion.

2 Likes

Well, it depends on how the event listeners are triggered, but the match probably compiles down to some sort of binary search, so finding the right branch takes about 10 comparisons for 1000 branches. It may even compile to a jump table, in which case it takes a single array lookup and a goto.

1 Like

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.