Are Async Hooks Idiomatic in Rust?

My friend and I are working on a Discord framework. It's event driven, so the flow would be: initialize a Context, then handle events as they come in.

Now I have two questions. The first is how to store the Context: Arc and static are my only viable options, but since it should live as long as the program, would a static be better, with OnceCell or something? It would also mean users don't have to put a ctx parameter on every single function.

Second is how to actually handle events:

  • methods on Contexts such as ctx.on_some_event(my_some_event_handler)
  • or having a stream of events and having them spawn a thread on each, match on the event, and run their own functions
  • or having them provide a struct that implements a trait with optional methods for each event, with ctx.event_handler(MyEventHandler) and impl EventHandler for MyEventHandler { fn on_some_event(some_event: SomeEvent) { … } }
  • something else you can suggest

We want it to have as little boilerplate as possible, but we don't want it to be unidiomatic either, so what are your suggestions?

You might want to look at how axum handles passing state and registering handlers. In my opinion it's the async HTTP framework that comes closest to feeling genuinely idiomatic.

(Note I linked to the 0.6.0 release candidate not the currently released version because it has some changes to make accessing state in routes a little easier).

When working with events it's typical that everything is wrapped in Arc.

Event handlers could be FnMut(EventType), which you'll have to store as Box<dyn FnMut(EventType)>. Closures can hold whatever context they need, so you don't need to pass C-style user-context arguments.


I would say you should very strongly avoid using callbacks, at least directly - you will be doing nearly nothing but fighting the borrow checker.

Instead, I suggest using a channel to send the events you get into an event processor:

enum Event {
    Foo,
    Bar(u32),
}

let (tx, rx) = std::sync::mpsc::channel(); // there are better ones, e.g. crossbeam

// or better, register on whatever ctx would be wrapping
ctx.on_foo({ let tx = tx.clone(); move || tx.send(Event::Foo).unwrap() });
ctx.on_bar({ let tx = tx.clone(); move |value| tx.send(Event::Bar(value)).unwrap() });

// the event processor: recv() returns Err once every sender is dropped
while let Ok(event) = rx.recv() {
    match event {
        Event::Foo => { /* ... */ }
        Event::Bar(value) => { /* ... */ }
    }
}
On the plus side, this lets you send the senders (the txs) to other threads with little trouble.

So if I follow the .route() pattern, it'd be like ctx.handler(Event::SomeEvent, handle_some_event)? Also, in this case handle_some_event would be async; I'm not sure how complicated that'd make things.

How would the context be accessible? Would it be a field of Event?

Ah, so my stream-of-events idea, but using an event processor? Is there a reason the latter is better than the former? In this case rx and tx would be fields of the Context, right?

When every handler is a closure, it can access whatever it wants by capturing variables. It's the same principle as std::thread::spawn, which provides no explicit context argument to the thread.


So first, I think I misread your description: I thought on_some_event() was an already-implemented external event you have to handle. I think the advice still applies, though, so long as you're careful to avoid confusing external events (like, say, "user connected") with internal events that replace global mutation (like, say, "add user to room", "send message to user") - perhaps use a Command type for the latter.

If you did have a context (in the sense of a thing you pass to everything), it would only hold a sender, while the processor (which could just be a loop in main) holds the only receiver. The point here is to avoid needing a context at all, though: give out senders (ideally wrapped up so the actual mechanism is hidden) to whoever needs to send events or commands (instead of mutating state directly), and your receiver can stay in the processor at the top of your app and mutate anything it likes.

It's not the only way to build this, but it's a pretty natural way to avoid having to handle mutation from anywhere, which solves a lot of headaches.

The only real trick is to avoid having any long-lived references: keeping them around essentially requires interior mutability (RefCell and the like), and this design is just a good way to avoid that, by decoupling detecting events from applying responses - which is why having an event loop is so common.

You can instead do things like have a context (or several) with a bunch of mut references that you pass around and (potentially) throw away when you're done, but that tends to be more specialized (renderers, for example). You need to be careful with that: you can quickly trap yourself if you don't split up the things you hold references to correctly, because while you hold that context, nothing else can reference any of those objects or anything in them (without interior mutability, which you should try to avoid).

The short version of the general advice is event driven applications are just hard to write ad-hoc logic for correctly without some careful design (largely because they have to keep state around between events), and Rust makes things that are hard to write correctly much harder to write in general. Try to avoid fighting the language, and you're going to be doing that if you try to pass something that can mutate any part of your application to any part of your application.

Firstly, I can't avoid having a context, because it wraps the HTTP client, the event emitter (which you sometimes have to send commands to), the cache (namely sqlx's PgPool) and most likely more as they come in. I don't understand why mutation is required, though. The general workflow for users is: take the event data (completed from the cache by the library - cache bad, yes, but there's no other choice here since the API isn't stateless), use it to decide what to do (usually an HTTP request, mostly abstracted by the library), and that's about it. As such, there's barely any place where the event data and the context aren't required.

Well, that's what I meant by:

If you are passing a connection, cache, etc. to everything, that seems a bit off: the point of using a channel is that the majority of your application wouldn't need to talk to any of that, only send commands to the central processor, which would.

But it sounds like you're doing something pretty different to what I'm thinking of anyway with "users of the program" - this sounds like a library crate abstracting an API, in which case you shouldn't be trying to handle events at all if you can help it, other than wrapping them up to be more palatable.

An application framework crate generally just takes a bunch of config and then has a run method as its API - the focus is on what's most natural for users handling the events, not the implementation. There are a lot of different options there, and from what I've seen nothing I would call truly "idiomatic". Check out winit, rocket and bevy for some nicer designs, all with different requirements.

I would suggest you split this into the library, which just types incoming and outgoing requests, and the framework, which focuses on making it easy to use the library correctly (and hard to use it incorrectly!) without needing to worry as much about covering every edge case. Things like caching could probably be on either side, or as an additional library, depending on how you feel about it.

It is mostly an abstraction, yeah. They're all together in the context because they mostly have to be. The general flow of the program is: receive an event, deserialize it, update the cache with it, return the cached event (the cached version, because it's more complete than the bare event); then the user does whatever they want with it, which mostly involves making an HTTP request.

I don't really understand why it's necessary to split the Context up; I think it's a nice abstraction over everything. winit comes closest to how event-driven this framework is: what it seems to do is have the event-handling function be passed to it, along with a control-flow argument. That seems a lot like just abstracting over a loop, which I don't think is necessary. Also, I don't know if passing an async fn as a parameter is even possible on stable yet.

As for splitting the crate up, in 99% of cases they'll need all of these crates (if they don't, they should probably look for another crate), so I don't know why that's necessary either.

Winit uses a callback essentially because it's required by the use case of a UI loop: you have to respond synchronously so it knows when you're done and it can repaint, among other reasons. If you don't have that requirement, a stream of events the user can pull from at their leisure is a far more flexible and simpler-to-use API - for a trivial example, how would you use winit and your library together?

For the context case, I was really only saying if you're following the pattern on only handing out a channel sender that commands can be pushed to, then you don't need to hand out the http connection and cache, etc. This has the effect that you can easily clone and store that sender, unlike the context.

Be careful to remember I'm not trying to tell you how your project should work: I'm not the one writing it, I don't have the knowledge about it you do, and so I'm trying to talk in generalities about Rust, so this is merely what kinds of API are easier or harder to write and use in Rust in general. That's as close to the topic as I can get, because to my knowledge there's not any strong idiomatic rules in Rust about dealing with async yet, only things we know are painful.


So you mean separating the senders from the context? That looks like a good idea to make ctx less bloated too. Is anything wrong with using async streams, though? So people would do let event = events.next().await;

It can result in implicit buffering fairly easily, which can be a problem in some cases, but as long as you aren't contorting the library to make it happen, it's nicer, yes!


I guess we’ll go with a channel library, do you have any recommendations?

If you want an async channel, your runtime probably comes with one, e.g. tokio::sync, and there's also the generic futures::channel.

