How does async differ from a Rc<dyn Fn() -> T>?

If you are up for the long story, Jon Gjengset has a four-hour-long presentation, "The What and How of Futures and async/await in Rust", here:

He gives what I believe to be a pretty good description of what it is all for and how it works under the hood.

As someone succinctly put it:

Threads are for doing lots of work at the same time.

Async is for waiting on lots of things at the same time whilst doing nothing.

Or something like that...


This sounds a lot like Clojure's core.async:

Is the following mental model correct:

When using Rust's async, we take a function and chop it up at "breakpoints" where it can yield, wait, etc.

Then we "invert" the function / compile it into a giant match statement:

pub enum StateForSomeFunc {
  BreakPoint1 { /* state needed to continue executing from breakpoint 1 */ },
  BreakPoint2 { /* state needed to continue executing from breakpoint 2 */ },
}

There's some Finite-Automata like setup where breakpoints, after some execution, can jump to other breakpoints. Then, when we execute the async func, it jumps around the breakpoints.

Is this the right mental model?


That is indeed the bird's-eye view of what is going on.

The essential problem is how to do this (in pseudocode):

loop forever
    read keyboard
    do something with keyboard input

loop forever
    read network stream
    do something with network input

loop forever
    wait for some time
    do some timed processing

Where all of those reads and waits hang up (block) the flow of execution until some data arrives or event happens.

That structures our code nicely but how do we get on with the second and third loops when we are stuck in the first one?

Traditionally we would do this by running each of those loops in a thread. Then our OS kernel takes care of switching context from one thread to another: when some data or event arrives, the relevant thread becomes ready to run and program flow is directed there by the kernel. While a loop is blocked waiting for data, the kernel can run other threads instead if they are ready.

This threading requires a lot of state to be maintained by the kernel for each thread (the program counter, the stack, etc.), which is wasteful. It also takes a lot of time to swap context from thread to thread.

"Green threads", "goroutines", etc. are a way to do this threading within the application, without the heavyweight mechanism of kernel threads: smaller stacks, less processor state saved, no kernel calls, whatever, I don't know exactly.

The whole async thing tackles this by chopping what looks like regular code up into lots of pieces that will get called as and when events happen so that the pieces can process the data arriving with that event.

Of course this means that if you do actually write

loop {

In your code you will not just have hung up that particular async "thread" but likely your entire program!
Unlike the situation with real threads.

Someone correct me if I'm way off the mark in this description.


Does this mean that all the "breakpoints" / yields / waits of an async function have to be in the LITERAL source code of the async function?

I.e. we have to do:

async fn foo() {
  breakpoint for foo;
}

but we can't do

async fn bar() {
  breakpoint for foo;
}

async fn foo() {
  bar();
}

So my reasoning for this is that if (1) we aren't carrying a lightweight stack around and (2) we are compiling everything down to a giant match, this doesn't seem to play well if a "breakpoint" is in a function we call.

Instead, this seems to require that all "breakpoints" be directly in the source code of the async fn.

Is this limitation correct?

Perhaps "breakpoints" is not quite the right way to visualize it. It's not as if there is some mark made in your code that is a point to resume at some time later, as would be the case with threads, where the value of the program counter is saved as a point to get back to.

Rather when you call an async function it does actually return immediately, no blocking, no rescheduling. However it has not actually done anything much except return a "future" object that will hold the data you want at some point in the future, and it has told something, somewhere, somehow to fill that future in when the data arrives.

See description here:

And this presentation:

If the breakpoint is in an async function call, then it would look something like having one of the states in the outer enum contain a field storing the state machine of the inner future, and when calling it, passing control on to the inner state until it says it's done.

enum Future1 {
    State1 { ... },
    State2 { inner: Future2, ... },
}

enum Future2 {
    State1 { ... },
    State2 { ... },
}

@alice : Thank you, this is very informative.

  1. (Being pedantic -- rustc complains about this all the time.) Since Future2 might refer back to Future1, to avoid infinitely sized enums,

State2 { inner: Future2, ... }

would actually have to be:

State2 { inner: Box<Future2>, ... }

  2. If we take a step back, is this essentially "faking stack frames via nested enums"? If we squint a bit, each enum Future* is basically a stack frame + a pointer to a particular point in the function.

No, there's no Box. That's the whole idea. Futures require pinning before they can be polled, so fields referencing other fields are OK inside futures because of compiler magic.

And yes, they're faking having a stack frame, but it isn't actually creating stack frames and moving them around, which is why it works.

Of course there are some hazards with recursive functions, but you just get errors.


Thanks for correcting my misunderstanding regarding Box / Pin on Futures.

The link you provided talks about a function recursively calling itself.

There's something I still don't understand. We have a bunch of async-fns.

Are these functions allowed to recursively call each other (as long as they do not call themselves), i.e.
foo() calls bar(), bar() calls cat(), cat() calls foo()?

Or is the "which function does this function call" graph of async fns required to form a directed acyclic graph?

EDIT: Part of this confusion is that I don't understand how "Pin" gets around the "recursion" => "infinitely sized Futures" problem.

Sorry, I was a bit inaccurate. Pinning doesn't help with loops and you will get a compiler error if your call graph is not a DAG.


Yeah, recursive calls don't work without a level of indirection. Since future combinators wrap each other similarly to iterators, the future that represents a recursive call would be a recursive type, and that becomes infinite without a level of indirection: Future {next: Future { next: Future {….

In my current mental model, I still don't understand how we can have breakpoints in "sub functions."


async fn foo
fn n0
fn n1
fn n2
async fn bar

now, suppose foo calls n0, which calls n1, which calls n2, which calls bar.

In this case, just having the Future on foo and the Future on bar doesn't seem enough, because we also need the info "when function call bar returns, which line do we resume on for n2; when n2 returns, where do we resume on for n1; when n1 returns, where do we resume on n0"

While you can call bar from n2, doing so doesn't do anything until the future returned by bar is awaited. Since you cannot await a future from a non-async function, you would have to either block on it, somehow return it back into foo so it can be awaited there, or perhaps spawn it on the executor of foo and return immediately.

Note that blocking on bar inside n2 would be a very bad idea as it would use up a whole thread on the executor running foo, as foo is not able to pause while it's waiting for bar.


It is possibly best to view "await" as your breakpoints. You're required to make the nest of functions all be async and each call to include .await. (A more complex spawn approach exists, as @alice mentions.)

I think first you need to be thinking that calling an "async fn" doesn't run the body but instead returns a structure that implements Future, which is almost equivalent to a regular function that returns a closure.

Perhaps this would be a good example:

async fn foo() {
    // n1 is not async, so no await needed
    n1();
}

fn n1() {
    n2();
}

fn n2() {
    // you cannot use await here
    println!("Got to n2");
    bar();
}

async fn bar() {
    println!("Got to bar!");
}

Try running the above. Notice how bar doesn't print anything like you might expect it to. Instead you just get this warning:

warning: unused implementer of `core::future::future::Future` that must be used
  --> src/
11 |     bar();
   |     ^^^^^^
   = note: `#[warn(unused_must_use)]` on by default
   = note: futures do nothing unless you `.await` or poll them

The warning correctly informs you that the future returned by bar is immediately dropped, but you have to poll it somehow before it calls println. The easiest way to poll a future is to await it from another async function, but n2 is not async, so that is not possible in this case.

That is a sweet example.

However I have no idea where I would go with it.

Could you possibly, maybe, expand on it into a most minimal example of waiting on a couple of futures. Say just simple timeouts that print something when done.

That might give me the confidence to look into this whole async/await rabbit hole further.

I think this is a good point for me to stop making bad analogies and start reading documentation.

There seem to have been many proposals floating around (and lots of outdated info?).

Is there a definitive documentation / examples on how the current async/await works ?

Did you check those videos I linked to above already? They are recent and seem to be describing the state of the art.

The pending macro is what I would describe as just a breakpoint. Futures rely on wake to be useful. No idea if there is a practical use of this macro.

async fn foo() {
    // ...
}

fn main() {
    // ...
}

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.