How does async differ from an Rc<dyn Fn() -> T>?

I'm reading , which makes async sound like a big deal. Quoting the post:

An async function, which you can introduce by writing async fn instead of fn, does nothing other than return a Future when called. This Future is a suspended computation which you can drive to completion by .await-ing it. Besides async fn, async { ... } and async move { ... } blocks, which act like closures, can be used to define "async literals".

My question here is -- how does a Future differ from an Rc<dyn Fn() -> T>, where instead of calling .await, we merely invoke the closure with ()?


A regular function keeps running continuously until it returns, and this means it needs to have a thread and a stack for the entire duration of the call, even while it calls other functions or waits for something.

async functions can be "paused", and while they're paused they don't use any threads or stack. Behind the scenes they're not really functions that you call, but objects that can be asked many times "are you finished yet?". For many tasks, especially network-bound ones, it's easier to deal with objects that represent them than with threads and functions waiting on them.

It's sort of like the difference between a for loop that runs until completion vs an Iterator whose next() you can call whenever you want.
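To make the analogy concrete, here is a minimal sketch (function names are made up): a for loop drives the iterator to completion in one go, while calling next() by hand lets you decide when each step runs, much like polling a future.

```rust
// Sum 1..=3 two ways: driven to completion by `for`, and stepped
// manually with `next()` (the iterator analogue of polling a future).
fn sum_with_for() -> i32 {
    let mut sum = 0;
    for x in 1..=3 {
        sum += x;
    }
    sum
}

fn sum_by_stepping() -> i32 {
    let mut steps = 1..=3;
    let mut sum = 0;
    // We decide when each step happens; between calls we could do
    // anything else, which is the iterator version of being "paused".
    while let Some(x) = steps.next() {
        sum += x;
    }
    sum
}

fn main() {
    assert_eq!(sum_with_for(), 6);
    assert_eq!(sum_by_stepping(), 6);
}
```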


@kornel : I don't doubt the accuracy of your statements. However, this has increased the number of questions. Two things that would greatly help me see the difference would be

(1) is there a classical example of something that is easy to implement with async/await but messy to implement with "just return a closure representing 'rest of computation' " ?

(2) is there a high-level doc on how async/await is implemented? It seems like understanding how the object mechanically works will also shed light on what async/await can easily do that "return a continuation represented as a closure" can't easily do.

Throwing out more questions -- how does async/await compare to Go's go-routines ?

Returning a closure representing the rest of the computation is almost the same, but in practice it requires a separate allocation for each returned closure (step). Instead of a new closure per step, Rust builds one big equivalent closure with an internal match statement over the steps, so there's only one allocation instead of many.
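A hand-written sketch of that idea (the names are invented): one enum holds the live state for every step, and a single match advances it, so the whole "rest of the computation" is one value rather than a chain of separately allocated closures.

```rust
// Hypothetical hand-desugaring of a two-step computation. The enum
// carries the state each step needs; `step` is the "one big match".
#[derive(Debug)]
enum AddTwice {
    Start { x: i32 },
    AfterFirst { sum: i32 },
    Done,
}

impl AddTwice {
    // None = not finished yet, Some(v) = final result.
    fn step(&mut self) -> Option<i32> {
        match *self {
            AddTwice::Start { x } => {
                *self = AddTwice::AfterFirst { sum: x + 1 };
                None
            }
            AddTwice::AfterFirst { sum } => {
                *self = AddTwice::Done;
                Some(sum + 1)
            }
            AddTwice::Done => panic!("stepped a finished machine"),
        }
    }
}

fn main() {
    // The entire suspended computation is this one value:
    let mut machine = AddTwice::Start { x: 40 };
    assert_eq!(machine.step(), None); // paused after the first step
    assert_eq!(machine.step(), Some(42)); // finished
}
```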


Goroutines try to work as if they were regular blocking code with a stack for each goroutine (green thread), and rely on standard-library function calls to pause them nicely instead of actually blocking the OS thread. To keep memory usage low, they use tiny stacks that grow as needed (early Go used segmented stacks; modern Go grows a contiguous stack by copying it). C is not compatible with such stacks, and calls to blocking functions that don't cooperate with the green threads cause problems for all goroutines.

Rust used to have "goroutines" before 1.0, but they were removed from the language. Rust didn't want the overhead of swapping stacks when calling into FFI, and didn't want to rely on a special standard library/runtime to handle the green-thread multitasking.

Futures can make progress using regular threads and stack, so their code doesn't need special environment. Future objects contain all the state that needs to survive between steps, so the runtime doesn't need to do any magic stack swapping to preserve their state.

Futures still require their own runtime (executor) and compatible async I/O functions to work gracefully, but that is just a library, not tied to the language runtime, so non-async code doesn't need to pay the cost of it.


If you are up for the long story, Jon Gjengset has a four-hour-long presentation on "The What and How of Futures and async/await in Rust" here:

He gives what I believe to be a pretty good description of what it is all for and how it works under the hood.

As someone succinctly put it:

Threads are for doing lots of work at the same time.

Async is for waiting on lots of things at the same time whilst doing nothing.

Or something like that...


This sounds a lot like Clojure's core.async:

Is the following mental model correct:

When using Rust's async, we take a function and chop it up at "breakpoints" where it can yield / wait on something / etc.

Then we "invert" the function / compile it into a giant match statement:

    pub enum StateForSomeFunc {
        BreakPoint1 { /* state needed to continue from breakpoint 1 */ },
        BreakPoint2 { /* state needed to continue from breakpoint 2 */ },
    }

There's some finite-automaton-like setup where breakpoints, after some execution, can jump to other breakpoints. Then, when we execute the async fn, it jumps around between the breakpoints.

Is this the right mental model?
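As a concrete sketch of that model using the real Future trait (everything hand-written; the no-op waker is just scaffolding so we can call poll() by hand, whereas a real executor supplies a waker that reschedules the task):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A tiny hand-written future: the first poll stops at a "breakpoint"
// (Poll::Pending) and records where to resume; the second poll finishes.
enum TwoStep {
    NotStarted,
    Waiting,
    Done,
}

impl Future for TwoStep {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        match *self {
            TwoStep::NotStarted => {
                *self = TwoStep::Waiting; // jump to the next "breakpoint"
                Poll::Pending
            }
            TwoStep::Waiting => {
                *self = TwoStep::Done;
                Poll::Ready("finished")
            }
            TwoStep::Done => panic!("polled after completion"),
        }
    }
}

// A waker that does nothing, just enough to build a Context by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = TwoStep::NotStarted;
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Pending);
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready("finished"));
}
```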


That is indeed the bird's-eye view of what is going on.

The essential problem is how to do this (in pseudo code)

loop forever
    read keyboard
    do something with keyboard input

loop forever
    read network stream
    do something with network input

loop forever
    wait for some time
    do some timed processing

Where all of those reads and waits hang up (block) the flow of execution until some data arrives or event happens.

That structures our code nicely but how do we get on with the second and third loops when we are stuck in the first one?

Traditionally we would do this by running each of those loops in a thread. Then our OS kernel will take care of changing context from one thread to another, when some data or event happens the relevant thread becomes ready to run and program flow is directed there by the kernel. When a loop is blocked waiting for data the kernel can run other threads instead if they are ready.

This threading requires a lot of state to be maintained by the kernel for each thread (the program counter, the stack, etc.), which is wasteful. It also takes a lot of time to swap context from thread to thread.

"Green threads", "goroutines", etc. are a way to do this threading within the application, without the heavyweight mechanism of kernel threads: smaller stacks, less processor state saved, no kernel calls, whatever, I don't know exactly.

The whole async thing tackles this by chopping what looks like regular code up into lots of pieces that will get called as and when events happen so that the pieces can process the data arriving with that event.

Of course this means that if you do actually write

    loop {
        // no .await anywhere in here
    }

in your code, you will not just have hung up that particular async "thread" but likely your entire program! Unlike the situation with real threads.

Someone correct me if I'm way off the mark in this description.


Does this mean that all the "breakpoints" / yields / waits of an async function have to be in the LITERAL source code of that async function?

I.e. we have to do:

    async fn foo() {
        // breakpoint for foo
    }

but we can't do:

    fn bar() {
        // breakpoint for foo
    }

    async fn foo() {
        bar();
    }

My reasoning for this is that if (1) we aren't carrying a lightweight stack around and (2) we are compiling everything down to a giant match, this doesn't seem to play well when a "breakpoint" is in a function we call.

Instead, this seems to require that all "breakpoints" be directly in the source code of the async fn.

Is this limitation correct?

Perhaps "breakpoints" is not quite the right way to visualize it. It's not as if there is some mark made in your code that is a point to resume at some time later, as would be the case with threads, where the value of the program counter is saved as a point to get back to.

Rather, when you call an async function it does actually return immediately: no blocking, no rescheduling. However, it has not actually done anything much except return a "future" object that will hold the data you want at some point in the future, and it has told something, somewhere, somehow to fill that future in when the data arrives.

See description here:

And this presentation:

If the breakpoint is in an async function call, then it would look something like having one of the states in the outer enum contain a field storing the state machine of the inner future, and when calling it, passing control on to the inner state until it says it's done.

    enum Future1 {
        State1 { /* ... */ },
        State2 { inner: Future2, /* ... */ },
    }

    enum Future2 {
        State1 { /* ... */ },
        State2 { /* ... */ },
    }
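To make the hand-off concrete, here's a hand-steppable sketch of that delegation (names invented; real futures use poll() and Poll::Pending/Ready rather than a plain bool):

```rust
// Sketch of nesting without Box: the outer machine stores the inner
// machine inline and forwards each step to it until it reports done.
enum Inner {
    Running,
    Done,
}

impl Inner {
    // Returns true when finished.
    fn step(&mut self) -> bool {
        match self {
            Inner::Running => {
                *self = Inner::Done;
                true
            }
            Inner::Done => true,
        }
    }
}

enum Outer {
    Start,
    AwaitingInner { inner: Inner },
    Done,
}

impl Outer {
    fn step(&mut self) -> bool {
        match self {
            Outer::Start => {
                // Reached the ".await": embed the inner state machine.
                *self = Outer::AwaitingInner { inner: Inner::Running };
                false
            }
            Outer::AwaitingInner { inner } => {
                // Delegate; only move on once the inner machine is done.
                if inner.step() {
                    *self = Outer::Done;
                    true
                } else {
                    false
                }
            }
            Outer::Done => true,
        }
    }
}

fn main() {
    let mut outer = Outer::Start;
    assert!(!outer.step()); // suspended at the inner future
    assert!(outer.step()); // inner finished, so outer finished
}
```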

@alice : Thank you, this is very informative.

  1. (Being pedantic; rustc complains about this all the time.) Since Future2 might refer back to Future1, to avoid infinitely sized enums,

    State2 { inner: Future2, ... }

would actually have to be:

    State2 { inner: Box<Future2>, ... }

  2. If we take a step back, is this essentially "faking stack frames via nested enums"? If we squint a bit, each enum Future* is basically a stack frame plus a pointer to a particular point in the function.

No, there's no Box. That's the whole idea. Futures require pinning to be polled, so fields referencing other fields are OK inside futures because of compiler magic.

And yes, they're faking having a stack frame, but it isn't actually creating stack frames and moving them around, which is why it works.

Of course there are some hazards with recursive functions, but you just get errors.


Thanks for correcting my misunderstanding regarding Box / Pin on Futures.

The link you provided talks about a function recursively calling itself.

There's something I still don't understand. We have a bunch of async-fns.

Are these functions allowed to recursively call each other (as long as they do not call themselves), i.e.
foo() calls bar(), bar() calls cat(), cat() calls foo()

or is the "which function calls which function" graph of async fns required to form a directed acyclic graph?

EDIT: Part of this confusion is that I don't understand how "Pin" gets around the "recursion" => "infinitely sized Futures" problem.

Sorry, I was a bit inaccurate. Pinning doesn't help with loops and you will get a compiler error if your call graph is not a DAG.


Yeah, recursive calls don't work without a level of indirection. Since future combinators wrap each other similarly to iterators, the future that represents a recursive call would be a recursive type, and that becomes infinite without a level of indirection: Future {next: Future { next: Future {….
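A sketch of the usual workaround, adding the indirection yourself by boxing the recursive future (the function name is made up; the no-op waker below exists only so we can poll it by hand without an executor):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Boxing breaks the infinite type: the recursive future lives behind
// a pointer, so the outer future has a known size.
fn countdown(n: u32) -> Pin<Box<dyn Future<Output = u32> + Send>> {
    Box::pin(async move {
        if n == 0 {
            0
        } else {
            countdown(n - 1).await + 1
        }
    })
}

// Do-nothing waker, just enough to build a Context by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = countdown(3);
    // Nothing in this chain ever returns Pending, so one poll runs
    // the whole recursion to completion.
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(3));
}
```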

In my current mental model, I still don't understand how we can have breakpoints in "sub functions."


async fn foo
fn n0
fn n1
fn n2
async fn bar

now, suppose foo calls n0, which calls n1, which calls n2, which calls bar.

In this case, just having the Future for foo and the Future for bar doesn't seem enough, because we also need the info: when the call to bar returns, which line do we resume at in n2; when n2 returns, where do we resume in n1; when n1 returns, where do we resume in n0?

While you can call bar from n2, doing so doesn't do anything until the future returned by bar is awaited. Since you cannot await a future from a non-async function, you would have to either block on it, somehow return it back into foo so it can be awaited there, or perhaps spawn it on the executor running foo and return immediately.

Note that blocking on bar inside n2 would be a very bad idea as it would use up a whole thread on the executor running foo, as foo is not able to pause while it's waiting for bar.


It is possibly best to view .await as your breakpoints. You're required to make the nest of functions all be async and each call to include .await. (A more complex spawn approach exists, as @alice mentions.)

I think first you need to be thinking that calling an "async fn" isn't running the body but instead returns a structure that implements Future, which is almost the equivalent of a regular function that returns a closure.
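One way to see that last point for yourself (a minimal sketch; the names are made up): calling the async fn sets nothing in motion, only awaiting/polling the returned future would.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static BODY_RAN: AtomicBool = AtomicBool::new(false);

// Calling this returns a Future; the body does not run at call time.
async fn side_effect() {
    BODY_RAN.store(true, Ordering::SeqCst);
}

fn main() {
    let fut = side_effect(); // "calling" the async fn
    assert!(!BODY_RAN.load(Ordering::SeqCst)); // body has not run
    drop(fut); // never awaited, so it never runs at all
    assert!(!BODY_RAN.load(Ordering::SeqCst));
}
```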