Is it possible to de-sugar `async` with normal Rust syntax?

As the title says. Inside an async function there can be deeply nested .await expressions. Each .await suspends the function, which keeps polling the awaited future until it resolves, and then the async function "resumes". I'm wondering if it's possible to implement this "resume" with current syntax. Namely, how can we de-sugar an async function and turn it back into a normal future object (given the function body)?

I assume the .await part is relatively easy, as it's just polling the future, like the old await! macro did. I also saw the yield keyword, which is not available in stable Rust; that seems related to my question. I also saw this, about the unstable generator feature.

It looks like async/.await can be de-sugared with the support of generators/yield? But since those are unstable, we cannot easily de-sugar async/.await?

No, you can't easily desugar it.

async can create Future objects that are self-referential structs, and this is the only place in safe Rust where this is allowed.

It ends up being a state machine that uses unsafe pointers. It's not a straightforward translation, and there's no equivalent high-level syntax.

That unique complexity is the reason why .await is a built-in syntax. Otherwise it would be a crate with a proc_macro.
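To make the self-reference concrete, here is a small stand-alone sketch (the no-op waker and all names are mine, not anything the compiler emits): an async block that borrows one of its own locals across an .await can only exist because the compiler generates the pinned, self-referential state machine for us.

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal no-op waker; just enough machinery to poll a future by hand.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    // `r` borrows the local `data` across an .await, so the generated future
    // must store `data` and a reference into it at the same time: a
    // self-referential struct. That is also why the future has to be pinned
    // (here via Box::pin) before it can be polled.
    let mut fut = Box::pin(async {
        let data = vec![1u8, 2, 3];
        let r: &[u8] = &data;         // borrow of a local...
        std::future::ready(()).await; // ...held across an await point...
        r.len()                       // ...and used after the resume
    });

    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Everything inside is immediately ready, so a single poll completes it.
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(3));
}
```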


I agree with @kornel on why we can't desugar to regular Rust code. But as you mentioned, there are also generators.

Today, we can desugar async fn/await to generators/yield, but only in nightly. Generators are unstable for one reason: they're an implementation detail of async/await.

Generators were added to the compiler specifically to be a lower level representation of async fns, and as such, the syntax was rushed, and they weren't designed to be stabilized. async fns were deemed an important, useful addition to the ecosystem, and got attention & stabilization. Generators, while possibly useful, were much less critical to the ecosystem, and thus have not yet been scrutinized nor stabilized.

If there were a large part of the ecosystem that needed non-async fn generators, then the syntax design would have been worked on as much as async fn's, and it would have been stabilized. But the reality is that there isn't that large a need. There are niche uses, but even for those it's possible to rebuild generators on top of async fn without much work (see genawaiter, generators built on top of async fn).

To turn your question around, why do you want to desugar async/await? I can understand wanting to get to the lowest level possible, but async fn really is fairly low. It has no reliance on any particular executor, and is designed to be as general as possible while still being useful.


Got it. Since a straightforward translation is not trivial, is it possible to get some high-level understanding (more detailed than the phrase "state machine", but still "high-level" more or less)? Are the following on the right track?

  • Each async fn will be translated to a struct that impls Future.
  • The function body determines how poll is defined (and the Output type, of course, but that's a minor point in this discussion).
  • The stack variables (function-local variables) will be stored as internal fields of the struct (or of some helper structs).
  • Each .await creates a "resume point", more or less equivalent to a goto in C/C++, which is not directly supported in stable Rust syntax.
  • Each .await also results in a "state change" of the function, where the state is denoted by a helper State enum. The next time poll is called, it jumps to the right execution point based on the recorded state. (So the translated poll method starts with a match on the state.)

Got it. I'll take a deeper look at generators then.

The reason I want to "de-sugar" async/.await is to understand the performance implications. I want to know how it works under the hood so I can get a sense of the cost. For example, I know roughly how Future works, so I'm aware of the executor / wake function / event loop behind the scenes; each time I block on a future, I know roughly what it is doing. Meanwhile, for async/.await, I'm not fully sure what is happening behind the scenes, especially the "resume" of an async function after the awaited future resolves. So I want to know how that "resume" works and what it costs.
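For reference, the "resume" in question is observable with a small std-only experiment (the no-op waker and all names are mine): resuming is nothing more than the executor calling poll again, with the generated state machine continuing from the recorded await point.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that is Pending on its first poll and Ready on its second,
// requesting an immediate re-poll via the waker.
struct PendingOnce { polled: bool }

impl Future for PendingOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.polled {
            Poll::Ready(())
        } else {
            self.polled = true;
            cx.waker().wake_by_ref(); // tell the executor to poll us again
            Poll::Pending
        }
    }
}

// Minimal no-op waker; enough to drive the experiment by hand.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(async {
        // Runs during the first poll, up to the await point:
        PendingOnce { polled: false }.await;
        // Runs during the second poll, i.e. the "resume":
        7u32
    });
    assert!(matches!(fut.as_mut().poll(&mut cx), Poll::Pending)); // suspend
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(7));       // resume
}
```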

Because of self-borrowing and control flow, the transform is a bit more complicated than that, but the output has roughly this shape:

struct ImplFuture {
    state: State,
    stack: [u8; N], // suitably aligned storage for locals that live across awaits
}

enum State {
    Start,
    AfterAwait1,
    // ... one variant per potential resume location
    //     (each await, plus the start and after the end)
    Done,
}

impl Future for ImplFuture {
    type Output = T;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        match self.state {
            State::Start => {
                // the start of your function, using self.stack as the stack
                self.state = State::AfterAwait1;
                // ...
            }
            // and so on, one arm per resume point
            State::Done => panic!("exhausted"),
        }
    }
}
Again, the exact transform is complicated, since it's a state machine with a state for each resume point, and optimizations (present or absent) change the details slightly, but this shows the shape of the output.
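To make that shape fully concrete, here is a compilable, hand-written desugaring of a trivial single-await async fn. All names are invented, and it cheats by requiring Unpin so that no unsafe pin projection is needed; the compiler's real output can't take that shortcut.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hand-written version of roughly what
//     async fn double_after(inner: F) -> u32 { inner.await * 2 }
// desugars to. The state enum doubles as storage for the "stack".
enum DoubleAfter<F> {
    Start { inner: F }, // state 0: holds the "stack variable" `inner`
    Done,               // final state: polling again is a logic error
}

impl<F: Future<Output = u32> + Unpin> Future for DoubleAfter<F> {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        let this = self.get_mut();
        // The match on the state enum: each arm is one resume point.
        let result = match this {
            DoubleAfter::Start { inner } => match Pin::new(inner).poll(cx) {
                Poll::Ready(v) => v * 2,
                // Not ready yet: stay in `Start` and resume here next poll.
                Poll::Pending => return Poll::Pending,
            },
            DoubleAfter::Done => panic!("exhausted"),
        };
        // The "state change": move past the await point.
        *this = DoubleAfter::Done;
        Poll::Ready(result)
    }
}

// Minimal no-op waker, enough to poll an immediately-ready future.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = DoubleAfter::Start { inner: std::future::ready(21u32) };
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(42));
}
```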


Got it. Thanks! The outline is very clear.

I'd also recommend reading this series of blog posts by Tyler Mandry. It explains what a generator (and by extension an async function) compiles down to behind the scenes, and some of the cool ways that they get optimized. For example, if there are variables that are never used at the same time, the generated state machine can store them in the same memory.
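As a small illustration of that layout optimization (the function and names are mine, and the exact size is compiler-dependent, so only a loose lower bound is asserted):

```rust
use std::mem::size_of_val;

// Two 64-byte locals, each live across a different await point but never
// live at the same time, so the generator layout pass can overlap their
// storage instead of reserving 128 bytes.
async fn two_buffers() -> u8 {
    let a = [1u8; 64];
    std::future::ready(()).await; // `a` is live across this await
    let first = a[0];
    let b = [2u8; 64];
    std::future::ready(()).await; // only `b` (and `first`) live across this one
    first + b[0]
}

fn main() {
    // Futures are lazy: calling the async fn only constructs the state
    // machine, whose size we can inspect without ever polling it.
    let fut = two_buffers();
    let size = size_of_val(&fut);
    println!("future size: {size} bytes");
    assert!(size >= 64); // must at least hold one buffer plus the state tag
}
```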

