Blog - "Avoid Async Rust at all costs" - comments from experts?

Hello everyone

I am trying to learn Async Rust.
I did some Googling to help me and found this blog with rather alarming title.

Before I spend my time (as a Rust async learner) reading and understanding it, I wonder: is this blog worth reading? Does it make mostly valid points and reach correct conclusions?

Thank you very much


There is quite a bit of debate about async vs threads, function coloring in async, etc. I don't recommend going by just one blog article. It is better to learn it yourself and then reexamine the issues and tradeoffs.

If you have decided to use Rust async, it is worth asking why. There are performance benefits, but as the blog article and the Rust async book itself say, not everyone needs this level of performance. The async book covers the pros and cons of async, and it seems fairly balanced to me. So I would start there.

But another reason to learn async, or at least become familiar with it, may be to use tools in the Rust ecosystem that themselves use async, such as a web server or web framework. In that case the blog article and similar debates are not really applicable.


It sounds like the author knows what they're talking about. You'll find tons of blogs talking about the downsides to async Rust. It's not a perfect system.

But it looks like the author is mainly addressing themselves with the title, not recommending everyone else to avoid async Rust. And keep in mind people who are content with async are not usually making blogs about it.

It's also not that hard to learn async once you know the rest of Rust. If you haven't learned Rust yet, do that first.


Expanding on what @jumpnbrownweasel said: if you're going to do web development, pretty much every Rust framework for it is async. The same goes for sqlx, the most popular Rust database library. So in cases like these, if you really want to avoid async, you'll either have to find a more obscure alternative or write it yourself.


I'm not an async expert by any means. My journey went something like this:

  1. not knowing async rust
  2. working through some async rust tutorials
  3. working through a mini tokio: Async in depth | Tokio - An asynchronous Rust runtime
  4. getting enough muscle memory to debug most issues (i.e. blah not Send, foo not captured, forgetting to put an .await)
  5. using async everywhere
  6. hating the compile time costs of async
  7. minimizing usage of async in my code; i.e. refactoring as much code as possible into sync functions, and having async just be little "router wrappers" that wait on stuff and feed it to sync fns

I think this is the journey many go through: you have to put in the time to learn async, even though in the end you will likely try to minimize your usage of it.


I read the article last night and my takeaway is that the author doesn't have a need for async. This is fine, and very much expected. Most users of async don't need it [1]. And my reading is that the author argues passionately that others don't need it, either. They may be right. Async is grossly overused in places where it adds complexity that doesn't pull its own weight.

If you need async, you'll know you need it. And if you know you need it, you'll make it work (because you don't have a choice). But for everyone else, just use sync I/O.

In other words, I agree with the implication, but I disagree with the presentation.

  1. I should qualify this by stating that I have been in a position where async was a nonnegotiable requirement. Writing a server that needs to serve half a million (or more) concurrent users is the bread and butter of async I/O. It's made for this use case. But it is a niche use case; outside that niche, most people only need to handle tens of thousands of concurrent users, and for that, system threads will do the job without requiring you to jump through so many hoops. Also: C10k problem - Wikipedia ↩︎


I think async does have valid use cases (for example, dealing with network IO), but it should be used rather sparingly - MOST code should not be async.


Note that code can be "not async" in two different ways:

  1. Code which uses blocking operations instead of await (e.g. does IO with std::io and std::net).
  2. Code which neither blocks nor awaits (such as pure computation). This is what you're talking about in these sentences.

Regardless of whether your application uses await or blocking, it makes sense to strive for having code in the second condition — to keep your IO operations separated from your algorithms, when feasible. This has a lot of advantages in maintainability, in addition to the compilation time and code size effects.

But this is a very different question from "should I use blocking or async for my IO?" — a question to which the original blog post seems to be saying "you should use blocking".

(Personally, I have a practical disagreement with this; I find that in the kind of applications I write, cancellability or select!-like control flow is often important, and so the characteristic of blocking that you only get to block on at most one kind of thing means that using it exclusively is pointlessly complex. This has nothing to do with whether async gives you higher performance concurrency! And I don't hesitate to use a blocking thread when a thread will do the job well.)


"Avoid at all costs" is an intentionally provocative title. Obviously you shouldn't do it "at all costs", because the costs will soon eclipse whatever problems async has. "Avoid async unless you know you need it" would be a better title, but many people think they know they need async even if that's not the case. The blog addresses them. The issues listed are real, but they aren't deep. You'll likely hit them soon after you start writing any serious async code.

Async isn't inherently badly designed; in many respects it's pretty much the best there can be, and the async { } blocks themselves are great. But it's also true that it's "Rust in hard mode": whatever problems you have with Rust, async will amplify them and add its own pile of issues and limitations on top. Some of them are slowly being chipped away (in fact, a limited version of async traits was recently stabilized); some of them will likely exist forever. For example, the footguns of async cancellation are IMHO a core part of Rust's async model, and thus will never be solved.

That said, async is here to stay, and its limitations only mean that there is a greater pressure on the ecosystem to go async-first. It's technically simple to go from an async API to a sync one: just wrap it in Executor::block_on (it has hidden costs and footguns, but it is simple). Going from a sync to an async API will likely require major changes to your codebase, since you'd have to deal with its idiosyncrasies, limitations and footguns.
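As a sketch of how little machinery that bridging direction needs, here is a minimal std-only block_on (closely following the example in the std::task::Wake documentation). Real executors like tokio's Runtime::block_on do more, but the idea is the same: poll in a loop, park the thread, and let the waker unpark it.

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

/// A waker that unparks the thread blocked inside `block_on`.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

/// Run a future to completion on the current thread,
/// parking between polls until the waker fires.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}
```

The hidden costs mentioned above are real, though: for example, calling something like this from inside an async runtime's worker thread can stall that runtime.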

Thus if you want to learn Rust, you'll have to learn async anyway, but do it when you're comfortable with the fundamentals of the language, and get yourself a good book. Certain domains, like most things related to network communications, basically force it on you.


Are you saying

  • we should structure codebases as 2 parts:
    • part 1: deals with latency, concurrency, waiting on external stuff; in practice: file system, network, ...
    • part 2: crunching/transforming data in a "self contained box"
  • part1 should be async
  • part2 should be no-async

If so, I strongly agree. This is a much better characterization than my previous statement.


I wonder if this can be further characterized as:

  • if you are talking to the "outside world", use async
  • otherwise, probably no-async

Yes, but the reason I spoke up is not the "should" there. What I want to make clear is that "you should avoid overuse of async code when plain functions would do" is not the same claim as "you should avoid async IO in favor of blocking IO". Those are very different directions to take your codebase.


If someone has read the article and is unsure about async, I can recommend these two articles that provide insights "from the other side", meaning from devs who actually find async relevant and beneficial:

The article from boats is absolutely worth it. Even if you are an async sceptic.


I think I'm starting to see your point now.

My original claim was: "try to stuff as much as possible in sync; and use as little async as possible"

However, one (bad) way to do that is by using sync IO instead of async IO.

Your point is, the process should be "try to split the IO vs non-IO code; preferring non-IO code as much as possible"; followed with "it is perfectly fine to use async for the IO portion"

I.e. the goal is NOT "async vs non-async" (as that pushes towards blocking IO), but the goal should be "IO vs non-IO".

Is this the distinction you are going for? If so, I fully agree.
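A minimal sketch of that "IO vs non-IO" split, with made-up names for illustration. The front-end here uses blocking IO, but the same pure core could just as well be fed by an async front-end that does its reading with .await; the core neither blocks nor awaits:

```rust
use std::io::{BufRead, BufReader, Read};

// Non-IO core: pure computation, no blocking and no await.
// Trivially unit-testable, and reusable from sync or async callers.
fn count_error_lines<I: IntoIterator<Item = String>>(lines: I) -> usize {
    lines.into_iter().filter(|l| l.contains("ERROR")).count()
}

// IO front-end (blocking flavor): does only the reading,
// then hands everything to the pure core.
fn count_errors_in(reader: impl Read) -> std::io::Result<usize> {
    let lines: Vec<String> = BufReader::new(reader).lines().collect::<Result<_, _>>()?;
    Ok(count_error_lines(lines))
}
```

An async version of count_errors_in would be a thin wrapper of the same shape; only the front-end changes, never the core.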

I find the original post to be somewhat confusing and misleading, honestly. Not in a "trying to be controversial" way, at least not necessarily, but in a "I'm not really sure where you're coming from" way.

My main takeaway is that the post assumes the only reason to use async is that it performs better than threads, whereas I find that async is simply much easier for dealing with complicated cases than the (explicitly) threaded approach.

There's some lifetime handling that scoped threads help a lot with now, but mostly it's that in async you basically never need to worry about whether some other resource needs to be serviced on the same thread to prevent a deadlock; in other words, if you don't see it, spawn() it! This means, of course, that you can quite easily get to 100k/s task spawns. So the claim in the post that most devs don't need to handle 100k tasks seems rather like putting the thread before the task, as it were; don't ask whether you can avoid using 100k simultaneous tasks, ask how you could use them!

That said, the performance claims that were made were still either confusing or rather slanted. Saying that 2GB of memory committed to doing nothing is fine because that's how much Chrome uses (press X to doubt that normal, <100 tab usage actually commits that much physical memory, but I'm sure everyone here will immediately correct me :sweat_smile:) - and then talking about bumping up your AWS instance size - is pretty crazy. Like, I'm probably not running 100 tabs in Chrome on my AWS server, so why would I want to blow 2GB doing nothing on threads? Don't we want to be running 100k serverless functions/containers anyway? If I want to blow GBs of memory doing nothing I'd be using Node (oh wait, I am! :sweat_smile:)

Other things, like the section on OS schedulers being M:N, were also pretty confused - I'm not sure what that whole section was trying to tell me. The fact that it's hard to do manually but works better when you can do it is exactly why async exists as a language feature: you don't need to think about how to deal with the difference between the backing thread and the task, other than ensuring that it's Send (which the post barely mentions).

To be fair: there are absolutely things that kind of suck about async in Rust. But when most of them go away when you box or spawn something, and the rest are either common to any async implementation in any language, or even problems it would be nice to have in any other language, I'm not really sure I buy that you should be all that scared of it.


Writing an async wrapper around blocking sync code isn't much more difficult than the implementation of block_on, and I suspect that the hidden costs and footguns are also comparable:

use std::future::Future;
use std::panic::{catch_unwind, UnwindSafe};
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// Shared state between the spawned thread and the future.
enum Status<T> {
    Pending(Option<Waker>), // result not ready; maybe a waker to notify
    Ready(T),               // result ready, not yet taken
    Fused,                  // result already handed out by poll
}

pub struct ThreadFuture<T>(Arc<Mutex<Status<T>>>);

impl<T: Send> Future for ThreadFuture<T> {
    type Output = T;
    fn poll(self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<T> {
        let mut result = Poll::Pending;
        let mut guard = self.0.lock().unwrap();
        *guard = match std::mem::replace(&mut *guard, Status::Fused) {
            Status::Pending(_) => Status::Pending(Some(ctx.waker().clone())),
            Status::Fused => Status::Fused,
            Status::Ready(x) => {
                result = Poll::Ready(x);
                Status::Fused
            }
        };
        result
    }
}

impl<T: Send + 'static> ThreadFuture<T> {
    pub fn spawn(f: impl 'static + Send + UnwindSafe + FnOnce() -> T) -> Self {
        let lock_fut = Arc::new(Mutex::new(Status::Pending(None)));
        let lock_thread = lock_fut.clone();
        let _ = std::thread::spawn(move || {
            let val = catch_unwind(f);
            let mut guard = lock_thread.lock().unwrap();
            if let Status::Pending(Some(waker)) =
                std::mem::replace(&mut *guard, Status::Ready(val.unwrap()))
            {
                drop(guard);
                waker.wake();
            }
        });
        ThreadFuture(lock_fut)
    }
}
Edit: Completely new implementation; was mildly concerned about race conditions in the old one.


It seems the author of the blog post is only really concerned with his own use cases. For embedded I'd argue that an async framework like embassy actually makes things easier.

Embedded sadly tends to be forgotten in many discussions, but arguably it is what the world runs on. If you have a vaguely modern dishwasher, fridge, microwave oven, car, ..., it has some embedded microcontrollers making sure the thing works. The importance of making Rust work well in this context cannot be overstated. As more and more appliances get connected to the Internet, we want them to be written in a memory-safe language (rather than the traditional C).


It's also surprisingly relevant to UI, where the "IO" often consists of waiting for the user or slow processes. You could for example implement the spawning of a dialog and waiting for the user's choice as an async function call. Something like an "are you sure?" popup when saving over an existing file. I think most systems and toolkits don't do it, though, but it can be a nice way of not having to make your own state machine for it. Because I bet there will be one in there, somehow, possibly hiding as a pile of booleans.


I think that async Rust was a big mistake, with a lot of fundamental issues baked into the model (async cancellation, unnecessary Send/Sync bounds, poor compatibility with completion-based APIs, etc.), and, personally, I stay as far away from it as possible.

Even in the area of network services, synchronous code is absolutely fine for 99+% of cases (hell, even today a lot of web stuff is powered by Python + uWSGI), and threading support in OSes has improved significantly since the C10K days. Unfortunately, the last 1% where async is really needed (usually at the scale of 100k-1M concurrent connections) is an area flush with cash (Amazon and co), so one can argue that it was important for the language to get its attention to boost its growth, even at the cost of the long-term health of the language. Finally, for various reasons, a lot of high-profile crates in the Rust ecosystem have embraced the async-first approach, arguably often unnecessarily.

So my advice would be to avoid async as best you can if you do not work on services which should scale to 100k+ concurrent connections and you do not have to rely on async-only crates (i.e. crates which do not have viable sync alternatives). For example, if you need to download a file in your CLI tool, use ureq instead of reqwest. And if you have no choice but to use an async crate, try to isolate it behind block_on, so it does not infect the rest of your application.

Some have mentioned embedded programming, for which async may be useful. But I consider this area (async + bare metal) highly experimental in its current state; plus, you can do bare metal programming without async just fine.


My working hypothesis so far has been:

  • If your code has to wait for another party and wants to be able to do something else while waiting for that party (including waiting for other parties at the same time), that's a candidate for async, especially if the number of parties to be waited for is unpredictable and may be large

  • If your code has a lot to compute and you want to compute things in parallel, that's a candidate for threads.

  • If both of the above are true, use both together.

Typically, I would use async when writing a web service, and not so much for other things.

I'm curious to hear other opinions on this.
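For the second bullet (parallel compute), the std-only version is pleasantly simple these days with scoped threads. Here is a sketch with a hypothetical parallel_sum (name and chunking scheme are my own, for illustration):

```rust
use std::thread;

/// Sum `data` by splitting it into at most `n_threads` chunks,
/// each summed on its own scoped system thread.
fn parallel_sum(data: &[u64], n_threads: usize) -> u64 {
    // Chunk size rounded up so we never exceed n_threads chunks.
    let chunk_len = data.len().div_ceil(n_threads.max(1)).max(1);
    thread::scope(|s| {
        data.chunks(chunk_len)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<u64>()))
            .collect::<Vec<_>>() // spawn all threads before joining any
            .into_iter()
            .map(|handle| handle.join().unwrap())
            .sum()
    })
}
```

Because thread::scope guarantees the threads finish before it returns, the closure can borrow `data` directly, with no Arc or 'static bounds; that covers the "lot to compute" case without any async at all.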