Hi Folks, we’re starting a new enterprise software platform (like Salesforce, SAP, Workday) and chose Rust. The well-maintained HTTP servers I was able to find (Axum, Actix, etc.) are async, so it seems async is the way to go.
However, the async ecosystem still feels young and has sharp edges. In my experience, these platforms rarely see more than ~1024 concurrent requests and are usually bound by database performance rather than app-server limits. In the Java systems I've written before, Tomcat's thread count has never been the bottleneck; GC pauses or CPU-intensive code have been.
I’m considering having the handler that the Axum router dispatches to call spawn_blocking right away, then serving the rest of the request with sync code and sync crates like postgres and moka. Later, as the async ecosystem matures, I’d revisit async. I'd plan to use libraries that offer both sync and async versions to avoid a full rewrite.
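To make the idea concrete, here's roughly what I have in mind (a sketch with made-up names, assuming axum 0.7-style `axum::serve` and tokio; the real thing would use a sync connection pool and moka inside the sync function):

```rust
use axum::{http::StatusCode, routing::get, Router};
use tokio::task;

// All of the "real" request logic stays synchronous: blocking postgres
// queries, moka cache lookups, business logic, etc. would live here.
fn handle_report_sync(customer_id: u64) -> Result<String, String> {
    Ok(format!("report for customer {customer_id}"))
}

// Thin async shim: hop onto the blocking thread pool immediately,
// then never touch async again for this request.
async fn handle_report() -> Result<String, (StatusCode, String)> {
    task::spawn_blocking(move || handle_report_sync(42))
        .await
        .map_err(|join_err| (StatusCode::INTERNAL_SERVER_ERROR, join_err.to_string()))?
        .map_err(|msg| (StatusCode::INTERNAL_SERVER_ERROR, msg))
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/report", get(handle_report));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```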
Still, I’m torn. The web community leans heavily toward async, but taking on issues like async deadlocks and cancellation safety without a compelling need worries me.
Does anyone else rely on spawn_blocking for most of their logic? Any pitfalls I’m overlooking?
To me, easy cancellation is the biggest advantage of async. Real-world services need to deal with everything failing, including failing by being stuck or hopelessly slow, so everything needs timeouts and aborts. In async, a timeout can be added to any Future without explicitly configuring timeouts on every single leaf function.
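For example (a sketch with made-up names, assuming tokio): the caller can bound a whole composite operation from the outside, and the callee never has to know about deadlines:

```rust
use std::time::Duration;
use tokio::time::timeout;

// Some composite operation: DB query + cache lookup + downstream HTTP call.
async fn fetch_profile(user_id: u64) -> Result<String, String> {
    Ok(format!("profile {user_id}")) // stand-in for the real work
}

// The timeout wraps the whole future; no per-call timeout plumbing needed.
async fn fetch_profile_bounded(user_id: u64) -> Result<String, String> {
    match timeout(Duration::from_millis(250), fetch_profile(user_id)).await {
        Ok(result) => result,
        Err(_elapsed) => Err("fetch_profile timed out".to_string()),
    }
}

#[tokio::main]
async fn main() {
    println!("{:?}", fetch_profile_bounded(42).await);
}
```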
IMHO complaints about async are overblown. I suspect many of them are caused by extreme overuse of tokio::spawn, which adds onerous 'static + Send requirements, when 99% of futures never need to be spawned and can be combined with streams and join_all instead.
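A sketch of what I mean (hypothetical names, assuming the futures crate): futures that borrow local data can be driven concurrently with join_all, with no spawn and therefore no 'static or Send bounds:

```rust
use futures::future::join_all;

// Stand-in for a real async call (DB query, HTTP request, ...).
async fn load_widget(id: &str) -> String {
    format!("widget {id}")
}

// The per-widget futures borrow `ids`, which is fine because nothing is
// spawned: join_all just polls them all on the current task.
async fn load_dashboard(ids: &[String]) -> Vec<String> {
    join_all(ids.iter().map(|id| load_widget(id))).await
}

#[tokio::main]
async fn main() {
    let ids = vec!["a".to_string(), "b".to_string()];
    println!("{:?}", load_dashboard(&ids).await);
}
```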
I've never used tokio::select! despite writing many thousands of lines of async code, so the cancellation problems that are specific to this macro are unknown to me.
Whether spawn_blocking is necessary depends on what you're implementing. You're going to need it if you need low tail latencies, if you have really long-running CPU-bound code, or if you do disk I/O on disks that may be slow (not local SSDs) or stall for some reason (like filesystems under heavy load).
Async is harder in generic code. Futures hold the arguments of async functions, including borrowed references, which adds lifetimes that never appear in sync code. Proper async closures and async function traits are only just being added; before that, they required pretty advanced workarounds.
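A tiny sketch of the first point (hypothetical names): the future returned by an async fn keeps borrowing its arguments until it is awaited or dropped, so that borrow's lifetime leaks into any code that stores or passes the future around:

```rust
// The future returned by an async fn holds its arguments, including borrows.
async fn count_rows(table: &str) -> usize {
    table.len() // stand-in for a real query
}

fn main() {
    let name = String::from("accounts");
    let fut = count_rows(&name); // `fut` borrows `name`
    // drop(name);               // would not compile: `name` is still borrowed by `fut`
    drop(fut);
    drop(name); // fine once the future is gone
}
```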
Whether any of that is a problem depends on what code you write. If you write async frameworks, libraries, or clever generic components, it can be a barrier. If you just use existing, not-too-clever libraries with async fn, you may never need to write generic async code.
There is nothing that will intrinsically go wrong if you do this; in particular, it will not perform noticeably worse than an entirely non-async web server.
Tangent, for your information
`select!` is just “pick the first future that completes and cancel the rest”, so there is no hazard intrinsic to `select!`’s cancellation any more than to cancellation in general (i.e. futures not tolerating being cancelled).
There is one big beginner trap when using `select!` in a loop: if you don't create the futures outside the loop, they get cancelled and recreated on every iteration, which may not be what you had in mind:
```rust
loop {
    select! {
        a = do_a_which_completes_slowly() => { /* ... */ },
        b = do_b_which_completes_instantly() => { /* ... */ },
    }
}
```
If the intent is for do_a_which_completes_slowly() to eventually resolve, it won’t, since it gets cancelled and restarted on every iteration. The last time I saw a pattern like this, IIRC, it was trying to lock an async mutex (which would yield before acquiring the lock from a different task releasing it) while racing against an always-nonempty channel read (which would always resolve immediately). But this is “obvious” once you are familiar with the difference between creating a future and awaiting it; it's the same control-flow mistake as
```rust
while let Some(item) = SomeIterator::new().next() { /* ... */ }
```
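For completeness, a sketch of one way to avoid the trap (made-up function names, assuming tokio): create the slow future once outside the loop, pin it, and select on it by mutable reference so its progress is kept across iterations:

```rust
use std::time::Duration;
use tokio::time::sleep;

async fn do_a_which_completes_slowly() -> &'static str {
    sleep(Duration::from_secs(2)).await;
    "a"
}

async fn do_b_which_completes_instantly() -> &'static str {
    "b"
}

#[tokio::main]
async fn main() {
    let slow = do_a_which_completes_slowly();
    tokio::pin!(slow); // created once, outside the loop
    loop {
        tokio::select! {
            a = &mut slow => {
                println!("slow finished: {a}");
                break; // a completed future must not be polled again
            }
            b = do_b_which_completes_instantly() => {
                println!("fast finished: {b}");
                sleep(Duration::from_millis(500)).await; // keep this toy loop from spinning
            }
        }
    }
}
```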