Async for lib crates in mid-2020: Tokio, Async-Std, Smol – which one to choose?

Hi,

I'm working on a library that could benefit from async execution.

A lot happened in the last 6 months in this space, so I'm wondering what the best way to build my lib would be.

Ideally, users could choose the async runtime of their choice, but how do I make this possible?

I have no experience and almost no knowledge of async/await in Rust (and I'm new to Rust in general), so I'd be grateful for any hints and advice.

At the moment I assume the way to go is Smol (because Smol seems to run async-std, and seems to be able to run Tokio) plus some helper libs that are compatible with all async libs (I need a WebSocket client, which could be async-tungstenite; maybe a thread pool for CPU-intensive tasks; and I probably need an async channel lib).

If the above is currently not possible to achieve – what would be the next best thing?

Writing async libraries that support any runtime will require advice specific to the thing you are doing.

Beware of falling into the trap of considering things like async-tungstenite executor agnostic. Silently spawning an async-std or smol runtime in the background, regardless of which runtime the user is using, is not being executor agnostic, even if it looks like it is working.

For the specific case of async-tungstenite, the library that can actually be used with any runtime is tokio-tungstenite. It provides executor agnostic internals, along with a Tokio integration. The async-tungstenite library uses the executor agnostic parts of tokio-tungstenite and puts an async-std integration around them.

As for choice of runtime, I generally recommend Tokio.

Edit: It looks like stuff has happened regarding the tungstenite libraries since I last looked. In any case, the pitfall of always starting an async-std runtime in the background and calling it executor agnostic is common.


I'm working on a DevTools Protocol client that can be used to automate certain web browsers (similar to Puppeteer for Node.js).

Basically, there is a WebSocket to which commands can be sent, and through which return values of these commands and events will be sent back.

One WebSocket can have many sessions (for example, one session per tab), which is the reason why I think async would be ideal for this use-case.

I'd prefer to use threads, but this seems difficult to achieve. Basically, I need a dispatcher thread that waits for incoming WebSocket messages, at the same time sends WebSocket messages to the server, and also waits for incoming messages via a Crossbeam channel. It seems an asynchronous approach would make this easier.

Because I planned to test the performance of an async-based client anyway, I thought I would go ahead and start with that.

Thanks for the hint! I wasn't aware of that. This makes the situation even worse, in my opinion...

What would be the best way to recognize crates that seem executor agnostic but depend on a specific executor? Check whether there is a non-optional dependency on a runtime?

It seems Tokio is used and recommended the most, and I wouldn't be completely against using it exclusively (although, from what I have read, I'd personally prefer async-std).

Maybe it would be a good approach to start with Tokio, and later – when compatibility between runtimes hopefully has improved – to update the API to be executor agnostic.

That being said, what do you think about using Smol to run other runtimes?

For example, there is this example that demonstrates how to use smol with async-std, tokio, surf, and reqwest:

After thinking about it... am I correct in thinking that with Smol and Tokio there would be two runtimes running at the same time?

I first assumed that Smol would somehow switch Tokio over to using Smol, and that there would therefore be only one event loop, and so on.

Is that incorrect?

Having looked at the newest version of async-tungstenite, I believe that using it should indeed make it possible to write an executor agnostic library. In your library, you can depend on it like this:

[dependencies]
async-tungstenite = { version = "0.6", default-features = false }

This will turn off all of the integrations with various executors, and the user of your library can enable more features and give you a WebSocketStream configured to use the runtime they want.

You can use a [dev-dependencies] section to enable integration with a specific executor in your tests.


The best way to recognize these libraries is to click the Source button on docs.rs and open the Cargo.toml.orig file. Here you will find the dependencies of the library. If it has a non-optional dependency on async-std, that is a good sign that it silently spawns an async-std runtime in the background.
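If the crate is already in your dependency graph, you can also check from the command line with `cargo tree` (this is a sketch of the general approach; the exact output depends on your project):

```console
# Invert the tree: show every path through which async-std
# enters the build, i.e. which of your dependencies pull it in.
$ cargo tree -i async-std
```

A runtime crate showing up here without you having asked for it is the same red flag as a non-optional dependency in Cargo.toml.orig.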

As for Tokio, the library has been written such that it is not possible to silently spawn a Tokio runtime in the background in the same way as you often see with async-std. Tokio uses feature flags a lot to enable or disable various parts of the library, and there are some crates that are executor agnostic but still have a non-optional dependency on Tokio.

An example of this is the Hyper crate. It has a non-optional dependency on Tokio because the Hyper crate uses a trait defined in the Tokio crate. In fact, with the features that Hyper enables, the actual runtime part of Tokio is not even compiled. The Hyper library then has an optional runtime feature, which adds integration with the Tokio runtime on top of that, and enables additional features of the Tokio dependency. It is not super clean, but you can find an example of using Hyper with async-std here.
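The feature layout behind that pattern can be sketched in a library's Cargo.toml (the version and feature names below are illustrative of the pattern, not copied from Hyper's actual manifest):

```toml
[dependencies]
# Non-optional, but with default features off it is only pulled in
# for its traits; the runtime itself is not compiled.
tokio = { version = "0.2", default-features = false }

[features]
default = []
# Opting in enables the actual runtime parts of the Tokio dependency.
runtime = ["tokio/rt-core", "tokio/time"]
```

So a non-optional `tokio` line in the manifest does not by itself mean a Tokio event loop will run.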

It might be a bit unfortunate that you end up with a dependency on Tokio, but here I am using "executor agnostic" to mean "doesn't spawn multiple event loops".


I am not super familiar with how smol works internally, but a quick look at the source makes it seem like the tokio02 feature flag on smol will result in running two runtimes at the same time. It defines a global variable containing a Tokio runtime, and uses Runtime::enter to make that runtime available to anything running on smol's runtime, but otherwise changes nothing else in the library.

Out of curiosity, what are the specific things that make async-std be more appealing to you?


Thanks, Alice.

So it should be possible to create an executor agnostic library, provided I find an executor agnostic channel crate. async-channel looks like one.

Out of curiosity, what are the specific things that make async-std be more appealing to you?

It seems that, from a technical point of view, there are not many reasons anymore (if any) to prefer async-std over Tokio. Even stjepang wrote somewhere (if I remember correctly) that Tokio has added most of the previous advantages of async-std.

Async-std probably has less historical baggage, which I like.

My preference seems to be more "political". I wrote almost one page explaining it, but I very easily could be wrong, and therefore I won't post it.

It was basically about the reason for the split between the Tokio and async-std devs, and that there now is no big difference between the projects (and probably a ton of resources were wasted). But it's likely that I didn't follow the story correctly, so I shouldn't write too much about it :)

During my research regarding the reasons why I prefer async-std, I stumbled upon a few interesting links that are relevant for this discussion:

  1. https://www.reddit.com/r/rust/comments/fnj12j/rust_async_and_the_terrible_horrible_no_good_very/fla343f/

The question asks why compatibility between runtimes is difficult, and stjepang wrote one of the answers.

He also links to this very interesting Twitter thread where he basically explains how to write an executor agnostic lib:

  2. I found agnostik by the Bastion project, which seems to be relevant (the tagline is "Executor Agnostic Runtime that can run your futures with your favourite Executor.").

  3. And async_executors, which is similar to agnostik.

Here is a recent comparison between the two by one of the async_executors devs.
