Why Rocket isn't async, while Gotham is?

I have just read the blog article titled Rust in 2017: what we achieved, which says that the Rocket web framework isn't async, while the Gotham framework is.

I'd like to clarify: what is the definition of "async" in this context?

I thought all web servers must be async because they can handle thousands/millions of connections simultaneously.

Gotham uses the Tokio framework. Is that the reason it deserves to be called async, while Rocket doesn't?

Most web servers never handle millions of connections simultaneously, and it's easier to write a non-async server.

That’s pretty much what it means in that context.

If it can handle more than one request simultaneously, is it considered async? So, what is the definition of async here?

If so, it seems unfair that Rocket is called non-async.

It means using the async (socket) IO facilities provided by the underlying OS. This is the reactor/event-loop model that Tokio implements. The typical alternative is thread-per-request, which doesn't scale well beyond a certain number of concurrent requests (the precise threshold is OS dependent).


That's not the definition of async that I'm aware of. Async means in this context that threads do not block waiting on data to be sent to or received from an individual client. They instead switch processing among a bunch of requests based on which one has data ready to process. Rocket is a more "traditional" blocking-IO based server where there is an individual OS thread per request.


The term async has a specific meaning in this context, as mentioned above. In that definition, it’s accurate to describe Rocket as non-async (this isn’t necessarily good or bad, mind you, just a statement of facts).

Correction:
If it can handle more than 1 request simultaneously and does not block while waiting for the other thread(s) to finish, it is considered to be async.

Rocket is a more “traditional” blocking-IO based server where there is an individual OS thread per request.

So, you're saying Rocket could possibly block within a thread. Could you explain more about this?

That's still not an accurate definition. Any server of any kind is able to handle more than one request at a time. Waiting for other threads has nothing at all to do with the IO model used.

A new thread is assigned to each connection. If it needs to read more data from the client, it asks the kernel for it. If the kernel doesn't already have data buffered, it will block the thread until some arrives. Similarly, when it wants to send data to the client, it asks the kernel to send it. If the kernel's internal buffers are full, it will block the thread until there's space.
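
To make that concrete, here is a minimal sketch of the thread-per-connection, blocking-IO model (standard library only; the address and the echo behaviour are just placeholders, not how Rocket itself is written):

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // One OS thread per connection: every `read`/`write` below may park
        // this thread inside the kernel until data (or buffer space) is available.
        thread::spawn(move || {
            let mut buf = [0u8; 1024];
            // Blocks until the client sends something (or hangs up).
            if let Ok(n) = stream.read(&mut buf) {
                // Blocks if the kernel's send buffer is full.
                let _ = stream.write_all(&buf[..n]);
            }
        });
    }
    Ok(())
}
```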

In contrast, in an asynchronous IO model, if the server wants to read more data from the client and the kernel doesn't have any buffered, the call returns immediately and tells the server there's nothing available. A single thread manages many connections simultaneously, using an API such as epoll to wait for any of those connections to become ready.
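
For comparison, a small sketch of just the non-blocking read semantics, again standard library only; a real server would use a readiness API such as epoll (for example via the mio crate) to learn which connection to try next, rather than calling something like this in a loop:

```rust
use std::io::{ErrorKind, Read};
use std::net::TcpStream;

/// Try to read without blocking: returns Ok(None) when the kernel has no
/// data buffered yet, instead of parking the thread.
fn try_read(stream: &mut TcpStream, buf: &mut [u8]) -> std::io::Result<Option<usize>> {
    stream.set_nonblocking(true)?;
    match stream.read(buf) {
        // The kernel already had data buffered for this connection.
        Ok(n) => Ok(Some(n)),
        // Nothing available right now: the call returns immediately, so the
        // thread can go and service one of its other connections.
        Err(e) if e.kind() == ErrorKind::WouldBlock => Ok(None),
        Err(e) => Err(e),
    }
}
```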


Hi, @ikevin,

I’d like to clarify: what is the definition of “async” in this context?

Sometimes people confuse async with parallel. It is possible to be async with just a single thread. If a single-threaded, synchronous (non-async) web server needs to, for example, access a remote database to satisfy a web request, the thread servicing that request will block on the result from the database. A second web request will be ignored until the first has completed, even though the web server is, in fact, idle, waiting on a response from a remote system.

An async web server (again with just one thread) can "switch contexts": it issues the remote DB query and then, instead of blocking, services a second request (perhaps even responding to it), then retrieves the now-available query result and responds to the first request.
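
A minimal sketch of that interleaving, using today's async/await syntax and Tokio's single-threaded runtime (both of which postdate this discussion); the handler and the sleep standing in for the DB call are hypothetical:

```rust
use tokio::time::{sleep, Duration};

// Stand-in for handling one web request that has to wait on a remote database.
async fn handle_request(id: u32) {
    println!("request {id}: issuing DB query");
    // `.await` yields control back to the executor instead of blocking the
    // thread, so the other request can make progress in the meantime.
    sleep(Duration::from_millis(100)).await; // pretend this is the DB round trip
    println!("request {id}: got the result, sending response");
}

#[tokio::main(flavor = "current_thread")] // a single OS thread
async fn main() {
    // Both requests are serviced concurrently on that one thread.
    tokio::join!(handle_request(1), handle_request(2));
}
```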

I thought all web servers must be async because they can handle thousands/millions of connections simultaneously.

As others have pointed out, not all web servers are async. The advantage of an async web server is that it can service more requests on a single thread than a synchronous web server can. Since each thread is a resource with OS overhead, async web servers should scale better, provided that handling the requests is not bound by local resources (which it frequently is). It's also worth pointing out that async web servers are not necessarily faster (note: both Iron and Rocket are synchronous Rust web frameworks, yet significantly outperform Gotham in this simple benchmark; the use case matters!): Benchmarks ? · Issue #27 · gotham-rs/gotham · GitHub

And finally, to your title question:

Why Rocket isn’t async

According to Rocket's author:

The Rust asynchronous I/O space is in heavy flux. The Tokio 0.1 release today is a milestone in reaching stability, but there's still a long road ahead. Rocket will be fully asynchronous in the future, but the approach must be made with care. Usability is of utmost importance, and performing and handling async. I/O with Rocket cannot be an exception.


IMO "non-blocking" and "async" should not be confused.

The defining property of async IO is that it is event driven. The piece of code that processes the data is called "on demand" as soon as the data is available. This can be accomplished by means of callbacks, events, promises, futures, continuations, or similar constructs.
But coroutines, green threads, fibers and such are usually also considered async, even if the code really looks and behaves almost the same as blocking code.

Async IO is usually built on non-blocking IO and an event loop. But one could also imagine different implementations, e.g. by using hardware interrupts or signals.
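
As a rough illustration, here is a sketch of such an event loop built on non-blocking IO with only the standard library (the address and the echo handling are placeholders); a real reactor would use epoll/kqueue readiness notifications instead of busy-polling every connection, but the overall shape is the same: one thread, many connections, and per-connection handling invoked whenever data is available.

```rust
use std::io::{ErrorKind, Read, Write};
use std::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    listener.set_nonblocking(true)?;
    let mut connections: Vec<TcpStream> = Vec::new();

    loop {
        // Accept any pending connections without blocking.
        match listener.accept() {
            Ok((stream, _addr)) => {
                stream.set_nonblocking(true)?;
                connections.push(stream);
            }
            Err(e) if e.kind() == ErrorKind::WouldBlock => {}
            Err(e) => return Err(e),
        }

        // Process whatever data has arrived on each connection ("on demand"),
        // dropping connections that have closed or errored.
        connections.retain_mut(|stream| {
            let mut buf = [0u8; 1024];
            match stream.read(&mut buf) {
                Ok(0) => false,                               // peer closed
                Ok(n) => stream.write_all(&buf[..n]).is_ok(), // echo back
                Err(e) if e.kind() == ErrorKind::WouldBlock => true, // no data yet
                Err(_) => false,
            }
        });
    }
}
```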
