Is rotor-http the only really concurrent http server?

Hi! I have just started reading the Book and, in parallel, am trying to test the Rust HTTP servers I have found hosted on GitHub. By "testing" I mean something like

wrk -d 100 -t 100 -c 1000 http://some/example

I will not list everything I tried, to avoid sounding critical :slightly_smiling: - rest assured, this is not a complaint. Some of them simply deadlock (that is, they stop accepting connections and stop consuming CPU). The luckier ones keep working, but accessing the service from a browser (while wrk is running) results in a long wait for a response (sometimes until wrk terminates).

The only exception I have found is the rotor-http project - no deadlocks, and an immediate answer in the browser.

Is my assessment of the Rust HTTP server landscape correct? Have I missed something?

Rotor-http uses the mio library, which gives access to non-blocking I/O. All the other servers (as far as I know) simply use the standard library instead, which means that they have to spawn one thread per client connection.

Also, I'm a maintainer of tiny-http. If this library deadlocks, I'd be glad if you opened an issue.

@tomaka, no, no - tiny-http didn't deadlock in my experiments, and it is also quite fast on browser access (I mean with wrk running in the background). Is there a tiny-http-based framework?

I have a small private project that I use for my own purposes, so it can break at any time and doesn't support the features that I don't need.

Other than that, I'm not aware of any.

Non-blocking/Evented I/O · Issue #395 · hyperium/hyper · GitHub - aha, I have found this remarkable issue. Probably millions of Rust developers, with all possible patience, are waiting for that team to resolve it... :slightly_smiling:


I actually had a good experience with
I didn't have any issues with it, although I only tested with a simple "Hello World" and wrk. The performance was good.

Hyper's server can run out of threads quickly when keep-alive is enabled, because each thread stays tied up until its current client sends the next request or a timeout occurs. In my servers, I turn off keep-alive whenever the thread pool is running low. In practice, this workaround is sufficient while we wait for a stable, feature-rich, non-blocking event-driven server.

Once a non-blocking event-driven server becomes available, switching over to it will not be easy: all existing request-handling code must be rewritten to never block, and instead register events and callbacks.

Yes indeed :slight_smile: I'm personally looking forward to the rotor-http client API being ready. Although that's not the same as servers, it'll be nice to be able to write asynchronous client libraries.