What does Rust need today for server workloads?


Good morning rustlers! Today I’ve got an intentionally broad question to ask. If you’re someone who would like to use Rust “on the server”, be it as a hobby or in your production environment, what is blocking you from using Rust on the server today?

One of the roadmap goals for Rust in 2017 is that “Rust should be well-equipped for writing robust, high-scale servers.” We’ve already made some great progress on this item throughout 2017, but now that we’re reaching the midway point of the year I’d like to take stock of just how far we’ve come and where our next highest-value targets are. Some examples of work towards this goal so far are:

  • Continuing development on asynchronous support in Rust with the futures crate and Tokio stack
  • Having a high-performance HTTP library like Hyper
  • Continued development of web frameworks like Rocket
  • New production users of these technologies like linkerd-tcp

Even given all this though, I’m confident that we can continue to go much farther and really drive home this roadmap goal by the end of the year! I’m hoping to have a good open-ended discussion about what you think is missing from Rust’s story-on-the-server. Are futures too confusing? Is async/await syntax a blocker? Is our synchronous I/O story lacking critical features? Are there still key libraries missing? Do existing libraries need an API review? All kinds of questions and answers are more than welcome!

So, what is blocking you from using Rust on the server today?

I figured it’d also be good to start out by sharing my own thoughts! I personally feel that Rust’s story on the server is going to be very firmly rooted in the “async world”. This may not be for the precise reason you’d imagine, though! It’s generally accepted that best-in-class performance when you’re doing lots of I/O necessitates the use of asynchronous I/O, which I think is critical to Rust’s use case in servers.

Not all servers, however, will need the highest level of performance! For the same reason that many servers today are dominated by “slower” languages like Ruby and Python, I don’t think we need to 100% focus all effort for Rust-on-the-server into speed. Despite this, however, I still believe that the “async world” is the highest value target. How often have you needed to time out an operation? Maybe opportunistically try two requests at the same time and cancel the second when the first completes? Maybe you’re waiting for both a remote RPC and your worker thread to produce a result to continue. Or perhaps you just need to set up two independent data processing pipelines that get woven together at the end? I’ve found these sorts of desires to be quite common when working with servers, but they’re all quite hard to get right with synchronous I/O while trivial to get working with async!
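To give a concrete taste of why these patterns are painful synchronously: even just “time out an operation” typically means shipping the work off to a thread and waiting on a channel, because a plain blocking call can’t be interrupted. A std-only sketch (the `with_timeout` helper and the simulated 200ms job are made up for illustration):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run a slow blocking job on a worker thread and wait for it, but give
// up after `timeout`. Returns None if the job didn't finish in time.
fn with_timeout(job_ms: u64, timeout: Duration) -> Option<&'static str> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(job_ms)); // stand-in for blocking I/O
        let _ = tx.send("result");
    });
    // The blocking call itself can't be bounded; only the channel wait can.
    rx.recv_timeout(timeout).ok()
}

fn main() {
    // Job takes 200ms but we only wait 50ms: the wait times out.
    assert_eq!(with_timeout(200, Duration::from_millis(50)), None);
    // Wait long enough and we get the result back.
    assert_eq!(with_timeout(10, Duration::from_millis(500)), Some("result"));
}
```

Note the worker thread is still burned for the full 200ms even after we give up on it, which is exactly the kind of cost futures-based cancellation avoids.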

I believe working with asynchronous I/O and futures empowers you with so much more than you’d otherwise get with typical blocking, synchronous I/O. These extra superpowers, which many non-performance-critical servers end up needing, are what we can capitalize on in Rust. Another huge benefit of staying rooted in asynchronous I/O is that your server can scale as you need it: if you discover a performance bottleneck, you’re already equipped with one of the fastest I/O frameworks in the world, and it won’t take a whole rewrite to use it!

So that’s at least my personal rationale for pushing real hard on async vs the sync story in Rust today (which I also think is “pretty good” as-is). We’ve got a long road ahead of us though with async, particularly:

  • Getting up to speed on Tokio/Futures is empirically difficult, way too many people are bouncing off at this point. We need to refocus on our existing documentation and basically restructure it from the ground up to be more amenable to newcomers. There’s a lot of opportunity here and we’ve got a lot of ideas on how to do this too!

  • Async/await notation is going to be critical for widespread adoption of futures. Writing synchronous code is easy, and async/await is just writing asynchronous code but without knowing it! In terms of easing the learning curve this seems like one of our highest value propositions.

  • I have the feeling that there’s not as much production use of Tokio/futures as I would like. Production users are critical for motivating development of foundational libraries, providing real-world feedback about things like debugging and performance, and helping to flesh out bugs and stress everything to its limits. This thread here I hope can help get new production users in the pipeline!

  • Finally, I feel that we’re not quite where we want to be yet with the HTTP story. I think we’re still missing a prominent HTTP/2 library which means that common protocols like GRPC are difficult to use in Rust. Similarly on the HTTP/1 side of things I at least feel that we’ve come a long way but still have further to go before we can satisfy our roadmap goal.
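To make the async/await bullet above concrete, here’s a sketch of the difference. The syntax is hypothetical at the time of writing (nothing below compiles today), and `db`, `http`, `User`, and `Error` are made-up names:

```rust
// Today, with futures combinators: control flow is inverted into
// closures, and the return type must be named (or boxed away).
fn fetch_user(id: u32) -> Box<Future<Item = User, Error = Error>> {
    Box::new(db.find(id).and_then(move |row| {
        http.get(&row.avatar_url)
            .map(move |avatar| User { row: row, avatar: avatar })
    }))
}

// With async/await notation: the same logic reads like the
// synchronous code people already know how to write.
async fn fetch_user(id: u32) -> Result<User, Error> {
    let row = await!(db.find(id))?;
    let avatar = await!(http.get(&row.avatar_url))?;
    Ok(User { row: row, avatar: avatar })
}
```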

If we can get all of those really nailed down by the end of the year then I think we’ll be in fantastic shape for Rust-on-the-server. The work by no means stops here, and I’m sure that you’ve also got your own blockers and opinions! I’m quite eager to hear what you’re thinking!


I’m going to put my vote in for async/await or some other form of coroutine.

This isn’t just about expressiveness for futures. I’ve begun writing a graph database as a toy learning/resume-building project, the kind of thing where I’m going to see how far I can get and then stop. What I’m quickly realizing is that there are a ton of patterns where you could trivially use an iterator and be allocation-free, but where the expressive overhead of implementing an enum and hand-coding the trait isn’t worth it. Ergo I’m planning to take the allocation hits just to avoid the pain, and then deal with the headache of optimizing by hand-coding iterators later.
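For what it’s worth, here’s the kind of hand-rolled state machine being described: a small allocation-free iterator that interleaves two slices, sketched from scratch (the names are made up, not from any crate). With generators/yield this would be a few lines; by hand, every piece of state has to be spelled out in the struct:

```rust
/// Interleave two slices without allocating: [1,3] + [2,4] -> 1, 2, 3, 4.
struct Interleave<'a, T: 'a> {
    a: &'a [T],
    b: &'a [T],
    from_a: bool, // which side yields next -- the hand-written "program counter"
}

fn interleave<'a, T>(a: &'a [T], b: &'a [T]) -> Interleave<'a, T> {
    Interleave { a: a, b: b, from_a: true }
}

impl<'a, T> Iterator for Interleave<'a, T> {
    type Item = &'a T;

    fn next(&mut self) -> Option<&'a T> {
        let side = if self.from_a { &mut self.a } else { &mut self.b };
        let cur: &'a [T] = *side;
        if let Some((first, rest)) = cur.split_first() {
            *side = rest;
            self.from_a = !self.from_a;
            return Some(first);
        }
        // The preferred side is exhausted; drain the other one.
        let other = if self.from_a { &mut self.b } else { &mut self.a };
        let rem: &'a [T] = *other;
        let (first, rest) = rem.split_first()?;
        *other = rest;
        Some(first)
    }
}

fn main() {
    let merged: Vec<u32> = interleave(&[1, 3], &[2, 4, 5, 6]).cloned().collect();
    assert_eq!(merged, vec![1, 2, 3, 4, 5, 6]);
    println!("{:?}", merged);
}
```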

This wouldn’t be such a big gain except that Rust went out of its way to have a really good and expressive iterator API. In other languages with yield and/or async, you have to import modules and write really ugly expressions: the gain in performance comes with a huge loss in readability. See Python’s itertools module, for example, where expressions end up being written right to left, nesting parens like crazy. But in Rust I’m already using iterators everywhere I can anyway, and the readability cost goes in the other direction: actually writing the iterators themselves is extremely ugly.


@ag_dubs told me today that bringing HTTP primitives into the standard library would be a huge, helpful deal.


My main problem is company buy in and so many libraries using nightly / non-stable language features.


I’m not going to rant about the fact that I find it bad that some existing Rust web frameworks copy the API design of frameworks from high-level, dynamically typed languages without taking into account all the specificities of Rust. I don’t think people want to hear about this again.

However, one problem I think is going to be raised in the future (even though it’s not really “blocking”) is that people will write their servers in an asynchronous way, but could accidentally let some places in their code perform a synchronous operation through std::io (either directly or through a third-party library).

Since an asynchronous server usually uses a thread pool of N threads to dispatch tasks, performing a synchronous operation would effectively block one Nth of the server for the duration of the operation.
If some place in the code of your server happens to perform a synchronous operation, then it could become an easy target for a DDoS (and I guess this is a more significant problem than hashmap collisions without a siphasher).
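This effect is easy to demonstrate with a deliberately toy sketch (plain std threads standing in for an async executor; the pool and task names are made up): a two-worker pool where two blocking operations stall everything queued behind them.

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

type Task = Box<dyn FnOnce() + Send>;

// Measure how long a cheap task waits when every worker in a
// 2-thread pool is stuck inside a blocking 100ms operation.
fn cheap_task_delay() -> Duration {
    let (tx, rx) = mpsc::channel::<Task>();
    let rx = Arc::new(Mutex::new(rx));
    for _ in 0..2 {
        let rx = Arc::clone(&rx);
        thread::spawn(move || loop {
            let task = match rx.lock().unwrap().recv() {
                Ok(t) => t,
                Err(_) => return, // queue closed, worker exits
            };
            task();
        });
    }

    let start = Instant::now();
    // Two synchronous 100ms operations occupy both workers...
    for _ in 0..2 {
        tx.send(Box::new(|| thread::sleep(Duration::from_millis(100))))
            .unwrap();
    }
    // ...so this cheap task sits in the queue until a worker frees up.
    let (done_tx, done_rx) = mpsc::channel();
    tx.send(Box::new(move || done_tx.send(start.elapsed()).unwrap()))
        .unwrap();
    done_rx.recv().unwrap()
}

fn main() {
    let waited = cheap_task_delay();
    assert!(waited >= Duration::from_millis(100));
    println!("cheap task waited {:?}", waited);
}
```

With two blocking tasks in flight, the “instant” task can’t even start for 100ms; scale the sleep up to a slow database call and the DDoS concern above falls out directly.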

Of course this can be prevented by carefully reviewing your code and all your dependencies to make sure that they don’t use anything from std::io. But some compiler help would probably be welcome here.


I’d love to get more details here!

  • What do you consider “HTTP primitives”? Just basic types? Clients (like reqwest)? Full-blown server?
  • Why std in particular? Would it be enough to have a highly visible crate in the ecosystem? Show it in the cookbook? Or a crate in rust-lang?


I have a few Rust servers for web apps, so my simple answer is “nothing”, but I think we need more DB clients (for popular databases, not only Postgres) with handy APIs. Not ORMs, just clients. For MySQL we have just one client, maintained by a single developer. If there is a team dedicated to the “web” branch of Rust usage, then please consider creating some “semi-official” DB clients with easy-to-use APIs and without any “unwrap” or other panicking code. Without a database, a web server is useless.


This might be a bit low-level and is probably not really a blocker, but I noticed the lack of std::net::UdpSocket sendmmsg() + recvmmsg() support which is needed for high-throughput UDP streaming.
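For reference, here is what the per-datagram pattern looks like with std today: every packet costs a separate send/recv syscall, which is exactly what sendmmsg/recvmmsg would batch. This is a self-contained loopback sketch (the `roundtrip` helper is made up for illustration):

```rust
use std::net::UdpSocket;

// Round-trip a few datagrams over loopback, one syscall per packet.
fn roundtrip(count: u8) -> std::io::Result<Vec<u8>> {
    let server = UdpSocket::bind("127.0.0.1:0")?;
    let client = UdpSocket::bind("127.0.0.1:0")?;
    let addr = server.local_addr()?;

    for i in 0..count {
        // sendmmsg could hand the kernel all of these in one call;
        // std::net only exposes one send_to (one syscall) per datagram.
        client.send_to(&[i], addr)?;
    }

    let mut received = Vec::new();
    let mut buf = [0u8; 64];
    for _ in 0..count {
        // Likewise recvmmsg would drain a whole batch; recv_from is
        // strictly one datagram at a time.
        let (n, _) = server.recv_from(&mut buf)?;
        received.extend_from_slice(&buf[..n]);
    }
    Ok(received)
}

fn main() -> std::io::Result<()> {
    let mut got = roundtrip(4)?;
    got.sort();
    assert_eq!(got, vec![0, 1, 2, 3]);
    println!("round-tripped {} datagrams", got.len());
    Ok(())
}
```

At high packet rates the syscall-per-datagram overhead dominates, which is why batch variants matter for UDP streaming.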


This may be a bit outdated, because it’s been months since I’ve done any Rust dev (not because of this issue, I just haven’t had time).

The biggest hurdle I ran into was not-so-great support/documentation for SSL on Windows. My main computer runs Windows (because of gaming), and any time I wanted to use a “web aware” library, if I was lucky SSL was a feature I could turn off in Cargo, but that wasn’t always the case, nor was it ideal.

Part of it is definitely my unfamiliarity with DLLs on Windows, and it not being as easy as installing libssl or similar as one would do on Linux. So this could be considered out of scope. But I struggled to find good documentation on how I could have good SSL support on Windows, and most Rust web libs had some dependency on OpenSSL.

The second issue, which is more of a personal block, is the community (or at least many popular libraries) mostly building around unstable, but I know there is a lot of effort being put towards moving away from that.

Other than that, Rust is lovely.


Do you have some particular libraries in mind pinned to nightly? Always good to keep tabs on the blockers for moving-to-stable so we can help prioritize language features!


Just to clarify, was this SSL-on-the-client or SSL-on-the-server? In that did you want to connect to https servers or did you want to start a server that itself had an https-like interface?


Since nobody has mentioned impl Trait yet: Is there a way to use Futures in Rust stable without having to either “box all the things” or expand (and update!) the types by hand?

If not, the current situation is something like: “Wanna use Futures? Then you need to use nightly.”, and nightly is something that people using Rust on the server (for e.g. performance and reliability) might not want to use.

Coroutines/async/await improve ergonomics, but impl Trait does so as well.
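The trade-off is easiest to demonstrate with iterators, which hit the exact same unnameable-return-type problem as futures combinators (shown here with the modern `dyn` spelling for the boxed version; `impl Trait` was nightly-only at the time of this thread):

```rust
// Stable route: erase the unnameable adapter type behind a Box.
// Costs a heap allocation plus dynamic dispatch on every next() call.
fn evens_boxed(v: Vec<u32>) -> Box<dyn Iterator<Item = u32>> {
    Box::new(v.into_iter().filter(|n| n % 2 == 0))
}

// impl Trait route: same signature shape, no allocation, static dispatch.
fn evens(v: Vec<u32>) -> impl Iterator<Item = u32> {
    v.into_iter().filter(|n| n % 2 == 0)
}

fn main() {
    let input = vec![1, 2, 3, 4, 5, 6];
    assert_eq!(evens_boxed(input.clone()).collect::<Vec<_>>(), vec![2, 4, 6]);
    assert_eq!(evens(input).collect::<Vec<_>>(), vec![2, 4, 6]);
}
```

Swap `Iterator` for `Future` and the filter chain for `and_then`/`map` combinators, and this is precisely the “box all the things” situation described above.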


@chamakits re TLS, check out https://crates.io/crates/schannel for windows specifically, or https://crates.io/crates/native-tls for cross-platform networking.

This isn’t a blocker, but one pain point for some stuff I’ve been doing is around HTTP keep alive. On the client side, current Hyper’s connection pool is kind of painful to work with since it can’t check to see if connections in it are dead - servers will commonly kill connections after a minute or whatever of idling, which then causes the next request that uses that connection to immediately hit an IO error. On the server side, idle connections consume a worker thread, which means the keep alive timeout has to be way lower (I think it’s 1 second by default!) than it could/should be.

I believe async Hyper should solve these issues, so I’m excited for that landing!

Even with async Hyper, though, I will be looking for nice blocking abstractions on top of that on both the client and server side. Most of the server side work I do is pretty CPU bound, so a single threaded nonblocking setup doesn’t make a ton of sense. Async connection management on top of a blocking interface should be the best of both worlds for these kinds of workflows though!

Stable procedural macros will also be nice - I come from a Java background so a syntax extension to process a Jersey-like impl block is something that I’ve been meaning to do for a while:

pub struct FooResource { ... }

#[resource(path = "/api/foo")]
impl FooResource {
    #[post(path = "/ping")]
    fn ping(&self, #[query_param = "message"] message: Option<String>, body: PingBody) -> Result<PingResponse> { ... }
}

Is there a plan to add async/await as keywords in Rust? At least to keep them reserved until it is actually implemented.

Regarding what’s missing, I would say stable http (hyper and some kind of standard crates for http itself). I don’t know if a stable hyper would require a stable futures+tokio though. The other thing missing are API crates for all the third party services one can use in a webserver but with a good http client (reqwest?), it shouldn’t be that hard to build them.


Rust needs a good asynchronous http client, in version 1.0.
Based on that, we need some SDKs (AWS, Google Cloud…).
For example I’d love to use Rust for a “function”, subscribed to AWS SQS, and calling other web services, all asynchronous with back-pressure, maintaining a pool of open connections…

What is also missing are all drivers for different data stores.

Finally, I think that Rust could be a top player in the “Function As A Service” landscape given its very quick startup and minimal memory consumption. For that, we need FAAS providers or frameworks capable of using Rust functions.


As a fan of Rust, but not someone who writes it regularly, please take my comments lightly. The two most important things preventing me from writing more Rust and reaching for it as a primary tool to implement HTTP services are the progress of asynchronous IO (Futures/Promises or what-not) and an HTTP library that implements them.

I realize that this is covered with:

• Continuing development on asynchronous support in Rust with the futures crate and Tokio stack
• Having a high performance HTTP library like Hyper

However, this is often the set of criteria I check whenever I reach to evaluate rust for my next project time and time again. To answer some proposed questions:

• Are futures too confusing?
Nope. I have a very strong opinion that futures are the correct approach and are not confusing for a language with strong functional programming idioms. I also like the approach of building bare-minimum futures compared to Java 8’s CompletableFuture (which is good, but maybe too kitchen-sink for Rust’s current age).

• Is async/await syntax a blocker?
Nope, but it’s a darn rad nice-to-have. I believe async/await syntax is “good as hell”, partly because it can be used to show what is possible when asynchronous results are ubiquitously modeled with Futures.

Also, I would like to see Request, Response, and Future traits or objects eventually moved into the standard library. I realize that this request is lazy, as the follow-up questions are the more interesting and difficult ones to answer, but ideating on when and how this should be done would bring another level of legitimacy.


I think this combination of things would make Rust way better than any of the alternatives:

  • Futures support in the existing libraries (hyper, anterofit, diesel?)
  • Async/await
  • impl Trait in associated type position
  • Migration framework (like alembic for sqlalchemy)
  • Docker/kubernetes integration tools
  • Starter kits for various service types

I think a really cool goal would be to support something like this:

cargo starter-kit kubernetes-hello-world
git push <github repo>

It would automatically create an async hello-world service with a rest API, test suite, and a travis script for continuous deployment to any kubernetes cluster, specified via CI secrets.


Migration framework (like alembic for sqlalchemy)

Do you want only one that generates the migration automatically? I’ve been annoyed by alembic too many times to rely on automatic migrations. If writing SQL is ok, Diesel has one (https://github.com/diesel-rs/diesel/tree/master/diesel_cli) and I also made one: https://github.com/Keats/dbmigrate


I think a good and mature library for metrics (like Java’s Metrics library) is another thing that sometimes (at least in my work) is a must.


Metrics are key to building production systems. A metrics library, with some slick ways of instrumenting code, would be great.