I know their documentation is excellent, but I am more a learning-through-doing kind of guy than a reader, and discovering stuff along the way works best for me. So, what are the things I must know about before jumping in? I am just looking for Rocket-specific things: what are the gotchas, if any, or is anything different about Rocket compared to popular Node frameworks like Koa or Express, or Python’s Flask?
I’d also look into Gotham, which unlike Rocket, uses Tokio and async Hyper. Better performance, and actually works on stable Rust.
Gotham looks cool, but how much does the synchronous nature of Rocket affect performance?
It’s probably not going to be noticeable unless your service has a fairly large number of users. Most “traditional” servers (like Apache) are synchronous and good enough for 95% of applications, so using async is probably overkill.
The big difference between Rocket and other frameworks is that you have full type safety. Instead of getting some `Request` object and then having to pull all the stringly-typed information out of it, potentially dealing with malformed input or other errors, you know that if the information gets into your request handler it is guaranteed to be well formed.

This may sound like a small thing, but it makes your life a lot easier because you don’t have to deal with checking all the input manually. In Go I used to spend about two-thirds of a request handler just checking that the inputs were well formed and dealing with error cases, which gets pretty monotonous. With Rocket, your entire request handler can be actual application logic, which makes you feel more productive. Error handling is also really ergonomic: if something may fail (e.g. the user requests a database record which doesn’t exist) you just return a `Result` and can use `?` to exit early if you encounter an error.
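To make the `Result` + `?` point concrete without pulling in the framework itself, here’s a framework-free sketch of the same idea. The `find_user` lookup and the handler signature are made up for illustration; the point is that parsing stringly input into a typed value (which Rocket does for you before your handler runs) and missing-record errors both collapse into a single `?` per step:

```rust
// Hypothetical record lookup: Err when the id is unknown, mirroring
// "the user requests a database record which doesn't exist".
fn find_user(id: u32) -> Result<String, String> {
    match id {
        1 => Ok("alice".to_string()),
        _ => Err(format!("no user with id {}", id)),
    }
}

// A hand-rolled stand-in for what Rocket's typed routes give you:
// turn stringly-typed input into a typed value, then run the logic.
// `?` exits early on the first error in either step.
fn handler(raw_id: &str) -> Result<String, String> {
    let id: u32 = raw_id.parse().map_err(|e| format!("bad id: {}", e))?;
    let name = find_user(id)?;
    Ok(format!("hello, {}", name))
}

fn main() {
    assert_eq!(handler("1"), Ok("hello, alice".to_string()));
    assert!(handler("not-a-number").is_err()); // malformed input rejected up front
    assert!(handler("42").is_err());           // missing record, early return via `?`
}
```

In actual Rocket the parse step disappears entirely: the route parameter arrives already typed, and your handler body is just the last two lines.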
Can you give me some examples of cases where an asynchronous model would fare better?
This is a well-covered topic in general, and it is just as applicable to Rust as to other languages.
The gist is that an async model allows for lower resource utilization when servicing many connections. In particular, the memory footprint is much lower than with a thread-per-request model. Also, at some point the kernel scheduler will bog down with too many threads in its run queues. You can probably get pretty far using thread-per-request on modern kernels, but you’ll hit a ceiling where things will thrash.
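For anyone who hasn’t seen thread-per-request spelled out, here it is in miniature using only the standard library. This is a toy echo round-trip, not a real server: each accepted connection gets its own OS thread, and each thread carries its own stack (typically megabytes of address space), which is exactly the per-connection cost an async model avoids:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// One connection, one thread. A real thread-per-request server would
// call `thread::spawn` in an accept loop, paying a stack per client.
fn echo_roundtrip(msg: &[u8]) -> Vec<u8> {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    let server = thread::spawn(move || {
        let (mut stream, _) = listener.accept().unwrap();
        let mut buf = [0u8; 64];
        let n = stream.read(&mut buf).unwrap();
        stream.write_all(&buf[..n]).unwrap(); // echo the bytes back
    });

    let mut client = TcpStream::connect(addr).unwrap();
    client.write_all(msg).unwrap();
    let mut reply = vec![0u8; msg.len()];
    client.read_exact(&mut reply).unwrap();
    server.join().unwrap();
    reply
}

fn main() {
    assert_eq!(echo_roundtrip(b"ping"), b"ping");
}
```

With a few hundred connections this is fine; with tens of thousands, the stacks and the scheduler overhead are what bite, and that’s where an evented/async design wins.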
Would there be much of a difference between async/sync if it was running behind a reverse proxy (eg nginx)?
You mean if the load against your server is effectively reduced by having a proxy in front? I suppose that would help if the proxy is able to shed enough load off your server that the number of requests it’s servicing (and thus threads in use) doesn’t bring it over the tipping point.
If you’re not using an asynchronous model, and you’re exposed to the web, it’s much easier to get yourself DDOS’d out of business. It takes a lot more effort to cripple a server with asynchronous connections, and latency can be better.
I think both are, for all intents and purposes, just as easily DDOS’able if unmitigated/not designed right. I wouldn’t use this as a “selling point” for one or the other.
Latency can actually be better with a thread-per-request model, but it all depends on circumstances.
Having said that, http://www.aosabook.org/en/nginx.html is a good overview of the evented model as used by nginx (it somewhat “popularized” this approach when it came out, at least in the web server/proxy domain).
Another good resource on the topic is http://www.kegel.com/c10k.html.