No, I meant that I need to preload a lot of data and then reload it occasionally. Doing it several times in the case of multiple processes would be a terrible option (and because of the occasional need to reload, load-then-fork wouldn’t solve the problem either). In the case of Node, its limitations are a reason people abandon it entirely when their needs go beyond its scope and choose Erlang/Go/Java.
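To make the preload/reload point concrete, here is a minimal sketch of how this looks with threads in one process: a single copy of the data behind an `Arc<RwLock<_>>`, read by many threads and swapped in place when a reload is needed. The `load_data` function is hypothetical, standing in for whatever expensive load the application actually does.

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Hypothetical stand-in for an expensive data load.
fn load_data(version: u32) -> Vec<String> {
    vec![format!("dataset v{version}")]
}

fn main() {
    // One copy of the data, shared by all threads in the process.
    let data = Arc::new(RwLock::new(load_data(1)));

    // Worker threads read the shared data without per-process duplication.
    let readers: Vec<_> = (0..4)
        .map(|_| {
            let data = Arc::clone(&data);
            thread::spawn(move || data.read().unwrap().len())
        })
        .collect();
    for r in readers {
        assert_eq!(r.join().unwrap(), 1);
    }

    // Occasional reload: swap the contents once, visible to every thread.
    *data.write().unwrap() = load_data(2);
    assert_eq!(data.read().unwrap()[0], "dataset v2");
    println!("reloaded: {}", data.read().unwrap()[0]);
}
```

With one process per core you would instead pay for the load (and every reload) once per process, which is exactly the cost being objected to here.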
It is the only option on purpose for a lot of reasons.
Many of them simply come down to “using parallel programming in C/C++ is hard even for experienced developers, so we can’t do that even if we wanted to, and we have spent the last decade working around it”. When you have a language that does not have such a limitation, nobody throws it away because “it’s such a great idea to go back to a single thread”, as you say. There are use cases where it may be a good idea (like complex interaction with the OS that may fail or block the whole process), but I do not believe that the fact that we have many single-threaded applications is proof that it is the common case.
To put it another way - there are many benefits you can get from having proper concurrency and parallelism within a single process, and there is no reason not to reach for them. One process per core comes with many limitations. It was the only available model for some time, so people designed solutions around it, but getting rid of it opens a lot of new doors. If you really want to, there is nothing stopping you from programming the old way at the thread level (share nothing) and still getting some of the benefits, but I don’t really see the point of limiting myself to only that way.
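As a rough illustration of that “share nothing at the thread level” style, here is a sketch using only the standard library: each worker owns its data exclusively and communicates results over a channel, much like isolated processes would, but inside one process. The workload here is made up for the example.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    let workers: Vec<_> = (0..4)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                // Each worker owns `local` exclusively; nothing is shared.
                let local: Vec<u64> = (0..10).map(|n| n + id).collect();
                tx.send(local.iter().sum::<u64>()).unwrap();
            })
        })
        .collect();
    drop(tx); // drop the original sender so `rx.iter()` ends cleanly

    // Results arrive by message passing only, never shared memory.
    let total: u64 = rx.iter().sum();
    for w in workers {
        w.join().unwrap();
    }
    println!("total = {total}"); // prints "total = 240"
}
```

The point is that nothing in the language forces this model on you: it is one option among several, chosen per component rather than imposed process-wide.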
I don’t believe anybody is talking about replacing Futures or Rust’s current threading model.
Good, that was my understanding of your post.
Just creating a new super easy to use abstraction that will cover 95% of use cases at a reasonable level of performance.
I appreciate the need for that, but … Rust is not a language that aims to achieve that.
This can - and should - be achieved at an application or framework level, but not always at the library level. Had the language had green threads of some sort, this would make async trivial for 95% of cases as you expect, but it doesn’t, and all the other options are more complicated.
Also keep in mind that writing basic internet protocols is complicated anyway. Once they are implemented, using them is easy. Also, a lot of the complaints in this blog post come down to poor documentation that doesn’t explain why something is done the way it is (like the lack of explanation for having a service per connection - which I believe comes down to the fact that the service holds connection state). This certainly can and will be improved.
For example I could easily see applications being written like such
This is already done. If you are merely using (as opposed to extending or integrating with) a framework like Rocket, all of that already happens automatically. There are many Rust frameworks that manage thread pools for you and give you all those benefits through easy-to-understand abstractions, without you needing to know how they work.
Node.js didn’t rise for being the fastest. It rose for it being super easy to write applications that were fast enough.
I am not negating the need to be easy to use, though if that is your top priority, Rust is not the best choice. It is a secondary goal at best here.