What does Rust need today for server workloads?

It's not a method on Tokio's TcpStream - you just select on the future returned by connect and a Timeout.

There isn't really such a thing as a socket-level connect timeout.

From a library user's point of view, this is an irrelevant detail. My point is that a network library is missing an essential feature that really should be there.

I mean the OS behavior, not the Tokio one. I want to control what happens before the packet reaches the application. That is controlled by ioctl() and setsockopt(), and it's very important.

1 Like

We are already using some Rust in production.

The main issues we face are with builds:

  1. they are painfully slow
  2. we have no official ccache equivalent
  3. offline build support is missing
  4. we have no official CMake support; there are some things cooked up here and there
  5. we use buildroot, and updating to the next Rust release is a bit of a challenge; there have been changes in how to hook it up on almost every release

But the biggest problem is management buy-in: we have some new (isolated) pieces in production, but management is still hesitant about moving away from C to Rust.
It's very hard to communicate to non-technical managers the benefit of moving production to Rust, regardless of how much manpower is wasted tracking C bugs that Rust makes impossible in the first place.
Their counterpoint is that the few people who took a look at it (outside those pushing for Rust) said it was too complicated, or spent too long getting something simple done (think of people coding C for 20+ years). Management extrapolates from that and sees production halting; they would much rather spend the time fixing bugs they know how to characterize than pay for the Rust learning curve.

5 Likes

Probably not true. Note that Mozilla ships Firefox for Android using Rust, and all Android targets are non-Tier-1 for Rust purposes. (Firefox for Android ships for ARMv7 and x86 on the release channel. Firefox Nightly exists for Aarch64 also.)

On the other hand, non-Tier-1 builds aren't in a perfect condition. It's rather worrying that I don't even use them actively but have been able to find bugs that should have been discovered by the first person trying things out (Exhibit A, Exhibit B, Exhibit C).

It seems to me that if you want non-Tier-1 platforms to work, the best thing to do is to use them actively and report bugs instead of waiting for Someone Else to do it.

It seems to me that there's an inherent contradiction in seeking to do stuff with cutting-edge tools while limiting the tools to what ships in a distro that by policy sticks to old software. I don't think the Rust community can resolve that inherent contradiction.

I think "such as" is the key. Debian isn't the only distro. If you want in-distro Rust support without rustup, perhaps Fedora would be a better fit for your needs at this time. As far as non-Tier-1 CPU architecture support goes, Fedora deserves credit for getting some of those architectures supported (at some tier) by the Rust project.

If the issue is that the software you write has to be buildable from source by users whose tools are constrained to what shipped on Debian stable, it's too late to help out with the upcoming Debian stable, since the deadline is today.

It's worth noting that Debian's policy is to update Firefox ESR on the Firefox ESR cycle instead of the Debian stable cycle, and the next Firefox ESR after the current one won't build without rustc and cargo. Therefore, there's a chance of rustc and cargo packages meant only for building Firefox appearing in Debian stable when the current ESR goes out of support.

It might be good for the Rust team to make the effort to designate fixes that are known to be security fixes so that a downstream practicing a policy of sticking to an old version with cherry-picked security fixes can more easily locate the security fixes. I think it wouldn't be a good use of limited resources for the Rust team to do security updates for old versions of Rust when

  1. Rust tries hard not to break old code so as to make regular updates acceptable (I know this isn't perfect; I've had my code broken by a rustc update once), and
  2. except for Firefox ESR and Chromium where the reality of the size of the backporting task outweighs Debian's normal policy, Debian doesn't ship upstream security updates as-is anyway but cherry-picks changes onto whatever base Debian stable shipped.
2 Likes

Why are you unhappy with tokio-postgres, mysql_async and pleingres?

1 Like

I'm using nixops and wrote https://nest.pijul.com/pmeunier/nix-rust (an alternative to Cargo, running on top of Nix). I can share build products between builds and between projects, and still get reproducible builds. Cargo and Rust get updated automatically without any need for Rustup.

6 Likes

I believe "SQL server" here specifically refers to Microsoft SQL Server, not SQL databases in general.

I wrote up a connect_timeout implementation here: https://github.com/alexcrichton/socket2-rs/pull/1

Example usage: https://github.com/sfackler/hyper-timeout-connector

7 Likes

FWIW, using raw mio doesn't have those overheads. I've done pretty extensive benchmarking on this, and you can get speed equivalent to writing the same epoll loop in C.

2 Likes

I plan on starting work soon again on my actor library, which will provide typed actors and fault tolerance (to a point). If you're interested in contributing (goes for anyone else), feel free to hit up any of the listed issues and ask questions or where to start:
https://github.com/insanitybit/derive_aktor

3 Likes

Yeah, like sfackler says, I'm specifically referring to Microsoft SQL Server. That's why I used the caps. :stuck_out_tongue:

2 Likes

Two things I hit recently are making my life difficult: there's no AMQP 1.0 library, and the STOMP library is broken. I'm having to shift some design choices because of this.

1 Like

The most important things to me for servers are async/await and database drivers. I think async/await is well underway, so I will focus on my issues with database drivers.

I specifically work with Postgres, using the postgres library. I think that 90% of my development time is spent debugging database issues at runtime. I would really like a way for my database actions to be checked at compile time. I've looked at diesel, but its documentation is very lacking, and I don't think its use of migrations translates well to Rust.

2 Likes

I am evaluating the feasibility of using Rust for mission- (life-) critical, large-scale development of web API services.

Here are the main issues I discovered so far:

I have not touched the following during evaluation, but it is going to be important:

  • database drivers with an async (futures-based) API for remote databases (i.e. client-server architecture)
  • a high-performance SQLite database driver WITHOUT an async API
  • a Kafka client

The following is going to be needed eventually, but less important:

  • Raft consensus protocol integration
  • Docker API clients

Some of that we may contribute to if there is enough buy-in for Rust from all involved parties, but a certain threshold of "readiness of Rust" is required.

Hope this helps.

3 Likes

Futures and async programming are not difficult as concepts. It is quite easy in C# with the Tasks framework (even without using async/await), and it is super easy in Scala with its fundamentally well-designed built-in Futures. Issues like this and this make it hard.

I disagree that it is critical. The above-mentioned issues with futures are quite a bit more important, in my opinion. Future combinators like map and flatMap (and_then in Rust) are quite nice and sometimes make code easier to navigate and understand than async/await. People who are used to Futures frameworks with well-designed combinators and features in other languages like Scala, C#, and JavaScript/TypeScript should find a well-designed, similarly behaving implementation of futures very easy to grasp.

My suggestion is to learn from and copy the design of Futures in Scala. It is the best I have seen across various languages (Scala, C#, TypeScript, Java, and C++). Compared with Scala, Rust futures lack (apart from a usable type system and exception handling) a blocking context, for example. BTW, is it worth creating a ticket for the blocking-context feature?

5 Likes

I completely agree with this observation. The reason async is easier to understand in C# is not that it has async/await keywords; it is that the underlying Tasks model is easy to grasp. The same needs to be true for Rust before Tokio can be called easy to learn.

In the case of Tokio, the root cause of the learning difficulties lies in the core abstraction of the future. From my experience, once you understand futures, the rest of Tokio presents no challenge at all.

And the exact problem with futures is the implicit, thread-local-based design. It makes futures hard to understand for two reasons:

  1. the implicit aspect is not very apparent to the new user, and
  2. the implicitness forces an additional concept of Tasks into the model that wouldn't otherwise be required.

These two complications make the model difficult to get.

One may say better documentation can help with 1., but we shouldn't count on that. Documentation can only be a complement, never a substitute, for a clear, transparent design. I feel we just need to address the design of futures itself.

There's already an issue open around this which proposes passing the notification receiver as an explicit argument to future::poll(). Maybe as an alternative we could consider a separate method for notification registration, say, future::register(notification_handle).

These may not be the best solutions, but I feel we do need a design change that makes notifications explicit and thus makes the behaviour of futures super clear to anyone looking at them. A change like that would also remove the need to learn the nitty-gritty of Tasks before you can learn about futures (which is what the tokio.rs website currently spends a lot of time explaining). It would make futures easier to use as well, because one could then call them from any thread instead of a special thread. It may even simplify APIs built on top of them.

Just my two cents.

3 Likes

As I suggested, it is worth looking into the Scala design of futures. In Scala, execution of a future is done by an executor thread pool (which can be just the current thread, depending on the implementation chosen). And the reference to the executor thread pool is passed either explicitly into future combinator functions OR implicitly (in most cases) by importing the definition of the needed executor into the current visibility scope.

Also, as far as I understand, Rust is currently restrictive about what data is passed between threads and how it is synchronized. That is for a good reason: to prevent data races and the use of dangling pointers. What could be done better is that Rust could support futures natively as a built-in feature. Being futures-aware, Rust could treat the encapsulated closures as tasks executed in a model similar to the Actor + MPI model (like Akka in Scala or C#). This would allow relaxing the constraints currently placed by static analysis on data used across multiple threads, and improve data-race checking, optimization, scheduling, etc., without requiring programmers to worry about data-passing and synchronization mechanisms. I'm not sure it is a viable idea, but I think it could unlock a whole lot of new opportunities (to improve and simplify async programming) if the compiler were futures-aware.

While I can see what you mean, becoming "built-in" is currently an explicit non-goal of the futures crate, and will be for a while, maybe even forever.
We are only just learning all the cool things that the Rust Type System allows us to do, and it will be years before we, as a community, will have figured out the "best" design.
Even then, the explicit rust philosophy is to keep as much as possible outside the std lib.

Becoming stabilised in libstd effectively freezes code in place, because the strong backward-compatibility promise we wish to keep for decades forbids breaking changes.
Frozen code cannot improve anymore, so it can no longer benefit from new ideas and better designs.

Thanks to the (built-in) Send and Sync traits, the type system/compiler is already aware of the most important synchronization building blocks.
Other languages/compilers don't have this basic understanding of the building blocks, so they can only understand concurrency if we "build in" the next higher level, i.e. Tasks, Promises, Executors, etc.
(E.g. in Java, thread safety is only mentioned in the Javadoc ("this method is thread safe"), if you are lucky, but the JVM cannot read that Javadoc.)

Because Rust gives us the basic building blocks Send and Sync, libraries can "teach" the compiler about new concurrency models themselves :open_mouth: :smiley: (including ones we haven't invented yet!)

We don't want to tie ourselves to one implementation of futures that we think is "good enough", we want the API to always be the very best it can possibly be, and preferably to have multiple options, so we can always pick the best one.

Decoupling the futures crate from libstd allows us to have a "futures 4.0" in Rust 1.0, whereas otherwise we would have to do a big version bump for the entire language just because one "tiny" built-in (futures, or IO, or maths, or...) had an idea.
Keeping libstd tiny keeps the core language light as a speedboat, instead of big-and-powerful-but-unwieldy like a battleship (e.g. "batteries included" python or java).

3 Likes

My biggest problem with server workloads is the lack of stable libraries for servers of the most common protocols. There's still no 1.0 HTTP library, none for WebSockets, and the ecosystem is pretty barren otherwise.

I think the biggest problem there is a feeling of perfectionism. But it would be perfectly fine if we just had workable, documented libraries in a 1.0 state, that would later probably see a 2.0 or get replaced by something better.

I'm not even talking frameworks here.

7 Likes