Rust stability in 2019


The last point being ecosystem fragmentation: is there any initiative to move some external crates into the stdlib, like an HTTP stack, to avoid having each framework use its own types?



No. But there is an initiative to centralize types common to HTTP related crates here:

If you’re going to use Rust productively, you’ll have to find a way to get on board with how its ecosystem is structured, or otherwise write what you need in house. The standard library is not going to suddenly flip its script and start bundling a bunch of stuff. Maybe the future might include something like a Rust distribution with a set of blessed crates, but there are no plans for that currently, and the last time it was seriously proposed, the community rejected it pretty strenuously.



Rust is one of the friendliest languages there is. Unfortunately the term is too vague to mean the same thing to everyone. Rust is great at producing errors at compile time, thanks to static analysis. Many see this as slow progress, but in reality it leads to bug-free code in less time.

Release notes (and blogs) are the best overall place to see what is being added. You have to specify what “ambiguous / complex” things you’re referring to, otherwise you will never get anything constructive back.

The standard library is staying minimal. For small items there is generally a single-purpose crate. It is not necessarily a bad thing to have alternatives for larger libraries (e.g. GUI).



You can’t predict this reliably about anything.

You want two fundamentally incompatible things: code should work indefinitely, and unnecessary functionality should be removed over time. Rust chooses the former.



*Breathe in, breathe out.*

I have code more recent than that that doesn’t compile on recent compilers. Rust is stable to a point, but it has broken backward compatibility on multiple occasions in the past. Specifically, if something is unsound, or impairs the language’s ability to evolve further, it will probably get broken. I am personally familiar with this.

For (at a guess) 99% of code, Rust is stable, but I really wish people would stop insisting Rust doesn’t break back compat. All it does is set unreasonable expectations that will infuriate people writing that 1%.

I realise you’re mostly asking about language additions, but I wanted to address the backward compatibility aspect of this.

In practice, and in my experience, the stability problems mostly come from the ecosystem. Some authors take a… debatably liberal interpretation of semantic versioning. As a result, the more you “ride the wave” of compiler and library updates, the fewer problems you’re likely to see. If you want to minimise churn, then be prepared for problems.

As an example, I have a project where I can’t update my dependencies because someone in the tangled mess has broken semver, and when I update to “compatible” versions, the project no longer builds. Wheeee.

Compared to other environments, Rust is pretty good about this. Between rustup and lock files, you can fairly effectively isolate a project from outside changes, which at least lets you control when disruption happens.
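For instance, assuming rustup is installed, committing a `rust-toolchain` file at the repository root pins every build of that project to one compiler version, so toolchain updates elsewhere on the machine can’t surprise you:

```
1.31.0
```

The file’s entire contents are just the channel or version name; rustup picks it up automatically when you run `cargo` inside the project.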

Just take the claims that everything is rainbows and puppies with a decent amount of salt.

Oh, man, the introduction of ? was less than enjoyable. Overnight, I had multiple dependencies stop building on the version of the compiler I supported.

I don’t think this is unique to Rust, but I have noticed a tendency toward magpie-ing new features.

Just to reiterate what’s been said before: crates written for one edition can use and be used by code written for different editions. Newer editions can remove/simplify features. As a result, Rust has an (in my experience) unique way of culling itself over time without putting itself in a “Python 3” situation.

Now, if we could just add the language version to the manifest, it’d be even better… but on the whole, I’m fairly positive about this aspect.

Given the changes between the 2015 and 2018 editions… I would say that about 95-98% of the code should mean the same thing over that time span. Idioms will likely change more significantly… maybe 10-20% of the 5-year-old code would be written differently if written again from scratch, e.g. pattern ergonomics, the dyn keyword, the '_ lifetime, ?.

But we can’t be certain of that until, say, some time in 2020. :slight_smile:

Overall, I think things could definitely be better, but I don’t believe the situation is anywhere near as dire as you seem to fear.



Everyone here is very aware of what C++ has become, so this fate is considered and actively avoided.

The current state of fragmentation is:

  • try!() macro has been adopted as ? syntax. Some projects still use try!(), but both syntaxes work. There’s no need to change, but if you want, it can be converted automatically.

  • Box<Trait> has got clearer syntax as Box<dyn Trait>. Some code still uses the less obvious syntax. Both work. There’s no need to change, but you can with cargo fix --edition-idioms.

  • The experimental futures crate 0.1 will become obsolete when the Future type is added to the stdlib. There’s a plan to make an adapter for backwards compatibility. There will be some annoying churn as projects switch, but the big upside is that it will work with the async/await syntax that is soooo much easier to work with than the raw futures crate.

  • trim_left_matches was renamed to trim_start_matches. The old method still works, but shows a warning. You can silence the warning or use cargo fix to rename.

  • the module system changed, with the addition of crate:: to paths. Modules were super confusing to new users and paths seemed inconsistent. Old code still works, but can be auto-converted with cargo fix --edition.

There were other changes, like making ref de facto optional and dropping the useless Error::description(), but these changes essentially removed things from the language.

The additions on the horizon are:

  • async/await, which is relatively big, but also very desirable for networking code. Currently when you use async code you have to fight the borrow checker a lot, since the use of references and local variables is limited. This change will allow networking code to look more like it does in Go or JS.

  • const generics. Currently any generic types depending on numbers (like array length or dimensions of a matrix) are either impossible, overcomplicated or buggy. So that’s more a language-level bug fix than a new feature.

And that’s all I can think of for 2015-2019. We’ll see how the switch to new futures will work out, but for other features it comes down to running a tool once a year, and even that is optional. You can do nothing if you don’t care about newer features.


TWiR quote of the week

Our semver policy is explicitly defined in RFC 1122. Call it what you will, but we have explicitly said that soundness fixes, inference breakage, and some other bugs are things we are allowed to do.

On Rust 2018 you’ll need to write r#try!(..). The try! macro is totally deprecated.

If you can call introducing const A: B a bug fix then I can claim that any new language change is a bugfix. I don’t think it’s tenable to call const generics a bug fix.



I don’t think Daniel was trying to imply otherwise - it’s still worth stating that API-breaking soundness issues are still breaking code today, regardless of the reason.



Sure, “bug in the language” hinges on a very subjective judgement of whether something is wrong or merely missing/different by design.
But I wanted to highlight that adding const generics would make things like derive(Debug) just work for any array — as one would expect. Compared to that, a derive that works only up to an arbitrary limit of 32 elements doesn’t look like a good feature or a more elegant design.



I didn’t imply Daniel did; I was providing context :wink: What constitutes a major breaking change according to semver is not necessarily the same as “broke my code”; in our case, it is defined by several official policy documents.

If you said …, or even … were bug fixes, I would agree. Const generics, while adding necessary power and fixing inconsistencies, require massive changes to the rustc codebase.



Thank you all for these kind responses and different points of views.

I think the best thing I can do now is to give it a try for a few months.

If any maintainers hear me: as a newcomer, the thing I miss the most coming from Go is a mature and stable ecosystem with a rich stdlib (HTTP, only ~10 checks on stable for rustfmt, testing framework not available on stable…), where everyone builds on a common foundation instead of ‘reinventing the wheel’ every time.

Examples taken from the past and the present:

  • should I error_chain or should I failure ?
  • should I serde or should I rustc-serialize ?
  • should I time or should I chrono ?
  • should I url::Url or should I http::uri::Uri ?

Enterprises need stability and guarantees, and Rust should provide them to win those users.

I now understand better the role of editions, but despite what has been said, I think that having nightly and a too-small stdlib creates real fragmentation among the community and across projects.

That being said, is it practical, in an application backend, to have multiple services, some targeting stable and some nightly, or does that seem like too much of a headache?



Yes! In my opinion and experience, this is an important point and a point that is often glossed over.

Crate authors often expect you to use a recent version of the compiler, and this has again and again meant that my code “randomly” failed to compile. Random in the sense that it worked in CI yesterday, but today it fails despite there being no changes to my code and no changes to the compiler.

So where was the change? It was of course in the dependencies, which no longer support the version of the compiler I happen to use. Cargo can use a lock file, yes, but libraries are not expected to commit a lock file (last I checked).

I know people are working on this, so it’ll probably become better over time. I also hope people will begin treating a change to the minimum supported rustc version as something that requires a major version bump. That way, a dependency on foobar = "1.2.3" won’t suddenly install foobar version 2.0.0, which might require a newer version of the compiler.

I believe the situation here is similar to the above: what you say is absolutely correct, but only if I use a recent compiler. If I introduce a dependency on a Rust 2018 edition crate, then I also introduce a dependency on a compiler that is new enough to know about that edition.

So while I can keep using the Rust 2015 syntax, I must now use a recent version of the compiler. People never seem to mention this since it’s super easy to install Rust using rustup.

However, while rustup is fine on a developer laptop, I doubt that it’ll fly in a corporate environment where tools are centrally managed and audited. There it becomes important what RHEL is shipping, and as such, it ought to be normal for users to use a compiler that is a few years old. Rust is still new and young, and the mentality I see is that people are still enjoying that freedom to continuously push the entire ecosystem towards the latest compiler releases.

I hope this will settle down with Rust 2018: crates can start depending on Rust 1.31.0 which introduced the 2018 edition. If they do that and avoid depending on anything later, then we’ll finally have the ecosystem stability I hope for. Later, crates can then decide to jump to Rust 2020 or whatever the next edition is called.



If you update your code, you will (of course) also start requiring Rust 1.30.0 where trim_start_matches was introduced in the first place. This now means that all downstream dependencies must also upgrade their compiler. Again, this is probably fine when you know all the code that depends on you, or when everybody is using rustup, but I don’t think it’s a good long-term strategy.



Aside: that is a major version bump. The major version number is the first non-zero component, then minor, then patch. The only way to stop what you’re describing is a major version bump, not a minor one. But people generally don’t like major bumps, because they cause friction (see the fallout from the last time libc bumped its major version).
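For reference, a sketch of how Cargo interprets the default (caret) version requirements; the crate names are made up:

```toml
[dependencies]
foobar = "1.2.3"  # allows >=1.2.3, <2.0.0 — a 2.0.0 release is never pulled in
baz    = "0.2.3"  # first non-zero component acts as "major": >=0.2.3, <0.3.0
```

This is why, for a 0.x crate, bumping 0.2.x to 0.3.0 is effectively a major bump as far as dependency resolution is concerned.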

This is why Cargo desperately needs to take the language/compiler version into account when resolving dependencies.



Thanks, I’ve corrected my post!



Yes, hopefully the Rust team will make a decision on this RFC soon:

Considering const generics and async/await I highly doubt it will happen.



Requiring an always-up-to-date compiler, as fresh as 6 weeks, is shocking compared to C projects that support 20-year-old compilers. Maybe that makes Rust appear unstable?

It’s a dramatically different approach, but IMHO it works well enough. Upgrades are easy and backwards-compatible enough that it’s not unreasonable to ask users to run rustup update once in a while.



Yes, I think it does, and I’m not even used to working in C :smiley:

Instead I used Python a lot, and there projects would also declare that they’re compatible with “Python 2.7” or “Python 3.4+” or something similar. The difference is that Python releases come about once a year, not every six weeks.

I plan to update my own crates to the Rust 2018 edition at some point and then hopefully be done with it until the next edition comes out. That should give people a solid foundation in case they want to depend on my crates.



I’m not sure what your parenthetical means. Are those items you think are lacking in Rust? What are “checks on stable”, and what testing framework isn’t available on the stable channel? (The standard Rust test functionality has been stable since 1.0, I believe.)



From the opposite point of view, “stability” is just a fancy word for “stagnation”. A stable product cannot evolve. The example given of C projects that support 20-year-old compilers is perfect, because it invariably involves several layers of preprocessor directives and boilerplate that have nothing to do with the application’s business logic.

Similarly, Python 2 is stagnant (or even EoL, depending on your perspective). Projects written for Python 3 typically contain their own set of exception handlers and conditions to remain backward-compatible with Python 2. Or they depend on a library like six that implements an abstraction layer over the standard library.

What it really comes down to is “dependency management is hard.” Perhaps the cleanest approach to avoiding the problem altogether is to depend on fewer crates; do you really need to depend on a crate that can add whitespace to the head of a string? This is of course an extreme example, but it’s a hilarious one. It is not always possible to write useful code without depending on something.

Side note: the core idea behind culling dependencies is practical for many other reasons, including reducing code surface area and security attack vectors, minimizing compile times, download times, executable size, and memory use, and overall just keeping it simple, stupid.