I currently have an ongoing discussion on a crate about adding an MSRV (rust-version in Cargo.toml) to the project. I am in favor of doing so, but unsure of the criteria/considerations I should use in determining it. I am aware of tools such as cargo-msrv, which I have used, but doing so actually led me to this question.
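For reference, the field in question is a single line in the [package] table. A minimal sketch (the crate name and version values here are placeholders):

```toml
# Cargo.toml (sketch; the rust-version value is just a placeholder)
[package]
name = "my-crate"
version = "0.1.0"
edition = "2021"
rust-version = "1.63"   # the MSRV under discussion
```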
In my local checkout of the project I had an outdated Cargo.lock; running cargo msrv find yielded an MSRV of 1.63.0. The PR contributor had done the same and came up with 1.67.1. On investigation, when I ran cargo update (with no change to the Cargo.toml dependency specification) and repeated cargo msrv, I agreed with the PR author at 1.67.1. A dependency (time in this case) had a release that was semver-compatible with my Cargo.toml but raised the MSRV, so cargo msrv transitively reported different results depending on which (semver-compatible) version of time was in the local Cargo.lock. By my understanding, there is no common consensus on whether MSRV should be a factor in semver compatibility, so this behavior seems acceptable.
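To illustrate the mechanism (the version requirement below is hypothetical, but the shape matches): the manifest only constrains the semver range, and the lockfile decides which release inside that range actually gets built, and therefore which MSRV applies.

```toml
# Cargo.toml (hypothetical requirement)
[dependencies]
time = "0.3"   # any semver-compatible 0.3.x release satisfies this

# The stale Cargo.lock had pinned an earlier 0.3.x release that still built
# on 1.63.0; after `cargo update`, the lock picked a newer 0.3.x that needs
# 1.67.1, so `cargo msrv find` reported a different answer.
```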
However, from the library crate's perspective, they may set an MSRV that ends up being violated when one of their dependencies releases a semver-compatible update that raises its own MSRV. I would like to specify the "truly minimal" MSRV here (1.63.0), in the sense that there exists a set of semver-compatible dependency versions that satisfies it. But then it seems the onus falls on the end consumer to pin the precise versions of those dependencies so the library builds under their cargo toolchain? AFAICT, cargo will not (by default?) try to find the minimal semver-compatible version of dependencies while also considering MSRV?
So should I be trying to find this "truly minimal" MSRV (1.63.0 in this case) and ignore the fact that dependency updates could make my stated MSRV "outdated"? Does tooling provide more support here than I've discovered?
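One tooling note, hedged: newer Cargo releases (1.84+, if I recall correctly) ship an MSRV-aware dependency resolver, which can be opted into via configuration. A sketch, assuming a sufficiently recent toolchain; check the Cargo book for the exact key and stabilization status:

```toml
# .cargo/config.toml (sketch, assuming Cargo 1.84+)
[resolver]
# "fallback": when a dependency's newest semver-compatible release requires
# a newer Rust than the declared rust-version, prefer an older release that
# still satisfies it instead.
incompatible-rust-versions = "fallback"
```

With that in place, cargo update should tend to pick dependency versions that still build on the declared MSRV rather than always the newest semver-compatible release.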
You could put Cargo.lock under version control; then the MSRV would be whatever matches that Cargo.lock. See FAQ - The Cargo Book for further discussion on the topic.
Also, consider that one approach to MSRV is to pick it fairly arbitrarily ASAP, rather than spending effort on determining the absolute minimum. Then, as time passes and the Rust version you picked gets older, your “number of Rust versions supported” automatically gets bigger. So precision only matters in the short term. And you can always accept a PR lowering your MSRV if someone does the research into how feasible that is.
I don't think having a high MSRV is a big deal. I don't get why people expect to be able to use an ancient compiler with new libraries or binaries, and no one has been able to explain that in a satisfactory way. It seems to all boil down to BECAUSE!! in the end, which is not an argument on technical merits.
Some leeway makes sense, sure (because the world is messy and complicated). So I would go for N-2 as the policy. But it isn't something I worry overly much about.
I generally agree, although from time to time I do discover exceptions.
For example, the WASM ABI changed around Rust 1.76, and there's an ongoing migration to new WASI targets and a new bindgen. I know of projects where this is complicating deployment (mixing packages built with different Rust versions causes crashes), and they're going to need more time to update everything.
The ecosystem seems to reliably support "latest - 3 releases", with rare exceptions where some crate immediately jumps onto the latest Rust (this does annoy people).
In the early days after 1.0, Rust regularly landed very important improvements. I think these days they're mainly nice-to-haves, so aiming for a slightly older version isn't a major inconvenience.
This is my view. time has an N-2 policy for public changes and N-4 for internal ones. In reality, I haven't bumped the MSRV for a significant period of time because it's simply unnecessary; the new features don't carry their weight. So I've been keeping it at 1.65 for a long time, though I do intend to bump it in Q1 of next year to take advantage of things like core::error::Error, inline const, and const floating-point arithmetic.
Not everybody uses the compiler from rustup. When it comes to using Rust in the Linux kernel, being able to say "you can use the rustc from your distro package manager" is an incredibly big deal that lots of the C kernel folks care a lot about.
It can also be a problem in big companies. They will generally have their own toolchain, which means they'll often be a bit behind. Sometimes there's something that makes upgrading rustc a lot of work. For example, if you want your clang and rustc to use the same LLVM version (required for cross-language LTO), that can involve complicated issues. Or maybe you carry a bugfix in the compiler that needs to be reworked because the internals changed. An N-2 policy is nothing; you'll fall two releases behind in almost no time.
That is fine, but then why do they expect to use newer versions of dependencies, rather than period-accurate dependencies? I have yet to hear anything other than BECAUSE for this.
I have nothing against staying on an old version; it is mixing and matching and expecting things to work that I do not understand.
Sure, that is their personal reason. What I'm trying to say is: why would anyone expect that to work (past a fairly short time horizon)?
Given that most open source is done as hobby projects for fun in people's spare time, it doesn't seem reasonable to impose an expectation of support for systems that lag behind. (If you are being paid for it the calculus changes, but apart from a few high-profile projects most open source isn't paid for.)
Remember, you are given open source for free. You get the code according to whatever the license is. That's it. The maintainer has no obligations to you other than "it shouldn't be actively malicious".
In that context N-2 seems perfectly reasonable. If what they want to do is use some new feature of the language, it is their prerogative.
(It is also your prerogative to fork or not use said piece of software if you disagree with upstream for whatever reason.)
Also, from my day job I can say that keeping up is not that hard for a large commercial code base. I spend on average a couple of hours per month updating things.
Make sure integration tests have good coverage. You want/need this anyway to find bugs before they hit production and to reduce the need for expensive manual testing.
For example: We have a few thousand system tests that run on a few dozen configurations (context: control software for industrial vehicles, where a configuration is a specific model/SKU with specific optional features installed or not). Not all tests are valid for all configurations, so we end up with about 15k individual tests being run. Each test takes a minute to a few minutes. The whole test suite can be run in the cloud in about 2.5 hours (given how many nodes we currently parallelise on).
I expect most software doesn't need this level of testing: if you deploy a web app you don't need dozens of different SKUs, cutting down significantly on the combinatorial explosion. This makes the testing cheaper, easier, and quicker.
Set up Dependabot/Renovate or something similar to run weekly so you get notified early about breakages. Since this runs your integration tests, you can be fairly certain that you will find issues that affect your code.
I mostly approach this problem from the perspective of a maintainer. I maintain Tokio, which has a very strong MSRV policy. For people who can't keep up with our MSRV, we also have LTS releases that continue to receive backports for security issues.
Why do we do all that? Because it improves the experience of our users, particularly two groups:
New users who have just started using Rust. They probably use their distro rustc.
Big companies who are behind with their internal toolchain.
The simple reality is that choosing a higher MSRV hurts these users' experience of using Tokio.
Sure, it's the maintainer's choice to make. But it is a choice where you are making a tradeoff in favor of yourself over your users.
Compare it to writing documentation. You certainly don't have to write documentation for your library, and people can't expect it of you either. Some library authors will not enjoy it and consider it a chore. Yet, having good docs is still a best practice for libraries, and lots of libraries make the choice of writing good docs. How is it different?
It's worth thinking about the proportionality involved. How big is the inconvenience to you, compared to the inconvenience to your users? In some cases, the MSRV bump makes a big difference to you, so the bump makes sense. But I've also seen cases where people have bumped, or attempted to bump, an MSRV for completely trivial things.
As a concrete example of a triviality, I've had people ask to bump the Tokio MSRV so they could use let/else syntax in one place.
Internal changes only provide a benefit to me, so I don't mind delaying some of them. Public changes provide a benefit to users of the crate, so I'm comfortable pushing those out sooner. Users won't be happy if the MSRV is bumped for no apparent reason.