Past, present, and future for Rust testing


Moved to


Hey, thanks for writing all that — it is very informative! I totally agree that Rust testing needs an overhaul and I’m keen to see it on the 2018 roadmap. I have not, however, thought much about the specifics.

Some random thoughts:

  • procedural macros/plugins are a bad fit for testing. A test harness should be a tool which interacts with the compiler and Cargo somehow; it should not be a code transformation. We should think in terms of what the correct APIs in rustc and Cargo are for making custom test harnesses trivial to use.
  • #[test] is not flexible enough; we probably need more attributes if we’re going to do things like setup/teardown. Compiling down to just using #[test] seems nice in theory, but in practice I think it will make it hard to get good error messages, etc. It might be that we use tool attributes (e.g., #[rustfmt::skip]), or we might want to allow test as a prefix for test-specific attributes (while allowing the individual test tools to decide what to read).
  • this model does not support the division between test harness and test framework, afaict
  • bench should be removed, not stabilised. A benchmarking tool should exist and we might need to add something to the compiler to support that, but it is a mistake to have the whole thing baked in.
  • we should think about the long-term situation for #[test] and libtest - should it exist as a ‘basic’ test framework? Should we try and make it the default test tool and try and make it awesome? Should we deprecate it?
  • we should carefully design the testing API so doctests can Just Work without too much special casing. I fear having too much of a parallel system here.
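For concreteness, this is roughly what setup/teardown looks like when emulated inside a test body today with an RAII guard — the kind of boilerplate that a `#[test::setup]`/`#[test::teardown]` pair could lift into the harness. All of the names here (`TestEnv`, `sums_scratch`) are purely illustrative, not a real or proposed API:

```rust
// Setup/teardown emulated with RAII inside an ordinary test body.
// A harness-level attribute would factor this out of every test.
struct TestEnv {
    scratch: Vec<u32>,
}

impl TestEnv {
    // The "setup" step: build whatever state the test needs.
    fn new() -> TestEnv {
        TestEnv { scratch: vec![1, 2, 3] }
    }
}

impl Drop for TestEnv {
    // The "teardown" step: runs even if the test body panics.
    fn drop(&mut self) {
        self.scratch.clear();
    }
}

// The body that a #[test] (or a future setup/teardown-aware harness) would run.
fn sums_scratch() -> u32 {
    let env = TestEnv::new(); // setup
    env.scratch.iter().sum()
} // teardown happens here via Drop

fn main() {
    assert_eq!(sums_scratch(), 6);
}
```

The RAII pattern works today, but it has to be repeated in every test, which is part of the motivation for dedicated attributes.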

This post might be better off on internals.r-l.o


Ah, you’re right, internals.r-l.o would be a better fit. It doesn’t look like I can move it though? Can you/someone else?

The post wasn’t so much written as a concrete proposal, but more as an amalgamation of the various proposals and ideas that have been floating around, and as a starting point for discussion (specifically for the work week — @steveklabnik said a post like this might be useful). Most of the ideas outlined in that post are not mine :slight_smile:

I think the people who made some of the suggestions you refer to are better suited to answer them, though I do have thoughts on some of them:

  • I agree with you that procedural macros are probably not the right path forward for testing.
  • I do think that there is value in being able to provide tests using the #[test] annotation as today, but with a custom test runner, and especially with a custom formatter. I also like the idea of something along the lines of #[test::setup] and #[test::teardown]. However, contrary to your point (iiuc), I think the annotations (or at least most of them) should not be tied to a particular runner. Sure, some runner may support additional annotations, but I think we’d want to settle on a small set of common annotations that most runners are likely to want, to lower the friction of switching test runners.
    • Following on from this, to one of your later points, I don’t know what the right answer is to what should be shipped with Rust by default. I think it would be a mistake to not supply a default test runner, as it’s something that is super handy to just have working immediately, and I think it really helps encourage new Rustaceans to write tests. What exactly that default runner should look like is less clear though. I think ideally it should be maintained outside of rust-lang/rust, but be shipped with a default Rust install (maybe with rustup if possible) and include support for #[test] and setup/teardown, and then I think it could grow awesome on its own.
  • I’m not entirely sure what distinction you want to draw between test harness and test framework?
  • I am also in favor of removing bench rather than stabilizing it. However, I think at least @brson is opposed to this, and would prefer it be stabilized in its current form (see
  • Totally agree with the API ideally Just Working with doctests.
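To make the harness/framework question above concrete, here is one hypothetical way the split could look: the framework owns test collection and the annotations, and hands plain test cases to a runner/harness that only executes and reports. None of these types exist anywhere — they are purely a sketch for discussion:

```rust
// A test case as the framework might hand it to a runner:
// just a name and a plain function to execute.
struct TestCase {
    name: &'static str,
    run: fn() -> Result<(), String>,
}

// The runner/harness side: executes cases and reports results.
trait Runner {
    // Returns the number of failed tests.
    fn run_all(&self, tests: &[TestCase]) -> usize;
}

struct BasicRunner;

impl Runner for BasicRunner {
    fn run_all(&self, tests: &[TestCase]) -> usize {
        let mut failed = 0;
        for t in tests {
            match (t.run)() {
                Ok(()) => println!("test {} ... ok", t.name),
                Err(e) => {
                    println!("test {} ... FAILED: {}", t.name, e);
                    failed += 1;
                }
            }
        }
        failed
    }
}

fn main() {
    let tests = [
        // Non-capturing closures coerce to fn pointers.
        TestCase { name: "passes", run: || Ok(()) },
        TestCase { name: "fails", run: || Err("boom".to_string()) },
    ];
    let failed = BasicRunner.run_all(&tests);
    assert_eq!(failed, 1);
}
```

Under a split like this, swapping runners would not require changing how tests are declared — which is what keeping the common annotations runner-agnostic would buy us.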


I don’t think we can. You could put a new post there either now or when the discussion here has run its course.

It is, very!

I’m not sure either way, but my feeling is that at some point a test runner is going to want an annotation that someone else isn’t going to want (imagine if not every runner supports setup/teardown). There might be a way of cracking this nut; I’m not sure.

I think the landscape is shifting here: it will be much more common to add tools using rustup in the future, and the lines between what is shipped by default vs what is easily available will blur. While we should make it very low friction to run tests, I think if it is easy enough to install a test runner using rustup, then that would be OK. OTOH, it is much harder to remove something than to add it, so realistically we will probably continue to ship at least what we ship today.

I was replying to acrichto’s points inline. Again, I’m not sure about this, but it seems a consequence of letting the harness choose the attributes.

I don’t think we should throw it away or anything. I share the goal of being able to do benchmarking on stable via a ‘built-in feeling’ mechanism. I just think the way to do that is to stabilise an API and provide an external tool to use it.


I moved the post and cross-linked them. Will reply there.

closed #6