I don’t think we can. You could put a new post there either now or when the discussion here has run its course.
It is, very!
I’m not sure either way, but my feeling is that at some point one test runner is going to want an annotation that another runner won’t support (imagine, for example, a runner with no notion of setup/teardown). There might be a way of cracking this nut; I’m not sure.
I think the landscape is shifting here: it will be much more common to add tools using rustup in the future, and the lines between what is shipped by default and what is easily available will blur. While we should make it very low friction to run tests, I think that if it is easy enough to install a test runner using rustup, then that would be OK. On the other hand, it is much harder to remove something than to add it, so realistically we will probably continue to ship at least what we ship today.
I was replying to acrichto’s points inline. Again, I’m not sure about this, but it seems to be a consequence of letting the harness choose the attributes.
I don’t think we should throw it away or anything. I share the goal of being able to do benchmarking on stable via a ‘built-in feeling’ mechanism. I just think the way to do that is to stabilise an API and provide an external tool to use it.
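To give a feel for what I mean, here is a minimal sketch of the kind of thing an external benchmarking tool could build on once a timing API is stable. All the names here (`bench_avg`, etc.) are invented for illustration; this is just stable-Rust `std::time` plumbing, not a proposal for the actual API surface:

```rust
use std::time::{Duration, Instant};

/// Run `f` for `iters` iterations and return the average wall-clock
/// time per iteration. A hypothetical helper, not a real API.
fn bench_avg<F: FnMut()>(iters: u32, mut f: F) -> Duration {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    // `Duration` supports division by `u32`, giving the per-iteration mean.
    start.elapsed() / iters
}

fn main() {
    let mut acc: u64 = 0;
    let avg = bench_avg(10_000, || acc = acc.wrapping_add(3));
    // Print the accumulator so the benchmarked work is observable
    // and less likely to be optimised away.
    println!("acc = {}, avg per iter = {:?}", acc, avg);
}
```

A real tool would layer warm-up runs, outlier rejection, and statistical reporting on top, but the point is that none of that needs to live in the compiler or the default test harness.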