I really like the cargo test feature. I've read/heard people complain that it's too bare bones, but I think that's a big part of why it's so good -- it just differentiates between things that go "boom!" and things that don't, and that's basically it.
With that said -- I've run into a situation where my love of the bare-bones approach has come back to bite me.
I have some tests that need a test server to run. What I've done is make the tests check for the environment variable TEST_SERVER: if it is set, they run their test; otherwise they just exit cleanly. This works fine. However, at one point I thought I was running the tests when I had actually misspelled the environment variable name. This would have been easily caught if the tests had reported back as having been skipped.
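Concretely, the tests look roughly like this (the test name and body here are just illustrative):

```rust
use std::env;

#[test]
fn talks_to_test_server() {
    // If TEST_SERVER isn't set, bail out early. The test still counts as
    // "passed", which is exactly the problem: a typo in the variable name
    // silently skips everything.
    let Ok(addr) = env::var("TEST_SERVER") else {
        return;
    };

    // ... connect to `addr` and run the actual assertions ...
    let _ = addr;
}
```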
I read somewhere that there are alternative testing frameworks for Rust, sort of like how criterion can replace cargo bench. Does anyone have any experience with such projects? My needs are very modest -- the only thing I really need is to be able to report that a test has been skipped.
the most popular alternative to cargo test is cargo nextest, but I don't think it helps in this case. the problem is, Rust doesn't have a "test harness" API, so you cannot report the result from within a test itself at runtime. the test runner reports the result based on a simple rule: if a test panics (without #[should_panic]), it fails; otherwise, it succeeds. all filtering is done externally.
to be able to report the status at runtime, the testing infrastructure would need a big change. hypothetically:
```rust
// maybe add a parameter to test functions;
// functions without parameters fall back to the old behavior
#[test]
fn foo(ctx: &TestContext) {
    if some_runtime_condition() {
        ctx.skip();
    }
}

// or maybe like this:
#[test]
fn foo() {
    if some_runtime_condition() {
        get_current_test().skip();
    }
}
```
for now, maybe you can use a cargo xtask to check the environment variable and set a filter for the tests before running them. or, cargo nextest has "profiles", maybe you can run different profiles in different environments?
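the xtask could be something along these lines (just a sketch -- it assumes the server tests share a `server_` prefix in their names, which I made up here):

```rust
// xtask/src/main.rs -- rough sketch, invoked as `cargo xtask test`
use std::env;
use std::process::Command;

fn main() {
    let mut cmd = Command::new("cargo");
    cmd.arg("test");

    if env::var_os("TEST_SERVER").is_none() {
        // Make the skip loud, then tell libtest to skip the server tests.
        eprintln!("note: TEST_SERVER not set, skipping tests matching `server_`");
        cmd.args(["--", "--skip", "server_"]);
    }

    let status = cmd.status().expect("failed to run cargo test");
    std::process::exit(status.code().unwrap_or(1));
}
```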
Sometimes this can be worked around by allowing the tests to be skipped locally but making them fail in CI, on the assumption that CI is set up to provide all the necessary dependencies for testing.
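For example, something like this (a sketch -- the `CI` variable name depends on the CI system, though GitHub Actions for one sets `CI=true`):

```rust
use std::env;

#[test]
fn talks_to_test_server() {
    let Ok(addr) = env::var("TEST_SERVER") else {
        // In CI the server is supposed to be provided, so a missing
        // variable is a configuration error rather than a skip.
        if env::var_os("CI").is_some() {
            panic!("TEST_SERVER must be set in CI");
        }
        return;
    };

    // ... actual test against `addr` ...
    let _ = addr;
}
```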
Apparently it's possible to opt in to running ignored tests. So my thinking is that I'll simply mark all these tests with #[ignore], make an xtask, as you suggested, that checks for the environment variable (and possibly actually probes the test server), and if it's set, have it run cargo test -- --ignored.
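That is, something along these lines (the test name is just an example):

```rust
// Always marked ignored, so a plain `cargo test` reports it as "ignored"
// rather than quietly passing.
#[test]
#[ignore = "requires a test server (set TEST_SERVER)"]
fn talks_to_test_server() {
    let addr = std::env::var("TEST_SERVER").expect("TEST_SERVER must be set");
    // ... actual assertions against `addr` ...
    let _ = addr;
}
```

The xtask then runs cargo test -- --ignored only when the variable is set (and the server answers).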
It's a rather blunt solution, unfortunately -- but it has the most important feature I'm looking for (being able to easily spot whether the tests are being skipped/ignored).
All that being said, I do hope the test framework gains the ability to do this properly. The snippets you posted are exactly what I want.
aha, your solution gives me an idea too -- a variant of yours, actually. it also abuses the "ignore" attribute, but uses a build script instead of a cargo xtask: the idea is to add the "ignore" attribute conditionally at compile time, something like:
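roughly this (a sketch -- the cfg name `server_tests` is just something I made up):

```rust
// build.rs
fn main() {
    // Re-run the build script when the variable changes, so toggling it
    // takes effect without a manual `cargo clean`.
    println!("cargo:rerun-if-env-changed=TEST_SERVER");
    // Declare the custom cfg so recent toolchains don't warn about it
    // (needs a reasonably new Cargo; omit it if yours complains).
    println!("cargo::rustc-check-cfg=cfg(server_tests)");
    if std::env::var_os("TEST_SERVER").is_some() {
        println!("cargo:rustc-cfg=server_tests");
    }
}
```

```rust
// In the tests: ignored unless the build script saw TEST_SERVER.
#[test]
#[cfg_attr(not(server_tests), ignore = "TEST_SERVER not set at build time")]
fn talks_to_test_server() {
    // ... actual test ...
}
```

the caveat is that the variable is read when the test binary is compiled, not when it runs, hence the rerun-if-env-changed line.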
Oooh, this is good. Really good. And it also makes it possible to distinguish between different classes of tests (which was my worry with abusing "ignore").
I don't mind writing more "infrastructure" code -- the part that's important is ease of use when it comes to actually running the tests, and this adds basically zero complexity from that perspective.
Note that passing a test filter argument also overrides #[ignore]. So, if you put your tests in a particular module or with a particular word in their name, you can run all of them without running tests ignored for other reasons.
(I see you got another solution -- I'm mentioning this just to spread knowledge of how ignore works.)