How can I group incompatible instrumented unit tests so that they don't clash?

I have some instrumented unit tests with conflicting instrumentation requirements: for example, one requires that a given remote device is connected, while another requires that no such device is connected, so that I can verify device-presence detection without false positives or false negatives. These are currently implemented as unit tests for convenience (access to private functions), but could be refactored into integration tests if that would help. From what I've read in Test Organization - The Rust Programming Language, though, there seems to be no easy way to run some tests and not others except #[ignore].

#[ignore] alone doesn't do the trick, because in reality there's a matrix of required instrumentation conditions, and that needs finer partitioning than the two-way split #[ignore] gives you (run the non-ignored set, or run everything via --include-ignored). Going back to my detection tests as a simple example: I put #[ignore] on all my instrumented tests, since the instrumentation state is undefined by default (the test environment may or may not have the remote device connected and shouldn't need to worry about it), so plain cargo test runs fine. However, as soon as I run with --include-ignored, I hit a conflict: one instrumented test needs the device connected to pass and another needs it disconnected. So far I've been working around this with cargo test test_name_pattern_match -- --include-ignored and naming the instrumented tests accordingly to keep the partitions separated, but that seems brittle, and it also filters out any unit tests that were ignored for reasons unrelated to whatever naming scheme I came up with for a given partition.

Is there a way to cleanly tell cargo test to run a defined subset of tests, ideally arbitrary subsets? Maybe a custom cfg, something like cargo testA / cargo testB with accordingly annotated tests? I'm not sure how configurable cargo test is under the hood, and the docs above don't go into much detail.
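One approach along those lines is Cargo features rather than a custom cfg: gate each incompatible partition behind its own feature and enable exactly one per run. A minimal sketch, assuming hypothetical feature names `hw-connected` and `hw-disconnected` (the detection function here is just a stand-in for the real probe):

```rust
// Cargo.toml would declare the (made-up) features:
//   [features]
//   hw-connected = []
//   hw-disconnected = []
//
// Then run one partition at a time:
//   cargo test --features hw-connected
//   cargo test --features hw-disconnected

// Shared detection logic used by both partitions (stand-in for the real probe).
fn device_present(probe_result: Option<u32>) -> bool {
    probe_result.is_some()
}

// Compiled only when `--features hw-connected` is passed.
#[cfg(all(test, feature = "hw-connected"))]
mod connected_tests {
    #[test]
    fn detects_attached_device() {
        assert!(super::device_present(Some(42)));
    }
}

// Compiled only when `--features hw-disconnected` is passed.
#[cfg(all(test, feature = "hw-disconnected"))]
mod disconnected_tests {
    #[test]
    fn detects_missing_device() {
        assert!(!super::device_present(None));
    }
}

fn main() {
    // Without either feature enabled, only the un-gated logic exists.
    assert!(device_present(Some(1)));
    assert!(!device_present(None));
}
```

Since features are additive flags on the compilation, conflicting partitions simply never coexist in one test binary; the cost is that each partition needs its own cargo invocation.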

In general, this is considered non-ideal. You should try to write your tests so that they are immune to any spooky action at a distance.

But if you still insist on keeping them the way they are, then AFAIK the only way to fix your problem, if you're using cargo's built-in test harness, is to run them on a single thread:

cargo test -- --test-threads=1

How would running sequentially help? I guess if the tests waited for input, it would give me a chance to swap out the hardware config...

As for improving the design, I can mostly eliminate the incompatibilities through hardware abstraction, e.g. by having Test A reach out to a networked machine at address A with the device attached, and Test B reach out to a different networked machine at address B that is missing the device of interest. That would keep the tests mostly isolated from indirect complexity, minus all the unknowns of network comms.
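The same idea can also be pushed one level further in-process: put the hardware behind a trait so each test supplies its own environment instead of depending on what is physically plugged in. A sketch under that assumption (`DeviceHost` and the mock types are made-up names, not from this thread):

```rust
// Abstract the hardware access point behind a trait.
trait DeviceHost {
    /// Returns true if the remote device answers a probe.
    fn probe(&self) -> bool;
}

/// Stands in for machine A, which has the device attached.
struct HostWithDevice;
impl DeviceHost for HostWithDevice {
    fn probe(&self) -> bool {
        true
    }
}

/// Stands in for machine B, which is missing the device.
struct HostWithoutDevice;
impl DeviceHost for HostWithoutDevice {
    fn probe(&self) -> bool {
        false
    }
}

/// The detection logic under test, written against the trait.
fn detect(host: &dyn DeviceHost) -> &'static str {
    if host.probe() {
        "connected"
    } else {
        "disconnected"
    }
}

fn main() {
    // Each test picks its own host, so there is no shared global
    // hardware state for two tests to clash over.
    assert_eq!(detect(&HostWithDevice), "connected");
    assert_eq!(detect(&HostWithoutDevice), "disconnected");
}
```

With mocks like these, the connected/disconnected cases can even run in the same cargo test invocation, and the real networked hosts are only needed for a thin end-to-end check.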

Anyway, I guess what I'm hoping to find is a concept like test suites that other languages offer; is there any way to tell cargo test to only run tests from a given module?
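For what it's worth, cargo's name filter matches against a test's full path, which includes its module, so grouping tests into modules already gives a crude form of suite selection. A minimal sketch (module and function names are made up):

```rust
// Stand-in for the real detection logic.
fn device_present(signal: Option<u8>) -> bool {
    signal.is_some()
}

#[cfg(test)]
mod hw {
    pub mod connected {
        // Full test path: hw::connected::detects_device
        #[test]
        #[ignore]
        fn detects_device() {
            assert!(crate::device_present(Some(1)));
        }
    }
    pub mod disconnected {
        // Full test path: hw::disconnected::detects_absence
        #[test]
        #[ignore]
        fn detects_absence() {
            assert!(!crate::device_present(None));
        }
    }
}
// Select one module's tests by filtering on the module path:
//   cargo test hw::connected:: -- --include-ignored
//   cargo test hw::disconnected:: -- --include-ignored

fn main() {
    assert!(device_present(Some(1)));
    assert!(!device_present(None));
}
```

This is still the same substring filter as before, but keyed on module paths rather than an ad-hoc naming scheme, which makes the partitions a bit less brittle.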

Oh, I didn't realize you were talking about hardware config dependencies :S. Then I would definitely go with the refactoring approach and abstract that away.


Ooh ooh, I haven't tested it yet (rimshot), but galvanic-test looks very promising for deep test organization! My situation really calls for better hardware abstraction, however.