How to harness test docs for custom test reports?

I would like to put certain identifiers in the tests' doc comments to trace tests to other things (e.g. `/// ID1 - positive - accepts a valid email`), and I would like the test report to show them next to the "ok". Is there a way to do this?

There is no way to alter the output shown by the test framework.

Normally, you'll put this sort of information in the test's name, so you might name that test `id1_positive_accepts_a_valid_email()`.

Thanks!

Is there a way to hack around this, e.g. via another test framework? Alternatively, is there a mechanism to parse the test cases and extract their doc comments, so that a post-processing step could map the test results to the docs?

It depends how far down the rabbit hole you want to go.

One thing you could do is create an attribute macro that accepts arguments in some arbitrary form (e.g. `#[label("id1 - positive - accepts a valid email")]`) and set things up so it saves something to an output directory (e.g. `$CARGO_TARGET_TMPDIR` for integration tests) that your post-processor can consume. That attribute would provide the nicest experience, but it is also the most complicated option.
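For reference, here is a rough sketch of what such an attribute could look like, assuming a separate proc-macro crate with `syn` (with the `full` feature), `quote`, and `proc-macro2` as dependencies. The `label` attribute and the `test-labels.tsv` file name are inventions for this example, not an existing crate:

```rust
// lib.rs of a hypothetical proc-macro crate (proc-macro = true in Cargo.toml).
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, ItemFn, LitStr};

/// Usage: `#[label("ID1 - positive - accepts a valid email")]` on a `#[test]` fn.
/// Rewrites the test body so the label is appended to a TSV file before the test runs.
#[proc_macro_attribute]
pub fn label(attr: TokenStream, item: TokenStream) -> TokenStream {
    let label = parse_macro_input!(attr as LitStr);
    let mut func = parse_macro_input!(item as ItemFn);
    let test_name = func.sig.ident.to_string();
    let body = func.block.clone();

    // New body: record "<test name>\t<label>" in an output file, then run the original body.
    func.block = Box::new(syn::parse_quote!({
        // CARGO_TARGET_TMPDIR is only set for integration tests and benches;
        // fall back to the system temp dir for unit tests.
        let dir = std::env::var("CARGO_TARGET_TMPDIR")
            .unwrap_or_else(|_| std::env::temp_dir().display().to_string());
        let line = format!("{}\t{}\n", #test_name, #label);
        let _ = std::fs::OpenOptions::new()
            .create(true)
            .append(true)
            .open(std::path::Path::new(&dir).join("test-labels.tsv"))
            .and_then(|mut f| std::io::Write::write_all(&mut f, line.as_bytes()));
        #body
    }));

    quote!(#func).into()
}
```

The `#[test]` attribute stays on the function as written, so a labelled test would look like `#[label("ID1 - ...")] #[test] fn ...`, and the post-processor can join `test-labels.tsv` against the test names in the test runner's output.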

A simpler approach is to make sure every test you want to correlate to docs calls a function at the end which does something your post-processor can use.
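A minimal sketch of that, again assuming the post-processor reads a `test-labels.tsv` file (the helper name and file layout are made up here):

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

/// Append "<test name>\t<label>" to a file the post-processor can read later.
fn record_label(test_name: &str, label: &str) {
    // CARGO_TARGET_TMPDIR is only set for integration tests; fall back to the temp dir.
    let dir = std::env::var("CARGO_TARGET_TMPDIR")
        .unwrap_or_else(|_| std::env::temp_dir().display().to_string());
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open(Path::new(&dir).join("test-labels.tsv"))
        .expect("could not open label file");
    writeln!(file, "{test_name}\t{label}").expect("could not record label");
}

#[test]
fn id1_positive_accepts_a_valid_email() {
    assert!("user@example.com".contains('@')); // stand-in for the real assertion
    record_label(
        "id1_positive_accepts_a_valid_email",
        "ID1 - positive - accepts a valid email",
    );
}
```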

I know rust-analyzer has this cool testing setup where they'll sprinkle comments through the parsing code, and a code generator will later come along and use the comments to generate test cases that are designed to hit that line. Matklad explained it in his "Explaining Rust Analyser" series on YouTube, but I can't remember which video it was.

Can you elaborate more on why you want to document which code is being tested? It might be possible to achieve a similar result in a different way; for example, code coverage tools can tell you which lines get executed during tests and how much code in each module is being tested.

I like the idea of the macro. It also lets me select which tests to include for this purpose, which is useful.

Can you elaborate more on why you want to document which code is being tested?

In GxP-validated software (e.g. software supporting submissions to the FDA), there is the concept of traceability, which is used to identify which functional requirements are covered by which tests (and how they cover them).

Assuming that:

  • the functional requirements live in a document and each one has a unique id
  • developers are the best people to trace tests to them

I am looking for a way for:

  • devs to write tests and point to the spec they are testing, using some convention
  • the test runner to pick up these descriptive elements and place them in the test report, so that it can be stored as evidence of what was tested (just like a test run report).

Another way of thinking about this: how can documentation written on the tests be surfaced in a test report, to provide more context about what each test is doing?

I will try the label macro. Thanks!

You might also be interested in the cov-mark crate. It's what rust-analyzer uses during testing to manually check whether particular lines of code are hit.

That might make it easier for your test runner to extract those "descriptive elements" instead of relying on a human saying "I'm pretty sure this test exercises X".
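For context, the basic `hit!`/`check!` pattern looks roughly like this (the function and mark names here are made up, and `cov-mark` needs to be added as a dependency):

```rust
// Production code: record that this particular branch was taken.
fn validate_email(input: &str) -> bool {
    if !input.contains('@') {
        cov_mark::hit!(rejected_missing_at_sign);
        return false;
    }
    true
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn id2_negative_rejects_email_without_at_sign() {
        // The guard created by check! fails the test at the end
        // if the corresponding hit! was never executed.
        cov_mark::check!(rejected_missing_at_sign);
        assert!(!validate_email("not-an-email"));
    }
}
```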

It should be possible to create a function which does `some_hashmap.insert("some_ident", cov_mark::check!(some_ident))` for every coverage mark in your codebase, and make your custom attribute call that function after the test completes so you have data for your validation software. The function could be kept up to date using a self-modifying test which scans your codebase for instances of `cov_mark::hit!(some_ident)`.

I swear I'm not a shill for rust-analyzer, they just do cool stuff :sweat_smile:
