Expect-test: minimalist snapshot testing library

That's a thing we are using in rust-analyzer and (very experimentally) rustc. Some folks might find this useful!
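For those who haven't seen it, here is a minimal sketch of the core API (the test name is made up for illustration): the expect! macro stores the expected value as a literal right inside the test, and running with UPDATE_EXPECT=1 rewrites that literal in place when the output changes.

```rust
use expect_test::expect;

#[test]
fn addition_snapshot() {
    let actual = (2 + 2).to_string();
    // On mismatch this fails with a diff; running
    // `UPDATE_EXPECT=1 cargo test` rewrites the literal below in place.
    expect!["4"].assert_eq(&actual);
}
```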


What is the use case for updating the test dynamically from what the code actually produces, rather than from what you are trying to manually assert? It seems to defeat the purpose of a unit test; that value should have been configured in test setup somewhere, to be shared across tests.

If we take a TDD approach, it removes certainty during the refactor stage of the red/green/refactor cycle.

(This comment is mostly about the update functionality of the crate; I like the expect macro)

See the linked article for general discussion about expect tests.

In my personal experience, data-driven integrated tests (where both input & output are just data) with snapshot-testing for output are significantly easier to maintain in large projects, and give you more confidence than traditional fine-grained unit tests with manual asserts.
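To make that concrete, here is a sketch of the data-driven style (the tokenize function and the test are invented for this example): a shared check helper reduces each case to an input plus an expectation over the Debug-formatted output, and UPDATE_EXPECT=1 fills in or fixes the literal.

```rust
use expect_test::{expect, Expect};

// Stand-in for the system under test; invented for this example.
fn tokenize(input: &str) -> Vec<&str> {
    input.split_whitespace().collect()
}

// A shared helper reduces each data-driven case to an input plus
// an expectation over the (Debug-formatted) output.
fn check(input: &str, expect: Expect) {
    expect.assert_debug_eq(&tokenize(input));
}

#[test]
fn splits_on_whitespace() {
    check(
        "fn main ()",
        expect![[r#"
            [
                "fn",
                "main",
                "()",
            ]
        "#]],
    );
}
```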


That is an interesting tool to complement a testing strategy, perhaps as a fast way to introduce system and integration tests early in a project, before such tests become part of the architecture.

Relying completely, or even mostly, on auto-generated tests would give a false sense of confidence, as no one would really be checking in the first place whether the code is doing what it is supposed to do.

All I can say is that significant parts of rust-analyzer and IntelliJ Rust are tested in similar ways, and that this strategy works better for me than the alternatives I have tried in that context. Naturally, your mileage may vary.

I'll just mention my latex_snippet package as a nice example of this use case. The package generates HTML, and when writing tests I can never manually determine the precise string that should be generated; nor does it really matter what the exact string is, since whitespace doesn't usually matter in HTML. So in practice, before adopting expect-test, I just manually adjusted the string until it matched the actual output, and also checked that it was correct. expect-test makes this process much easier, and it gives me usable diffs whenever there is a difference, which makes it easy to see whether it's a meaningless whitespace change or an actual error.

Here's an example of using expect-test:

https://github.com/droundy/latex_snippet/blob/master/src/tests.rs#L650

I'll point out that you don't need to generate the expected output from your code, but for more complicated output, typing it out manually is so painful and error-prone that I seldom do so.

The technically correct way to test my code would be to write (or find) an HTML parser, and then test that the LaTeX converts to HTML that parses to the correct page. But that would presumably be very hard, and it's much easier to test that the actual string output doesn't change, and then update the test whenever the output changes in a way that is still correct.
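Inline, that kind of test might look roughly like this (the latex_to_html function here is an invented stand-in for the real converter, which is in the link above):

```rust
use expect_test::expect;

// Hypothetical stand-in for the real LaTeX-to-HTML converter.
fn latex_to_html(latex: &str) -> String {
    let inner = latex.trim_start_matches(r"\emph{").trim_end_matches('}');
    format!("<p><em>{}</em></p>", inner)
}

#[test]
fn emph_becomes_em() {
    // A whitespace-only change in the output shows up clearly in the
    // diff, making it easy to judge whether the change is meaningful.
    expect![[r#"<p><em>hello</em></p>"#]].assert_eq(&latex_to_html(r"\emph{hello}"));
}
```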


Note that if you use the expect_file macro, you can store the .html in a separate file and view it in the browser.

We do this for rust-analyzer's syntax highlighting tests.
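A sketch of that pattern (the file path and the render_page function are hypothetical):

```rust
use expect_test::expect_file;

// Hypothetical producer of the page under test.
fn render_page() -> String {
    "<html><body><h1>hello</h1></body></html>".to_string()
}

#[test]
fn rendered_page_snapshot() {
    // The expected output lives in its own .html file next to the test,
    // so it can be opened directly in a browser. Running with
    // UPDATE_EXPECT=1 rewrites the file when the output changes.
    expect_file!["./test_data/rendered_page.html"].assert_eq(&render_page());
}
```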

I've worked with this kind of test in the past, and the key part is that you can inspect the diffs of the tests after you automatically update them to match the output of your program or function.

The first run serves to fix the assertions; subsequent changes can then be inspected as part of normal code review. If the tests change in strange ways, you catch it the same way you would catch a test failing unexpectedly. The advantage is simply that you get a helping hand when the expected output legitimately changes because your algorithm has changed in some way.

That's really clever! I'm not sure it would work for most of my tests, which are just fragments, but it's certainly worth keeping in mind. And I guess browsers probably display HTML fragments pretty well...
