Workflow for library development and testing

Hi everyone :wave:

I am new to Rust and it is my first low-level language (which I believe is where part of my issue stems from). I am currently working on a simple library that parses some JSON files into SQLite and then performs some data analysis on it.

My question is a very general one: how can I learn to write tests, and what should the general workflow look like when developing a library? Most of my work happens in lib.rs and right now I am using main.rs to call some functions on example files, but I basically get by using .unwrap() and not writing any tests (which are both bad practice to my understanding).
I read the book section about TDD, but I find it hard to apply practically. Are there any resources out there where I can learn how to adopt a better workflow for developing a library? Are there some general tips or guidelines I should follow?

Well, testing in Rust is not very different from any other language. You write unit tests and mark them as such with the #[test] attribute. You typically put integration tests in the tests folder. You run them with cargo test.
The task you have at hand is actually perfectly testable: the output for a given input JSON file is deterministic, so all you have to do is put the inputs in raw string literals, pass them through your functions, and assert on the output.
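
For example, a minimal sketch of how that could look (count_records and the use of serde_json are just assumptions for illustration; adapt it to whatever your parsing code actually does):

```rust
use serde_json::Value;

/// Hypothetical helper: counts the entries in a top-level JSON array.
pub fn count_records(input: &str) -> Result<usize, serde_json::Error> {
    let value: Value = serde_json::from_str(input)?;
    Ok(value.as_array().map(|a| a.len()).unwrap_or(0))
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn counts_entries_in_a_small_file() {
        // The input lives right next to the test as a raw string literal.
        let input = r#"[{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]"#;
        assert_eq!(count_records(input).unwrap(), 2);
    }

    #[test]
    fn rejects_invalid_json() {
        // Broken input should surface as an error, not a panic.
        assert!(count_records("not json").is_err());
    }
}
```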

So what is your actual problem:

  • Are you having a hard time coming up with tests, in general?
  • Or are you having a hard time utilizing Rust's test framework since you are used to some other language? (it might help to mention which language you are comfortable in)
  • Or is it something else that I completely missed?

Thanks for the response.

Are you having a hard time coming up with tests, in general?

Yes, this is definitely the point. So far I have mostly written one-off data analysis scripts in R or Julia, so testing is still somewhat alien to me. I usually only did some basic correctness checks in the REPL, for example.

I was wondering if there are any good general (language-agnostic) resources to learn more about testing and its role in a compiled language. Ideally, there would be some resources where this is explained using Rust, so I can pick up more knowledge about the Rust testing framework along the way and apply the concepts immediately.

Tbh, coming up with good tests is somewhat of an art that is acquired over time. I'd suggest not getting too stressed in the beginning: come up with simple input cases, manually work out the expected output, and write a test that checks the two match.
The main guideline for testing involves something known as coverage, which is basically a report of all the "paths" of your code visited by your tests. A coverage report will tell you which lines of your code have been, so to speak, exercised by your tests, and one aim of writing good tests is to improve coverage. On recent versions of Rust, you can get very accurate and fast coverage results with the built-in LLVM coverage instrumentation - read more here. The page I linked also tells you how to view your results. You can also use something like Codecov together with a CI runner such as GitHub Actions or GitLab CI (both are free for public projects) to get an automatic coverage report after every commit.
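
As a toy illustration of what "paths" means (classify and the test names below are made up for the example):

```rust
// A made-up example: `classify` has two branches, so a single test would
// only ever execute one of them, and a coverage report would flag the other
// line as never run.
pub fn classify(count: usize) -> &'static str {
    if count == 0 {
        "empty"
    } else {
        "non-empty"
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // Covers the `count == 0` branch.
    #[test]
    fn zero_is_empty() {
        assert_eq!(classify(0), "empty");
    }

    // Covers the `else` branch; delete this test and the coverage report
    // will show the "non-empty" line as unvisited.
    #[test]
    fn three_is_non_empty() {
        assert_eq!(classify(3), "non-empty");
    }
}
```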


This isn't directly related to your initial question, but I thought I'd address your "which are both bad practice to my understanding" comment. Once you understand the mentality that causes something to be labeled a "best practice" or "bad practice", it'll give you the tools to know when you need to write a test and when it's unnecessary.

The main reason people frown on unwrap() in production code is that it's like saying

I've got the output from some fallible operation (Result<T, Error>), but just give me the result (T) and crash the program if the operation failed.

For most one-off data analysis scripts this approach is perfectly valid - you have complete control over the inputs and are watching the code run so you can fix bugs immediately and re-run it, and the priority is to get results quickly instead of handling errors. On the other hand, it would be kinda bad if a web server or library used in production were to crash the moment it encountered some input it didn't like.
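
In code, that contrast looks roughly like this (a sketch; the file path and the read_count functions are made up):

```rust
use std::fs;

// Made-up example: the same fallible read, written two ways.

// Panics if the file is missing or its contents aren't a number: fine for a
// throwaway script you're watching run, not great inside a library.
fn read_count_or_crash() -> u64 {
    let text = fs::read_to_string("data/count.txt").unwrap();
    text.trim().parse().unwrap()
}

// Hands the failure back to the caller instead, so *they* decide whether to
// retry, print a nice error message, or give up.
fn read_count() -> Result<u64, Box<dyn std::error::Error>> {
    let text = fs::read_to_string("data/count.txt")?;
    Ok(text.trim().parse()?)
}
```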

Similarly, you don't normally need to write tests for data analysis code because the fact that you watched it run to completion without crashing is usually a good indicator that your code doesn't crash. You also tend to visualise the results and a human will be able to look at the graph/numbers/whatever and use their domain knowledge to say "hmm, that doesn't look right". Most people would refer to this as "manual testing".

However, for something like a web server it often isn't practical to manually run the code on known inputs and check that the result makes sense (there may be no visualisation for you to check, the code might have side-effects like writing to a database, there might be lots of different cases to check, etc.), so you'll write pieces of code to do the checking for you and then re-run these tests every time you change some code.

When writing tests, you need to balance thoroughness with expediency. I could spend 8 hours writing enough tests that my 1000 line codebase has 100% test coverage and be 99.9% confident it is bug-free, or I could spend 30 minutes testing the happy path and a couple of the most common edge cases and be 95% confident[1]. If this code is running a life support system then I'd hope you have 100% test coverage, but you can probably afford to be less thorough if you don't have much time and know the code isn't very important or will be rewritten in a month.


  1. I could also spend 0 minutes writing 0 tests and have no idea if my code works, but that's probably not useful considering you want to learn more about testing. ↩

For most one-off data analysis scripts this approach is perfectly valid - you have complete control over the inputs and are watching the code run so you can fix bugs immediately and re-run it, and the priority is to get results quickly instead of handling errors. On the other hand, it would be kinda bad if a web server or library used in production were to crash the moment it encountered some input it didn't like.

Similarly, you don't normally need to write tests for data analysis code because the fact that you watched it run to completion without crashing is usually a good indicator that your code doesn't crash. You also tend to visualise the results and a human will be able to look at the graph/numbers/whatever and use their domain knowledge to say "hmm, that doesn't look right". Most people would refer to this as "manual testing".

This put things into perspective very nicely, and I agree that I do not see any downsides to using unwrap() in those cases. I started to worry about testing because I want other people to be able to use my library for their own analyses, so the functions should be correct and prepared for bad user input.

However, for something like a web server it often isn't practical to manually run the code on known inputs and check that the result makes sense (there may be no visualisation for you to check, the code might have side-effects like writing to a database, there might be lots of different cases to check, etc.), so you'll write pieces of code to do the checking for you and then re-run these tests every time you change some code.

When writing tests, you need to balance thoroughness with expediency. I could spend 8 hours writing enough tests that my 1000 line codebase has 100% test coverage and be 99.9% confident it is bug-free, or I could spend 30 minutes testing the happy path and a couple of the most common edge cases and be 95% confident. If this code is running a life support system then I'd hope you have 100% test coverage, but you can probably afford to be less thorough if you don't have much time and know the code isn't very important or will be rewritten in a month.

This makes sense, too! Maybe one more stupid follow-up question: when writing a library, do people usually not rely on main at all during development? Maybe they just use it at the end to check whether the public functions perform as expected on some real input? Coming from a non-CS background this seems like magic to me: developing code without ever running 'the whole thing' (but I can see how it may work now :grimacing: )

Libraries don't have a main() function because they are often part of a larger application and not meant to be executed directly from the command-line.

However, that's not to say you never run the whole thing! After all, the entire point of writing an integration test is to run your library's code from top to bottom.

It's just that instead of calling the function main() and running it with cargo run, you'll call it something like check_population_statistics_against_2022_census_data(), slap a #[test] attribute on top of it, and save the code under your tests/ folder so it gets picked up by cargo test.
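
For instance, an integration test file could look something like this (the crate name my_analysis, the load_json function, the fixture path, and the expected count are all made up; substitute whatever public API your library actually exposes):

```rust
// tests/population.rs -- crate, function, and file names here are hypothetical.
use my_analysis::load_json;

#[test]
fn check_population_statistics_against_2022_census_data() {
    // A small, known input file checked into the repository next to the tests.
    let input = include_str!("fixtures/census_2022_sample.json");

    let records = load_json(input).expect("sample file should parse");

    // Assert on a few facts you've worked out by hand.
    assert_eq!(records.len(), 42);
}
```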

You might also want to check out the Test Organisation chapter from The Book, if you haven't already.

