Best practices for managing test data


#1

I’m currently trying to work out how to best manage data for integration tests, and I’m wondering if it’s a “solved” problem, or one for which there is community consensus?

I have an application + library that processes audio files, and I would like to be able to run tests with example audio workloads, but I’ve struggled to find a neat solution. So far, I have tried and been unsatisfied with:

  • git-lfs to store test data in a data subdirectory
  • downloading data in a cargo build script

The former makes checking out the code slow, while the latter is brittle and does not guarantee any coupling between the data being present and the tests that require it.

Are there any better options?


#2

Not sure about better, but here are some ideas:

  • Generate the audio files
    Can this generate realistic data? Not sure :confused:

  • Store the test data in a separate repository, and download just the latest revision when cloning (maybe git submodules do this, I haven’t confirmed but they might)
    This is kind of a mix between the two options you’ve mentioned — download during cloning, but not all of the data, and has the added benefit of versioning the data with the code.
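For what it’s worth, the “generate the audio files” idea can be sketched in a few lines of plain Rust: synthesize a known waveform (here a sine tone as 16-bit PCM samples) and feed that to the tests. The function name and parameters are purely illustrative; whether a pure tone is realistic enough depends entirely on what the library is detecting.

```rust
/// Synthesize a sine tone as 16-bit PCM samples (illustrative helper).
fn sine_wave(freq_hz: f32, sample_rate: u32, seconds: f32) -> Vec<i16> {
    let n = (sample_rate as f32 * seconds) as usize;
    (0..n)
        .map(|i| {
            let t = i as f32 / sample_rate as f32;
            let s = (2.0 * std::f32::consts::PI * freq_hz * t).sin();
            // Scale [-1.0, 1.0] to the i16 range.
            (s * i16::MAX as f32) as i16
        })
        .collect()
}
```

The upside is that tests are fully offline and deterministic; the downside, as noted, is that synthetic data may not exercise the interesting code paths.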


#3

It depends on what sort of audio files the application expects and processes. If, for example, you are analyzing music that contains human singing and the application checks for features like that, generating data will be difficult. Audio files with speech will also be difficult.

If you want to test simpler things, you could use Markov chains and randomly generate some MIDI, perhaps. You can then use https://github.com/altsysrq/proptest to facilitate the testing and shrinking.
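A minimal sketch of the Markov-chain idea, assuming plain Rust with no external crates: a seeded linear congruential generator drives a random walk over MIDI note numbers, so the “random” input is reproducible across runs. The transition rule and names are invented for illustration, and wiring this into proptest as a custom strategy is left out.

```rust
/// Tiny seeded LCG (constants from Knuth's MMIX generator).
fn lcg_next(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state
}

/// Generate a reproducible pseudo-random sequence of MIDI note numbers.
fn generate_notes(seed: u64, len: usize) -> Vec<u8> {
    let mut state = seed;
    let mut note: i16 = 60; // start at middle C
    let mut notes = Vec::with_capacity(len);
    for _ in 0..len {
        // "Markov" step: the next note depends only on the current one,
        // moving by a small interval and clamped to the MIDI range.
        let step = (lcg_next(&mut state) % 5) as i16 - 2; // -2..=2 semitones
        note = (note + step).clamp(0, 127);
        notes.push(note as u8);
    }
    notes
}
```

Seeding makes failures reproducible, which is the same property proptest’s shrinking relies on.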


#4

I wouldn’t recommend this. It is better to keep tests as “offline” as possible. You don’t want a test failing due to a download failure.


#5

Thanks for the suggestions, all.

@azriel91 – unfortunately generating the audio files is out of the question for me, as my library is designed to search for/detect features in real-world audio data, the nature of which artificial tests would not be able to replicate.

Using a git submodule might be an option. As they are audio files, though, they tend to be quite large. I suppose that might not be so much of a problem if they don’t ever change, though!

@dylan.dpc – I completely agree that downloading in a cargo build script is suboptimal; that’s why I posed the question :wink: I also agree that a test failing due to a download failure is unacceptable. Ideally, I’m looking for a solution that allows me to encode something like the following vague pseudo-code:

if data_driven_tests_enabled:
    for each test in tests:
        let data = download_data()
        if data is failure:
            skip test
        otherwise:
            run_test(test, data)

Using cargo or another tool (e.g. git) to download data outside of the testing environment essentially lifts the download_data() call into a separate loop, rather than integrating the data acquisition into the testing environment.
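As a rough Rust sketch of that pseudo-code — with a hypothetical DATA_DRIVEN_TESTS environment-variable gate, and a stand-in download_data() that reads a local cache rather than the network:

```rust
// Stand-in for a real fetch: read from a local cache directory instead of
// the network. The path and error type are assumptions for illustration.
fn download_data(name: &str) -> Result<Vec<u8>, String> {
    std::fs::read(format!("tests/data/{name}")).map_err(|e| e.to_string())
}

// Hypothetical per-test body.
fn run_test(name: &str, data: &[u8]) {
    assert!(!data.is_empty(), "{name}: empty input");
}

// If data-driven tests are disabled, or acquiring the data fails, skip
// the test instead of failing the build.
fn run_data_driven_tests(tests: &[&str]) {
    if std::env::var("DATA_DRIVEN_TESTS").is_err() {
        eprintln!("data-driven tests disabled; skipping all");
        return;
    }
    for name in tests {
        match download_data(name) {
            Err(e) => eprintln!("skipping {name}: {e}"),
            Ok(data) => run_test(name, &data),
        }
    }
}
```

The limitation is that a “skipped” test just looks like a pass to cargo, since the stock harness has no first-class skip state.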

Maybe a better question would be: does Rust have any testing environments that support something like the pseudo-code above?


#6

Another piece of information that might be useful for others who find this thread in the future. There are plans afoot for Rust to support a greater range of test frameworks, which might somewhat solve the problem I pose here. There are more details in this blog post, and the linked RFC: http://blog.jrenner.net/rust/testing/2018/07/19/test-in-2018.html


#7

You can store some pre-generated audio files in the tests folder and use the same files on every build.


#8

Unfortunately, the files are really quite big (on the order of 100 MB for some), so I’m loath to include them in git. I’ve had a play with git-lfs, with some success, but cheap git-lfs hosts are not plentiful.

The files are also publicly available in a very script-friendly way, so I’d rather not (essentially) re-host them myself.
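Since the files are publicly hosted, one compromise is a fetch-and-cache helper inside the test harness itself: download each file once into a local cache directory and reuse it on subsequent runs, so only the first run touches the network. This sketch shells out to curl (an assumption about the environment; a real setup might use an HTTP crate instead), and the URL-to-filename mapping is deliberately naive.

```rust
use std::path::PathBuf;

/// Derive a stable cache filename from a URL (naive: last path segment).
fn cache_path(cache_dir: &str, url: &str) -> PathBuf {
    let name = url.rsplit('/').next().unwrap_or("data.bin");
    PathBuf::from(cache_dir).join(name)
}

/// Return a local path for `url`, downloading it only if not yet cached.
fn fetch_cached(cache_dir: &str, url: &str) -> std::io::Result<PathBuf> {
    let path = cache_path(cache_dir, url);
    if !path.exists() {
        std::fs::create_dir_all(cache_dir)?;
        // Shell out to curl; assumes it is installed on the test machine.
        let status = std::process::Command::new("curl")
            .args(["-fsSL", "-o"])
            .arg(&path)
            .arg(url)
            .status()?;
        if !status.success() {
            return Err(std::io::Error::new(
                std::io::ErrorKind::Other,
                "download failed",
            ));
        }
    }
    Ok(path)
}
```

A test could then skip itself when fetch_cached() returns Err, matching the pseudo-code earlier in the thread, while repeat runs stay fully offline.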