Executables for integration testing

If I want to create additional executables as part of a project, I add files to src/bin, one file for each executable to be built. This works well.
But say I want to create an executable only for integration testing: the integration tests ensure the executable is running before the integration tests start and terminated when they end. Having a file in src/bin seems wrong, since the executable is not a part of the project itself; it is part of testing the project. As far as I know, tests/bin isn't supported for creating additional executables used only for testing.
Does anyone have experience/advice on creating these executables needed to mock services as part of integration testing?

Generally, you'll put an integration test under the tests/ directory. The Test Organization chapter of The Book explains this in more detail.

If you want to make sure something is running in the background while a test is executed, one pattern you can use is a sort of "guard" type which starts the executable when it is created (e.g. with std::process::Command::spawn()), then stops the executable (child.kill()) when it is dropped. I've done this in the past when I was doing integration tests for something that talks to a Docker container.

The nuclear option is to create a normal executable under the tests/ directory and tell cargo not to automatically inject Rust's test harness. For example, if your integration test is at tests/foo.rs, you'd need to manually edit Cargo.toml with something like this:

[[test]]
name = "foo"
harness = false

OK, so having a file in tests/ with the test harness not injected is a way of creating an executable, but I suspect the name of the executable has a hash appended to it, rather than being just the name of the original file. Would this make it impossible to name the executable in a spawn? Also, can it be guaranteed that the executable to be spawned is compiled before any of the integration test executables are executed?

Instead of trying to run the executable from target/debug/deps/whatever-asdf1234, you could let cargo do the heavy lifting with something like cargo run --bin whatever. Asking cargo to run the binary will also make sure it's compiled before your integration test.


"Why didn't I think of that?" :slight_smile:

Seems to work a treat. Thanks for guiding me to success. Now I just have to find out about fixtures, as in Python's PyTest and Hypothesis, but in Rust.


Normally if I'm testing a parser I'll put an example file in tests/data/some_example.bin, then use include_bytes!() to include the file's contents in the test binary. Kinda like this.

If it needs to be structured data you could always store the fixture as JSON, then use serde_json to parse it into a Rust struct.

Sadly it seems that "cargo run --bin mock_avr850" doesn't work when mock_avr850.rs is in the tests directory. I have had to move the file into src/bin, which works but means the application is built as part of the normal build, not just for tests.

Putting mock_avr850.rs in the tests/ directory means it'll automatically be executed by cargo test.

It sounds like there's been a misunderstanding somewhere, so I'd recommend having a read through the Test Organisation chapter from The Rust Programming Language that I linked earlier. That explains the various ways you can do testing with cargo.


Following on from @Michael-F-Bryan's suggestion, you could try something like this:

Cargo.toml

[dev-dependencies]
scopeguard = "..."

[[test]]
name = "mock_avr850"
harness = false

tests/mock_avr850.rs

/// your binary logic
fn main() {
    // ...
}

tests/your_test.rs

#[test]
fn your_test() {
    let process = ::std::process::Command::new("cargo")
        .arg("test")
        .args(["--", "--exact", ""]) // run no tests, except for the one lacking the test harness
        .spawn()
        .expect("Failed to run the `./tests/mock_avr850.rs` test helper binary");
    ::scopeguard::defer!({
        let _ = { process }.kill(); // or `.wait()`, depending on your use case
    });

    /* your test logic here */
}

Do note that all this stuff with a non-harnessed test is quite hacky: using some kind of script to ensure a process is running in the background while the tests run seems like a more robust solution.


@Michael-F-Bryan Playing is really the only way of trying this out, as the book chapter doesn't deal with exactly this situation.

From what I can tell, it is not clear how to ensure mock_avr850.rs runs before all the other tests when it is in tests/ as per your proposal. Also, it is not clear how to end execution of the executable. I shall play a bit more though.

Thanks for your comments, they are very helpful in driving me along.

@Yandros The problem is that tests/mock_avr850.rs will be run anyway, as per @Michael-F-Bryan's comment. This use of Command would try to run it again, and it may already be running, depending on integration test execution order.
However, manually starting the mock allows for termination as in your code – though I am using a function that executes closures to handle the starting and stopping, so as to avoid duplication.
All this is so much simpler in Python using PyTest. :frowning:

There's a Cargo issue for this:

