I have an application in which a main process forks several other processes. These processes communicate both with the main process and each other over pipes. I have tests for the messages involving the main process, but I'm having trouble figuring out how to use the Rust framework to test the code that depends on communication among the other processes. Is there a simple way to build these tests where I can leverage the Rust testing mechanisms?
Well, in general you could either be integration testing (checking everything works together), where ideally you want to be using automation APIs in the test, or a self-test mode in the application where the test is basically "does it return success"; or you're unit testing, where the problem is "what are the units?"
Assuming you mean unit tests, in my experience, the smallest useful units in interproc are nearly always something like
- establish a raw (e.g. byte stream) connection - you can normally just test connecting to yourself in a test, though OS permissions are a thing to think about
- wrapping this in an appropriate generic protocol: request/response, subscription, cross-proc resource identity management, error forwarding. This is very messy to separate out from the other parts if you didn't from the start, but really nice on your code once you do. Specifically, here you can normally swap out the underlying connection for something dumb like a memory stream, and then just test that the API calls got through it - assuming it's not a documented protocol (e.g. if you're crossing languages)
- the specific message serialization - generally you will be using the default serde derive and can trust that it will round-trip, but it might be something to look at.
- the actual application interface sending messages: this depends on your application, but generally you should be able to just implement a mock for your protocol (using a crate, hopefully) and treat it like any other API.
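The memory-stream idea above can be sketched roughly like this - the function name and the line-oriented framing are made up for illustration, standing in for a real protocol layer:

```rust
// Sketch of "swap the connection for a memory stream": the protocol layer is
// generic over any `Write`, so a unit test can hand it a `Vec<u8>` instead of
// a pipe. `send_request` and the framing are illustrative, not a real API.
use std::io::Write;

fn send_request<W: Write>(conn: &mut W, id: u32, body: &str) -> std::io::Result<()> {
    // Toy framing: "<id> <body>\n" - a real protocol would be length-prefixed etc.
    writeln!(conn, "{id} {body}")
}

fn main() -> std::io::Result<()> {
    // In production `conn` would wrap a pipe; in a test it's just a buffer,
    // so you can assert on exactly what went over the wire.
    let mut fake_conn: Vec<u8> = Vec::new();
    send_request(&mut fake_conn, 7, "ping")?;
    assert_eq!(fake_conn, b"7 ping\n");
    Ok(())
}
```

The same shape works for the read side by taking a generic `Read` and testing against a `std::io::Cursor`.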
Before any of this though, I would suggest making sure you know how to write async tests. Doing interproc without async is painful; testing it without async is so ugly you'll never be able to tell what you're testing.
Keep in mind, depending on your usage of interproc, doing all of this could be overkill: for a lot of cases "did it run" is fine and you'll never touch it again. But if you will, hopefully this is some help!
Thanks for the suggestions for unit tests, but I'm having a problem with integration tests, something I should have made clear in my original post.
I'm close to a solution. Instead of the main process starting a subprocess with
std::process::Command::new("target/debug/foo")
I can start it with
std::process::Command::new("cargo").args(["test", "--bin", "foo"])
The tests in the subprocesses run. The only problem is that the passed/failed report goes to stdout, which I'm using to communicate with the main process. Is there any way to direct that output to stderr without affecting the rest of stdout?
Hmm, that's not the usage I would expect. Let's say you have non-test code set up something like:

Cargo.toml:
    name = "my-app"
src/
    main.rs:
        ...
        Command::new("worker")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()?;
        // interproc code...
    bin/
        worker.rs:
            for line in std::io::stdin().lines() { ...; println!(...); }
Then I would expect the integration test to be something like:

tests/
    interproc.rs:
        // whatever args make sense for your host app, or an explicit --self-test if none do.
        let status = Command::new(env!("CARGO_BIN_EXE_my-app"))
            .arg("--config")
            .arg("test-interproc.config")
            .status()?;
        assert!(status.success(), "{}", status);
That is, you are testing both the host and worker process all together as they would be used in the real world, with the trade-off that it's hard to get good coverage of different conditions.
There's lower-level integration-ish tests, e.g.:

Cargo.toml
src/
    main.rs:
        fn main() { my_app::start() }
    lib.rs:
        pub fn start() { ... interproc::connect(...); ... }
    worker.rs
tests/
    interproc.rs:
        #[test]
        fn interproc() {
            interproc::connect(...);
        }
This wouldn't have problems with tests using stdout either, as the process tree looks like:
shell ->
    cargo test ->
        target/debug/interproc --some-test-harness-flags (prints test output) ->
            worker (piped stdio)
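One practical wrinkle if you go this route: the test binary has to find the worker binary on disk. A common trick - conventional rather than a guaranteed Cargo contract - is to derive it from the test executable's own path; "worker" here is an assumed binary name:

```rust
// Locate a sibling binary ("worker" is an assumed name) from inside a test:
// integration test binaries run from target/debug/deps/, and cargo places
// bin targets one directory up. Conventional layout, not a stable contract.
use std::path::PathBuf;

fn worker_path() -> PathBuf {
    let mut p = std::env::current_exe().expect("cannot get test binary path");
    p.pop(); // drop the test executable's file name -> target/debug/deps
    p.pop(); // -> target/debug
    p.push("worker");
    p
}

fn main() {
    println!("{}", worker_path().display());
}
```

You'd then pass that path to `Command::new` when spawning the worker with piped stdio.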
As an alternative, you could move your IPC to instead use a non-standard handle, but unfortunately this isn't supported out of the box, which is sad. There are a lot of IPC crates out there that might work for you, though. I built one after a bunch of pain and tears before tokio added native Windows named pipes, but it should now be pretty much identical to parity-tokio-ipc.
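For instance, on Unix targets the standard library's UnixListener/UnixStream already give you a channel that leaves stdin/stdout completely free for the test harness. A minimal sketch, where the socket path and handshake message are made up and a thread stands in for the worker process:

```rust
use std::io::{Read, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::path::Path;

// Host side: accept one connection and read a handshake message.
fn accept_handshake(path: &Path) -> std::io::Result<String> {
    let _ = std::fs::remove_file(path); // clear a stale socket from a prior run
    let listener = UnixListener::bind(path)?;
    let (mut conn, _) = listener.accept()?;
    let mut msg = String::new();
    conn.read_to_string(&mut msg)?;
    Ok(msg)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("my-app-demo.sock");

    // A thread stands in for the worker process in this sketch.
    let worker_side = path.clone();
    let worker = std::thread::spawn(move || {
        // Retry briefly until the listener is up, then send the handshake.
        loop {
            match UnixStream::connect(&worker_side) {
                Ok(mut s) => {
                    s.write_all(b"ready").unwrap();
                    break; // dropping `s` closes the stream, ending the read
                }
                Err(_) => std::thread::sleep(std::time::Duration::from_millis(10)),
            }
        }
    });

    let msg = accept_handshake(&path)?;
    worker.join().unwrap();
    std::fs::remove_file(&path)?;
    assert_eq!(msg, "ready");
    Ok(())
}
```

In a real setup the worker process would receive the socket path as an argument and connect with `UnixStream::connect`, leaving its stdout free for whatever the test harness wants to print.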
An interesting approach, but I don't see how it invokes the cargo test framework on the worker processes. I get that by making the command cargo test ... as shown above.
What I'd like to see on the console is
test result: ok, 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in xx.x seconds
for each process. I see that for the main process now, and each worker sends the corresponding line on its stdout, which appears as a message to the main process. I think I'll just have the main process print that out.
Right, what I was saying is that this seems like a weird setup and I'm not sure what you're trying to achieve. You want to run the worker as a subprocess that talks over stdin and stdout, but also as a test crate. This basically doesn't make sense, not just because the harness prints to stdout, but also because it captures each test's stdout, runs tests in parallel, and so on.
The default test harness has some options you can see with cargo test -- --help (note the extra --), and its API is documented - sort of - at test - Rust if you're linking against it manually, but honestly I don't think it's going to be very helpful for what it seems like you're trying to do.
In short: the default path for testing is either unit tests, where you isolate the unit (which definitely includes stdio), or integration tests, where the harness is outside the public interface that you're exposing to your users. If you stray from that happy path, the tooling stops being useful, and you need to start really being explicit about what you're doing for anyone to be able to help. It sounds like you need to build your own test harness, or change the program, though.
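For what it's worth, Cargo makes the "own harness" route fairly painless: setting harness = false on the test target in Cargo.toml ([[test]] name = "interproc", harness = false) means the test file's own fn main() runs directly, so it can report to stderr and keep stdout free for IPC. A minimal sketch, with made-up test names and a made-up summary format:

```rust
// Minimal hand-rolled harness sketch for a tests/interproc.rs built with
// `harness = false` in Cargo.toml. Test names and bodies are illustrative.
type TestFn = fn() -> Result<(), String>;

fn roundtrip() -> Result<(), String> {
    // ... spawn workers, exchange messages over pipes, check the replies ...
    Ok(())
}

// Run each test, reporting to stderr so stdout stays free for IPC.
// Returns the number of failures.
fn run_tests(tests: &[(&str, TestFn)]) -> usize {
    let mut failed = 0;
    for (name, test) in tests {
        match test() {
            Ok(()) => eprintln!("test {name} ... ok"),
            Err(e) => {
                failed += 1;
                eprintln!("test {name} ... FAILED: {e}");
            }
        }
    }
    eprintln!(
        "test result: {}. {} passed; {} failed",
        if failed == 0 { "ok" } else { "FAILED" },
        tests.len() - failed,
        failed
    );
    failed
}

fn main() {
    let failed = run_tests(&[("roundtrip", roundtrip as TestFn)]);
    // cargo test treats a non-zero exit status as a test failure.
    std::process::exit(if failed == 0 { 0 } else { 1 });
}
```

Since the harness is yours, you also control parallelism and stdio inheritance for the worker processes, which is exactly what the default harness was fighting you on.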
I think you're right that I will have to build my own test harness. I wanted to make sure there wasn't something already available before taking that on.