Seeking advice for Rust Unit / Integration testing

Hi,

I'm seeking advice from the community regarding software testing with Rust.
Since the Rust book only demonstrates how to test pure functions, it's not very helpful beyond the beginner use case.

Project: libmicrovmi

I'm writing a dead simple library, libmicrovmi, which translates calls from a unified API to a specific driver.

The library initializes and returns a trait object Box<dyn Introspectable>, which is the base of the user-facing API.

Each driver in fact deals with a specific hypervisor:

  • Xen
  • KVM
  • VirtualBox
  • Hyper-V
  • bareflank
  • etc...

At this point, I'm facing issues testing my library: isolating each driver with unit tests, and testing the whole system with integration tests.

Unit tests

To explain the issues, I will explain how the Xen driver is initialized:

The driver initialization depends on dynamically loading a set of Xen-specific libraries, each one providing some important API we are dealing with.

For each of these libraries, we have written a -sys crate and a safe wrapper (linked above), which we use as dependencies in the libmicrovmi driver implementation.

Example for the Xen driver:

// open a xenstore instance
let xs = Xs::new(XsOpenFlags::ReadOnly)?;
// iterate over the directory to try to find our target VM
for domid_str in xs.directory(XBTransaction::Null, "/local/domain")? { /* ... */ }
// init xenctrl
let mut xc = XenControl::new(None, None, 0)?;
// enable event monitoring
let (_ring_page, back_ring, remote_port) = xc.monitor_enable(cand_domid)?;
// init the Xen event channel
let xev = XenEventChannel::new(cand_domid, remote_port)?;
// init Xen foreign memory
let xen_fgn = XenForeignMem::new()?;
// build the final struct
Ok(Xen { /* ... */ })

My question here is how the Rust community advises unit testing a module that depends on third-party code.
I'm coming from Python, where I'm used to the "Ports and Adapters" pattern:
I would have written an interface for the third-party dependency, and a specific adapter for tests that could fake this API and return specific values.

Should I attempt to implement the same pattern in Rust?
Should I go down the mocking road instead?
What's the best approach?
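
To make the question concrete, here is a minimal sketch of how I imagine "Ports and Adapters" could look in Rust. All names (XenStoreOps, FakeXenStore, XenDriver, find_domain) are hypothetical and for illustration only, not actual libmicrovmi types:

// The "port": the subset of the xenstore API the driver actually needs.
trait XenStoreOps {
    fn directory(&self, path: &str) -> Vec<String>;
}

// A fake adapter for unit tests, returning canned values.
struct FakeXenStore {
    domains: Vec<String>,
}

impl XenStoreOps for FakeXenStore {
    fn directory(&self, _path: &str) -> Vec<String> {
        self.domains.clone()
    }
}

// The driver is generic over the port, so tests can inject the fake.
struct XenDriver<S: XenStoreOps> {
    xs: S,
}

impl<S: XenStoreOps> XenDriver<S> {
    fn find_domain(&self, name: &str) -> Option<String> {
        self.xs
            .directory("/local/domain")
            .into_iter()
            .find(|d| d.as_str() == name)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn finds_target_domain() {
        let fake = FakeXenStore {
            domains: vec!["0".into(), "42".into()],
        };
        let drv = XenDriver { xs: fake };
        assert_eq!(drv.find_domain("42"), Some("42".to_string()));
    }
}

The production code would provide a second adapter wrapping the real Xs handle.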

Integration tests

Now going further with integration test, the goal is to test the public facing API, and the library as a whole.
However, the setup is particularly painful here, as it's not about starting a fake HTTP server or a PostgreSQL database, but about running on a given hypervisor, with a target virtual machine to introspect.

I was wondering what advice you could give me here.

How can I test the whole library on each iteration, without necessarily having to interact with the system itself, but maybe by reusing the "Fakes" that we would have built previously for the unit tests?

Or should I actually run my integration tests on the target hypervisor, because that's the goal of integration testing in the end?

Thanks in advance for your suggestions!


Tough case! From the description, it seems that the code itself is all about integration. I'd personally bite the bullet and write horribly ugly, horribly slow integration tests, which spawn real VMs etc.

You might make tests somewhat easier by adding mocks, adapters or the like, but I fear that it'll just give a false sense of security, without actually helping software correctness. For integration, it often happens that the bug is not in your library, but in the upstream, or in your understanding of the upstream.

Like, given the domain of the library, I imagine bug reports à la "I've upgraded Xen from 4.10 to 4.11, and I am now seeing panics in the foo method for some reason". This is not something a mock-based test can help with.


@matklad I agree with you: the integration tests should test the whole library with the actual hypervisor and VM, to reveal what might be broken.
So I get your point.

Coming back to unit tests, do you have any pointers for the problem I mentioned: isolating a module from a third-party dependency?

Without this unit test harness, I can't make safe iterations on my code.

Thanks !

What you could do for unit tests is something like this:

#[cfg(not(test))]
mod m {
    pub fn foo() { external_lib::foo() }
}

#[cfg(test)]
mod m {
    // stand-in for the real implementation, returning canned values
    fn mock_impl() { /* ... */ }

    pub fn foo() { mock_impl() }
}

#[cfg(test)]
mod tests {
    #[test]
    fn test_foo() {
        super::m::foo() // guaranteed to use the mocked impl
    }
}

But do you actually need this for unit tests? Given the context so far, it's hard for me to see which bugs would pass cargo check but be surfaced by the mocked cargo test --lib.

I guess, my own strategy would be:

  • set up loads of integration tests
  • try to keep the amount of code in the library as small as possible
  • rely on types to catch simple mistakes without running tests
  • think about a testing strategy for the consumers of my library. Not sure if that's feasible, but I'd try to add some simple backend which just works without a lot of setup, so that consumers can test against this backend and be relatively confident that the code works with the real thing. Sort of how you test on SQLite in Django, but use Postgres in production (a rough sketch follows below).
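
For illustration, here is what such an always-available dummy backend could look like (the Introspectable trait is simplified here and is not the real libmicrovmi definition):

// A hypothetical always-available backend: no hypervisor required,
// so consumers can run their test suites against it with zero setup.
trait Introspectable {
    fn pause(&mut self) -> Result<(), &'static str>;
    fn resume(&mut self) -> Result<(), &'static str>;
}

#[derive(Default)]
struct DummyDriver {
    paused: bool,
}

impl Introspectable for DummyDriver {
    fn pause(&mut self) -> Result<(), &'static str> {
        self.paused = true;
        Ok(())
    }

    fn resume(&mut self) -> Result<(), &'static str> {
        self.paused = false;
        Ok(())
    }
}

// Consumers would get the dummy backend through the normal init path.
fn init_dummy() -> Box<dyn Introspectable> {
    Box::new(DummyDriver::default())
}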

But do you actually need this for unit tests? Given the context so far, it's hard for me to see which bugs would pass cargo check but be surfaced by the mocked cargo test --lib.

I will need unit tests to make sure that I'm not introducing a breaking change when I modify a driver's implementation, or something in the layer above.

Running integration tests is too heavy: it requires that the hypervisor be present, with a VM.
You want to run them at the end of a development cycle, and keep a fast feedback loop while writing new code.

So I gave writing integration tests for KVM a try:

Since cargo test doesn't handle setup/teardown code or timeouts, I had to write a run_test() function to deal with them.

use std::sync::{mpsc, Once};
use std::thread;
use std::time::Duration;

static INIT: Once = Once::new();
const TIMEOUT: u64 = 10; // timeout in seconds, adjust as needed

fn run_test<T>(test: T) -> ()
where
    T: Send + 'static,
    T: FnOnce() -> (),
{
    // init env_logger if necessary
    INIT.call_once(|| {
        env_logger::builder().is_test(true).init();
    });
    // setup before test
    setup_test();

    // run the test in a separate thread
    let (done_tx, done_rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        let val = test();
        done_tx.send(()).expect("Unable to send completion signal");
        val
    });

    // wait for the test to complete, up to the timeout
    let timeout = Duration::from_secs(TIMEOUT);
    let res = done_rx.recv_timeout(timeout).map(|_| handle.join());
    // cleanup test
    teardown_test();
    // check results
    res.expect("Test timeout").expect("Test panicked");
}

#[cfg(feature = "kvm")]
mod tests {
    use super::*;
    use serial_test::serial;

    #[test]
    #[serial]
    fn test_init_driver() {
        run_test(|| {
            assert_eq!(xxx)
        })
    }
}

Now I'm wondering how I could:

  • share my tests across all my features
  • have custom setup/teardown functions per feature.

tests/common.rs

#[test]
#[serial]
fn test_pause() {
    run_test(|| {
        let mut drv = init_driver();
        drv.pause().unwrap();
    })
}

tests/kvm.rs

fn setup_kvm() {}
fn teardown_kvm() {}

#[cfg(feature = "kvm")]
mod tests {
    // tests/common.rs would be pulled in with a `mod common;` declaration
    use crate::common::*;

    // somehow modify run_test to specify custom setup and teardown functions ??
}

Is there a way to achieve that?
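
One direction I'm considering (an untested sketch of my own): a macro in tests/common.rs that stamps out the shared test bodies, with the feature-specific setup/teardown passed in as paths:

// untested sketch: generate the shared tests for a given backend
macro_rules! common_tests {
    ($setup:path, $teardown:path) => {
        #[test]
        fn test_pause() {
            $setup();
            // catch the panic so teardown still runs on failure
            let result = std::panic::catch_unwind(|| {
                let mut drv = init_driver();
                drv.pause().unwrap();
            });
            $teardown();
            assert!(result.is_ok());
        }
    };
}

// in tests/kvm.rs:
// common_tests!(setup_kvm, teardown_kvm);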

Thanks a lot !

Hey, not sure if this is exactly what you're asking, but looking into this recently I found this post, which uses the panic::catch_unwind function to allow for DRY setup and teardown. I was planning on using this for my project, as you can obviously write many versions which call different setup and teardown functions (per feature, in your case).

use std::panic;

#[test]
fn test_something_interesting() {
    run_test(|| {
        let true_or_false = do_the_test();

        assert!(true_or_false);
    })
}

fn run_test<T>(test: T) -> ()
where
    T: FnOnce() -> () + panic::UnwindSafe,
{
    setup();

    // run the test body, catching any panic so teardown still runs
    let result = panic::catch_unwind(|| {
        test()
    });

    teardown();

    assert!(result.is_ok())
}
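
A possible generalization (my own sketch, not from the linked post) is to pass the per-feature setup and teardown in as closures, which would map directly onto the per-feature functions above:

use std::panic;

fn run_test_with<S, D, T>(setup: S, teardown: D, test: T)
where
    S: FnOnce(),
    D: FnOnce(),
    T: FnOnce() + panic::UnwindSafe,
{
    setup();
    // teardown must run even if the test body panics
    let result = panic::catch_unwind(test);
    teardown();
    assert!(result.is_ok());
}

// e.g. in tests/kvm.rs:
// run_test_with(setup_kvm, teardown_kvm, || { /* shared test body */ });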

Credit:

See if your library can be exposed as a CLI, so that behaviours can be asserted on stdout/stderr/exit code/modifications to the current directory, with input specified as CLI arguments/env variables/stdin etc.

In that case you can try my tool fbt, and see if it helps.
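
For illustration, a thin hypothetical CLI wrapper over the library could look like this (microvmi-cli and init_driver are made-up names, not the real libmicrovmi API):

use std::process::exit;

fn main() {
    // input comes in as a CLI argument...
    let vm_name = std::env::args().nth(1).unwrap_or_else(|| {
        eprintln!("usage: microvmi-cli <vm_name>");
        exit(2);
    });
    // ...and behaviour is observable on stdout/stderr and the exit code
    match init_driver(&vm_name) {
        Ok(()) => println!("driver initialized for {}", vm_name),
        Err(e) => {
            eprintln!("error: {}", e);
            exit(1);
        }
    }
}

// stand-in for the library's real entry point
fn init_driver(_vm: &str) -> Result<(), String> {
    Ok(())
}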
