Run unit tests in isolated (Docker) container environment

Hi community,

I have the idea/requirement to run unit tests in isolation within a Docker container.
At first I thought nothing could be easier than using https://testcontainers.com .

With testcontainers you can spawn containers from within your unit tests.
However, you can only test against the service ports of those containers; the framework is not meant to run the current test from within the container.

But that is what I imagine: defining a container within my unit test that gets spawned, and then running the rest of the test code (my assertions and test executions) from within that container.

Has anyone heard of an already working solution?

If not, can anyone tell me where to start looking if I want to implement this myself?
I assume there must be a way to define a hook inside my unit test where I can control which (remote) environment my test code runs in.

I would prefer a solution that works programmatically from within the unit test. I would not like to wrap the test execution in a shell script that sets up a container environment around the cargo test run.

I am excited to read your ideas.

Can you explain a little more about why you want to accomplish this?

A unit test should not require containerisation, because it should already be stateless, idempotent, not touch the outside environment, etc.

When automated tests require inherently stateful things, like database access or integration with a web service, the natural impulse is to spin up the database or web service in a container. If you want to put tests in a container to avoid touching the outside environment, you would still have to put such stateful things in a container as well.

To avoid just X-Y problem'ing you, my initial approach for this would be to place the entire module containing the tests into a container and define the ENTRYPOINT of the image to execute cargo test. But I have not attempted this personally.


Spawning a container from within your "unit test" to run the rest of the unit test inside the container sounds inherently at odds with the static nature of Rust. It also sounds like your test suite will take ages to run. I guess you'd have to create an image containing a copy of your project and a generated file that can be compiled into an executable with your assertions (the part of the "unit test" that does the testing) baked in. Then you'd have to create a container from that image that runs the compiled executable, and pipe stdout, stderr, and the exit status back to the "unit test" that spawned the container. Maybe I've misunderstood your specs and what you are trying to accomplish, or maybe I'm not aware of a more efficient way to do what you want, but to me this sounds unusable.

Why? Running your test suite in a container sounds a lot easier than creating an image and spawning a container from within your unit test.

Also X/Y problem: what do you need such a setup for?

@becquerel @jofas thanks for your statements. Maybe you're right and my approach could be inefficient.

However, let me try to explain what I want to accomplish: I have a (private) function which creates a git2::Repository. With the aid of other functions, it does this in the following steps:

  1. Create a valid file name from the Git URL
  2. Locate the OS-specific application data directory
  3. Locate the target directory as a combination of 2 and 1
  4. If there is already a Git repo at the target location, return Repository::open(target)
  5. If the target directory does not exist, return Repository::clone(url, target)
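The path-derivation part of those steps (1 through 3) can be sketched with the standard library alone. Note this is an illustrative sketch, not the actual implementation: `repo_dir_name` and `target_dir` are hypothetical names, and the application data directory is injected as a parameter rather than looked up from the OS (which would normally use something like the `dirs` crate):

```rust
use std::path::{Path, PathBuf};

/// Step 1 (sketch): derive a file-system-safe name from the Git URL
/// by replacing every non-alphanumeric character with '_'.
fn repo_dir_name(git_url: &str) -> String {
    git_url
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '_' })
        .collect()
}

/// Step 3 (sketch): the target is the application data directory
/// (step 2, here passed in as a parameter) joined with the derived name.
fn target_dir(app_data_dir: &Path, git_url: &str) -> PathBuf {
    app_data_dir.join(repo_dir_name(git_url))
}

fn main() {
    let target = target_dir(Path::new("/tmp/appdata"), "https://example.com/test.git");
    println!("{}", target.display());
    // prints "/tmp/appdata/https___example_com_test_git"
}
```

Passing the data directory in as a parameter is also what makes the function testable without touching the real OS-specific location.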

Within my unit test I will spawn a Docker container with a local Git repository at a file location outside the application dir. Then I will check if my function properly clones that repo to the application directory location and returns the Repository instance.

By doing this inside a preconfigured, minimal Docker container, I hoped to avoid writing setup/teardown functions and any side effects that may occur when always testing on the same machine.

Sounds like this crate could help.

Although to me it sounds like what you are doing is an integration test.

I'd use a temp dir and write all setup code inside the test. But I'd consider it an integration test and put it under tests/.


That's the crate I started with, as mentioned in my description. However, you can only spawn containers to test against their open ports, not run tests within them.

Ah sorry, my bad.

Does it make a difference if I run the tests against my (private) functions as integration tests?
Would that open an opportunity to run them in self-spawned containers?

From a technical standpoint, you can't integration test a private function. From a philosophical standpoint, testing the environment is not considered good practice in unit tests. That's what integration tests are for. To me it sounds like using a container rather than writing setup and teardown routines is a lot more work, without any tangible benefits. Setup and teardown is something common to do in integration tests and if your test suite isn't flaky, you normally don't leave a mess. What are the side effects you are expecting? When working with the filesystem, it is good practice to mock it in the sense that you set the target directory you want to work with to a local directory you created specifically during the setup phase of your test. This way you avoid any potential side effects and bad interactions with other parts of the system.


In case you change your mind: Internally, Rust has a remote-test-client and server that we use to run tests for some targets under QEMU, using a Cargo runner configuration. I expect you could do similar with the server running in a container, maybe even reusing those exact tools.
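For reference, a Cargo runner is configured per target triple in `.cargo/config.toml`. The fragment below is a sketch: `container-test-runner` is a hypothetical wrapper command that would copy the test binary into a container and execute it there (in the remote-test setup mentioned above, that role is played by remote-test-client talking to a remote-test-server):

```toml
# .cargo/config.toml (sketch; "container-test-runner" is a hypothetical
# wrapper that ships the compiled test binary into a container and runs it)
[target.x86_64-unknown-linux-gnu]
runner = "container-test-runner"
```

With such a runner in place, a plain `cargo test` would transparently execute every test binary through the wrapper.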


It sounds like containers would be overkill for what you want.

Normally, I would use tempfile::TempDir to create a temporary directory and use some sort of extension point to make the code use that directory instead of the global OS-specific application data directory.

The TempDir will take care of cleaning everything up afterwards (even if your test failed and triggered a panic) because it does a rm -rf in its Drop implementation.

The benefit of this approach over containers is that it'll be orders of magnitude faster and doesn't require extra system dependencies like Docker.
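The "extension point" idea can be sketched with the standard library only. Here `store_readme` is a hypothetical stand-in for the real repository logic; in an actual test, `tempfile::TempDir` would replace the manual `create_dir_all`/`remove_dir_all` pair, since its `Drop` impl performs the cleanup automatically:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Sketch: the function under test takes the data directory as a
/// parameter instead of resolving the global OS-specific one internally.
fn store_readme(data_dir: &Path) -> std::io::Result<PathBuf> {
    let file = data_dir.join("README.md");
    fs::write(&file, "test")?;
    Ok(file)
}

fn main() -> std::io::Result<()> {
    // tempfile::TempDir would create and later remove this directory in
    // its Drop impl; std::env::temp_dir() plus manual cleanup
    // approximates that here.
    let dir = std::env::temp_dir().join("procop-test-demo");
    fs::create_dir_all(&dir)?;
    let file = store_readme(&dir)?;
    assert!(file.exists());
    fs::remove_dir_all(&dir)?; // what TempDir's Drop does automatically
    println!("ok");
    Ok(())
}
```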


Thanks for all your thoughts and comments.

I see that what I am doing are not classical unit tests. I also can't run them as integration tests, because the function under test is a private core part of my library.

Mocking is in fact too difficult for me (personally). I just want to provide my test code with a given Git repository. Setting this up in the test code itself feels too artificial and out of place to me.

@cuviper Your idea sounds interesting. I will look deeper into running tests in a remote container in a separate project. Maybe I can connect it with testcontainers.

For anyone interested in my straightforward solution

Not what I originally wanted, but I came up with the following solution, which uses the #[ignore] attribute on tests and runs those tests in a separate stage. Like ...

#[test]
#[ignore]
/// This test is meant to be run with the './test-ignored.sh' script!
fn get_repository() {
    // Check that the target path does NOT exist
    assert!(!std::path::Path::new("/root/.local/share/procop/test/README.md").exists());

    let repo1 = update_repository("/test.git").expect("Expected not to fail.");

    // Test that we have cloned the repo to the new target
    assert!(std::path::Path::new("/root/.local/share/procop/test/README.md").exists());
    // ...
}

Those ignored tests are run separately with a helper script:

#!/usr/bin/env bash

if [[ $* == *--update-image* ]]
then
  docker build -t procop-test -f ./docker-test-image/Dockerfile .
else
  echo "Run with '--update-image' if you want to update the test Docker image."
fi

function test {
  echo -e "\n\nTest $1"
  docker run --rm -v "$HOME/.cargo/registry:/usr/local/cargo/registry" -v "$PWD:/usr/src/app" procop-test --lib -- --ignored "$1"
}

## Run each test in a separate container:

test git::tests::get_repository

Within the test Docker image I can provide all the test data I need in a natural fashion:

FROM alpine/git:latest as git

#
# Test Data 1: A Git Repository
#
RUN mkdir /test && \
    cd /test && \
    git config --global user.name "John Doe" && \
    git config --global user.email johndoe@example.com && \
    git init . && \
    echo "test" > README.md && \
    git add . && \
    git commit -m "test"

FROM rust:1.71.0

# use '/test.git' as target, to provide a valid Git URL
COPY --from=git /test /test.git

WORKDIR /usr/src/app
ENTRYPOINT [ "cargo", "test" ]

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.