Automation of test creation: macro or something else?

Hi,

I would like to create tests from features described in a file.
I wonder if I have to create a kind of macro that generates the code of the tests so that they run directly with the cargo test command, or if there already exists a way to generate tests when cargo test is used?

I could have a function that gets all the features and launches the tests one by one... But I want to be able to launch the tests the same way as when we launch them with the cargo test command.
As if the tests were written in a tests.rs file and we launched them the usual way.

Could you elaborate a bit on what you mean by "features", or give an example?

I don't know if that's what you're talking about, but in my unit tests, I often create a tests vector with tuples of stimuli / parameters / expected results, and I run them in a loop.

In some cases, I don't assert_eq! directly in the loop, but I count the errors and show them so I can have a bigger picture of the current status as I develop the code to make the test pass (using TDD methodology).
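For example, a counting variant of such a loop might look roughly like this (the values and the doubling check are made up for illustration):

#[test]
fn all_cases() {
    // (input, expected) pairs; the values here are only placeholders.
    let tests = vec![(1, 2), (2, 4), (3, 7)];
    let mut failures = 0;
    for (index, (input, expected)) in tests.into_iter().enumerate() {
        let result = input * 2;
        if result != expected {
            // Report every failing case instead of stopping at the first one.
            println!("case {index}: expected {expected}, got {result}");
            failures += 1;
        }
    }
    assert_eq!(failures, 0, "{failures} case(s) failed");
}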

When it's a visual test, I print the result in compilable format, check that it's correct, and inject that output in the test array as expected result so that it's checked next time and doesn't regress. It's usually simple to do and as close to "automated test creation" as I'm comfortable with.

Each of those test functions is usually crafted, since it's my next failing test to drive the development. I don't really see the point in creating them by a macro, but I'm not sure what you mean.

By default in Rust, each test is launched in its own thread.
I want to create files with some data (like the API request to send and the expected value I should receive).
Then, for each file, I want to launch the request. Some tests can panic, and since the panic message is useful, I want to keep the possibility for a test to panic with a specific message as soon as there is a problem, while the tests built from the other files, running in other threads, continue.

In my file, I also want to keep the possibility to indicate that a test should be ignored, etc.
So, instead of writing a loop that manages threads myself, I want to generate the code of each test beforehand and launch it the same way as if I had written it by hand, but with automation...

In the future, I could also use benchmarks, etc., and keep the summary given natively by Rust at the end of the tests...

Example:
test result: FAILED. 6 passed; 3 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.60s

My intent is to keep the test management as native as possible, but without writing the tests one by one...

Can you give an example as to what those files and data would look like?

Depending on their format, there might be one or several ways to go about it. Without knowing anything about the kind of "files" and/or "data" you expect your tests to be built from, though, I'm not sure how or if we can help you here, to begin with.

So instead of tests that look like the code below, you'd replace the let tests with something that either parses a file or is generated from a macro (with a file as macro parameter). Is that what you mean?

#[test]
fn test() {
    // Each entry is (test parameters..., expected result); the values here
    // are only placeholders for real cases.
    let tests = vec![
        (1, 2),
        (2, 4),
        (3, 6),
    ];
    for (index, (parameter1, expected_result)) in tests.into_iter().enumerate() {
        // Perform the test with the parameters.
        let result = parameter1 * 2;
        assert_eq!(result, expected_result, "case {index} failed");
    }
}

I'd say it depends on the formats you're dealing with and on how easy it is to write the data in a text file vs. in Rust code. Another possibility is to use declarative macros to write the data in the tests array (or whatever other collection you want to use) in a "human-readable" format; I find those are often useful to instantiate specific values in the code itself.
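For instance, a minimal sketch of such a data-building macro (the arrow syntax and the names are only an illustration):

macro_rules! cases {
    // Turn a readable "input => expected" list into the tests vector.
    ($($input:expr => $expected:expr),* $(,)?) => {
        vec![$(($input, $expected)),*]
    };
}

#[test]
fn doubling() {
    let tests = cases![
        1 => 2,
        2 => 4,
        3 => 6,
    ];
    for (index, (input, expected)) in tests.into_iter().enumerate() {
        assert_eq!(input * 2, expected, "case {index} failed");
    }
}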

The advantage of the code approach is that your input format is verified, that everything is in the same place, and that there is no extra file to create and take care of. It may also be quicker than reading that many files in parallel threads, depending on their size (I assume that's not a factor here). With separate files, on the other hand, you have to create a parser for each type of file.

The advantage of files is the possibility to use them with other tools (again, that depends on what those files and formats are), and possibly an easier-to-read format (though if you need to write a parser for that data anyway, I'd argue you can put the easy-to-read format in your test code).

EDIT: To answer the question below: you only have to launch the test from a function with the #[test] attribute. If your input is in a file, that function will have to call code that reads the file and performs whatever checks you need on the data parsed from it. It will be launched on every cargo test, so no problem there. If part of that code is a macro, declarative or procedural, it will be expanded each time you compile.
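As a rough sketch of that shape (the tests/scenarios directory and the file contents are assumptions here):

#[test]
fn scenarios_from_files() {
    // One #[test] that reads every scenario file and checks it in a loop.
    for entry in std::fs::read_dir("tests/scenarios").unwrap() {
        let path = entry.unwrap().path();
        let data = std::fs::read_to_string(&path).unwrap();
        // Parse `data` and run whatever checks the scenario describes here.
        assert!(!data.is_empty(), "empty scenario: {}", path.display());
    }
}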


This discusses the issue quite well. There's no good, standard answer here.

Everyone manages as best they can with the default test harness, because using anything else is fragile.

The most common solutions pile lots of tests into the same test function, which isn't as good an experience as you might be used to with other languages' test frameworks.


Thank you all. I will read that; it looks like what I wanted to explore.

For now I have a JSON file like this:

{
   "request" : "query...",
   "expect" : "expected value..."
}

But I'm thinking of extending this JSON file with something like:

{
   "request" : "query...",
   "expect" : "expected value...",
   "contains" : { "data": "$.data.values.id", "value": 8 },
   "equals" : { "data": "$.data.values.id", "value": "1", "eq": 4 },
   "api_to_test" : ["api1", "api2", ...],
   "ignore" : "true", // optional
   etc...
}

I will create some scenarios in the file and, according to the scenario, launch some tests. I'm thinking of using the json_test crate for some of this.
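As a sketch, such a scenario file could be deserialized with serde and serde_json (the field names come from the example above; the types, the optionality, and the load helper are assumptions, and the contains/equals objects are omitted):

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Scenario {
    request: String,
    expect: String,
    #[serde(default)]
    api_to_test: Vec<String>,
    ignore: Option<String>, // "true" in the example; a bool would also work
}

fn load_scenario(path: &std::path::Path) -> Scenario {
    let data = std::fs::read_to_string(path).unwrap();
    serde_json::from_str(&data).unwrap()
}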

For each file, I want to generate a test like this:

#[tokio::test]
#[ignore = "Ignored"]  // optional, but makes it visible that the test is ignored
async fn test_filename() {
   // Do stuff here according to the data we have in the scenario
}

After starting to read the documentation, I wonder if creating a file.rs in which we write the tests, and then launching them afterwards, could be more efficient? Or is there another solution...?
We would generate the file only if needed...

But in fact, in the end, the generated code will just be something like this:

#[tokio::test]
async fn test_file() {
    TestFile::from(file).unwrap().run().await;
}
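If the goal really is to generate a .rs file, one way is a build script. Here is a sketch, assuming the scenarios live in tests/scenarios/*.json, that run_scenario() is a hypothetical helper doing the actual parsing and checking, and that a test file pulls the generated code in with include!(concat!(env!("OUT_DIR"), "/generated_tests.rs")):

// build.rs (sketch)
use std::{env, fs, path::PathBuf};

fn main() {
    println!("cargo:rerun-if-changed=tests/scenarios");
    let dest = PathBuf::from(env::var("OUT_DIR").unwrap()).join("generated_tests.rs");
    let mut code = String::new();
    for entry in fs::read_dir("tests/scenarios").unwrap() {
        let path = entry.unwrap().path();
        if path.extension().and_then(|e| e.to_str()) != Some("json") {
            continue;
        }
        // Derive a test name from the file name and emit one async test per file.
        let name = path.file_stem().unwrap().to_string_lossy().replace('-', "_");
        code.push_str(&format!(
            "#[tokio::test]\nasync fn test_{name}() {{ run_scenario({:?}).await; }}\n",
            path.display().to_string(),
        ));
    }
    fs::write(dest, code).unwrap();
}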

I think the rstest crate provides what you need?
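For example, rstest can generate one test case per file matching a glob (a minimal sketch; the directory, the scenario contents, and the assertion are assumptions, and the rstest documentation covers combining this with async runtimes like tokio):

use std::path::PathBuf;
use rstest::rstest;

#[rstest]
fn scenario(#[files("tests/scenarios/*.json")] path: PathBuf) {
    // Each matching file becomes its own test case in the cargo test output.
    let data = std::fs::read_to_string(&path).unwrap();
    // Parse the scenario and run the checks it describes here.
    assert!(!data.is_empty(), "empty scenario file: {}", path.display());
}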


Thank you too. I read the documentation; it is very interesting.

Hi,

So, after reading a lot of documentation, I wanted to try to implement this kind of simple way to create a macro, and maybe make a benchmark just out of curiosity...

This is the example I found:

macro_rules! make_testcase_add {
    ($value:expr, $testname:ident) => {
        #[test]
        fn $testname() {
            let value = $value;
            assert_eq!(value + value, 2 * value);
        }
    }
}

make_testcase_add!(1, test_add_1);
make_testcase_add!(2, test_add_2);
make_testcase_add!(3, test_add_3);

To illustrate, I started with this simple code just as an example, because I will need to use async functions...

Code here :
(Rust Playground)

Then I tried to create and use my macro. But it seems that my call_ident value is not used? And the async keyword is not possible?
So I wonder how we can generate async tests?

Code here :

I also wonder if the function will be called automatically in the tests right after it is generated?

You're trying to pass run-time data, a proc_macro2::Ident, to a macro. That's not how macros work. Macro expansion happens at compile time, before any of your code is run. The make_test_feature macro is being given the identifier call_ident, not the identifier case1.

proc_macro2 is only useful in proc-macro code (which is run when the rest of your code is being parsed) and in other code generators that produce Rust source code. It is not useful inside of the crate where the macro should expand.

That error is a side effect of trying to define a test function inside another test function, which is not supported — note the other diagnostic you received:

warning: cannot test inner items

Because this doesn't work, #[tokio::test] also doesn't try to work in that context, so you got a weird error. In general, misused macros can produce weird errors because they expand to code that isn't valid in the context of the macro call.
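For what it's worth, the module-scope version of the earlier macro can also generate async tests, assuming tokio is available as a dev-dependency with its macros and rt features (a sketch with made-up names; the identifier has to be written literally at the call site, at compile time):

macro_rules! make_async_testcase {
    ($testname:ident, $value:expr) => {
        #[tokio::test]
        async fn $testname() {
            let value = $value;
            assert_eq!(value + value, 2 * value);
        }
    };
}

make_async_testcase!(case1, 1);
make_async_testcase!(case2, 2);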


Damn... Thank you for the explanation.