Where to run a non-time-related benchmark?

Hi all :blush:

I'm thinking of adding a popular benchmarking technique for search algorithms (A*, Dijkstra, IDA*) to the pathfinder crate. The technique consists of counting how many nodes a particular algorithm opens. The fewer nodes an algorithm opens, the faster it tends to be in general.

For example, if you count the nodes opened by A* vs. Dijkstra (assuming your A* heuristic is admissible), you'll find that A* always opens fewer nodes. Similarly, when comparing different heuristics for A*, given two admissible heuristics h1 and h2 such that h1 <= h2, the opened-node count for A* with h2 will never exceed the count with h1. These differences tend to show up in execution time as well.

Not always, however, because of cache effects and heap vs. stack usage in different algorithms, but in general it's a good guiding measurement.
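To make that concrete, here's a rough sketch of the kind of counting I have in mind, using a plain Dijkstra where a counter is bumped every time a node is popped off the open list (this is just an illustration, not the crate's actual API):

use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Dijkstra over an adjacency list, returning the shortest distance to `goal`
// together with how many nodes were opened (popped from the open list).
fn dijkstra_counting(adj: &[Vec<(usize, u32)>], start: usize, goal: usize) -> (Option<u32>, usize) {
    let mut dist = vec![u32::MAX; adj.len()];
    let mut heap = BinaryHeap::new();
    let mut opened = 0;

    dist[start] = 0;
    heap.push(Reverse((0u32, start)));

    while let Some(Reverse((d, node))) = heap.pop() {
        opened += 1;
        if node == goal {
            return (Some(d), opened);
        }
        if d > dist[node] {
            continue; // stale queue entry
        }
        for &(next, w) in &adj[node] {
            let nd = d + w;
            if nd < dist[next] {
                dist[next] = nd;
                heap.push(Reverse((nd, next)));
            }
        }
    }
    (None, opened)
}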

Anyway...

I want to build such a thing, but I have no clue where I should put it. Is there a way to run a function only when calling cargo bench, without measuring timing at all? Instead, I'd like to show the node counts from the different tests for each algorithm. How should I do that?

Thanks y'all. Have a good Sunday! :sparkling_heart:


Hi! :blush:

Hmm, I'd make it a feature of your crate. Add this to Cargo.toml:

[features]
open-nodes-bench = []

The name for the feature is arbitrary. You can choose your own!

Then, create a main.rs file:

#[cfg(feature = "open-nodes-bench")]
fn main() {
    // Run each algorithm and count the nodes it opens (placeholders to fill in).
    let astar_nodes = ...;
    let dijkstra_nodes = ...;

    println!("A*:\t{}", astar_nodes);
    println!("Dijkstra:\t{}", dijkstra_nodes);
}

And then run your crate with the feature enabled:

cargo run --features open-nodes-bench

Maybe that would be sufficient for your case?


You could write it as a program in examples/ and run it with cargo run --example foo, or put it in tests/ configured with harness = false if you want cargo test to run it.
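For the examples/ route, a minimal sketch might be an examples/open_nodes.rs like the one below (the file name and the dummy counts are made up; substitute calls into your own crate), which you'd then run with cargo run --example open_nodes:

// examples/open_nodes.rs
fn main() {
    // Replace these placeholders with calls into your crate that count opened nodes.
    let astar_nodes = 42_usize;
    let dijkstra_nodes = 128_usize;

    println!("A*:       {} nodes opened", astar_nodes);
    println!("Dijkstra: {} nodes opened", dijkstra_nodes);
}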

Oh, I've occasionally seen this. Can you please explain what this does?

I think it would! Thank you :sparkling_heart:

Ohh, that's interesting. Does harness = false mean the tests don't have to return a bool? Or does it let you print to stdout without the output being captured? :slight_smile:

That said, writing an example as @cuviper suggested may be a better idea! That way, in addition to getting the information you need, users of your code get a practical, well, example :smile: of what your API looks like and how to use it.

Here you may find how to do it:


The harness flag is in the config here: The Manifest Format - The Cargo Book

The gist is that instead of using #[test] or #[bench], you'd have a normal fn main() in the test file which does whatever you want. The test program will run along with all other tests in your crate, whereas examples are only built, not run unless you run them explicitly.
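As a sketch, assuming a file named tests/open_nodes.rs (the name is arbitrary), the Cargo.toml entry and the test file could look like this:

[[test]]
name = "open_nodes"
harness = false

// tests/open_nodes.rs
// With harness = false there is no libtest runner, so this is an ordinary main()
// and nothing captures its output.
fn main() {
    // Replace the placeholder with a real count from your crate.
    let astar_nodes = 42_usize;
    println!("A* opened {} nodes", astar_nodes);
}

Then a plain cargo test will build and run it along with the rest of your tests.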


Ohh, that's so nice :smiley:

I assume that when making your own harness, you can print without having the output captured by cargo test? Maybe that's obvious, but I'm not sure what a test harness is so I have to ask :sweat_smile: :slightly_smiling_face:

The harness is basically the libtest runtime that normally runs your tests and benchmarks, and yes, that's normally what captures the output. Without the harness, you can print whatever you like in the open.


Sweet! :sparkling_heart: