Where to run a non-time-related benchmark?


#1

Hi all :blush:

I’m thinking of adding a popular benchmarking technique for search algorithms (A*, Dijkstra, IDA*) to the pathfinder crate. This technique consists of counting how many nodes a particular algorithm opens. The fewer nodes an algorithm opens, the faster it tends to be in general.

For example, if you count the nodes opened in A* v/s Dijkstra (assuming your heuristic is admissible in A*), you’ll find that A* always opens fewer nodes. Similarly, when comparing different heuristics for A*, you’ll find that between two admissible heuristics h1 and h2 such that h1 <= h2, the opened_node_count for A* will always be less when using h2 than when using h1. And these differences tend to show in the execution time as well.

Not always, however, because of cache effects and heap v/s stack usage in different algorithms, but in general it’s a good guiding measurement.
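To make the idea concrete, here’s a minimal sketch of what such a counter could look like: Dijkstra on a tiny adjacency-list graph, returning both the path cost and the number of nodes actually expanded. The function name, graph representation, and graph itself are made up for this example; they aren’t part of pathfinder.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Dijkstra on an adjacency-list graph `adj`, where `adj[n]` holds
/// `(neighbor, edge_cost)` pairs. Returns the cost to `goal` (if reachable)
/// and how many nodes were opened (popped and expanded) along the way.
fn dijkstra_with_count(
    adj: &[Vec<(usize, u32)>],
    start: usize,
    goal: usize,
) -> (Option<u32>, usize) {
    let mut dist = vec![u32::MAX; adj.len()];
    let mut heap = BinaryHeap::new();
    let mut opened = 0;

    dist[start] = 0;
    heap.push(Reverse((0, start)));

    while let Some(Reverse((d, node))) = heap.pop() {
        if d > dist[node] {
            continue; // stale heap entry, not a real expansion
        }
        opened += 1; // count each node actually expanded
        if node == goal {
            return (Some(d), opened);
        }
        for &(next, w) in &adj[node] {
            let nd = d + w;
            if nd < dist[next] {
                dist[next] = nd;
                heap.push(Reverse((nd, next)));
            }
        }
    }
    (None, opened)
}

fn main() {
    // tiny line graph: 0 -> 1 -> 2 -> 3, all edges cost 1
    let adj = vec![vec![(1, 1)], vec![(2, 1)], vec![(3, 1)], vec![]];
    let (cost, opened) = dijkstra_with_count(&adj, 0, 3);
    println!("cost = {:?}, nodes opened = {}", cost, opened);
}
```

An A* variant would increment the same counter in its pop loop, so the two `opened` values are directly comparable on the same graph.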

Anyway…

I want to build such a thing, but I have no clue as to where I should put it. Is there a way to run a function only when calling cargo bench, but not measure timing at all? Instead, I’d like to show the results of the different tests for each algorithm. How should I do that?

Thanks y’all. Have a good Sunday! :sparkling_heart:


#2

Hi! :blush:

Hmm, I thought of making it a feature in your crate. Add in Cargo.toml:

[features]
open-nodes-bench = []

The name for the feature is arbitrary. You can choose your own!

Then, create a main.rs file:

#[cfg(feature = "open-nodes-bench")]
fn main() {
    let astar_nodes = ...;    // run A* here and collect its node count
    let dijkstra_nodes = ...; // same for Dijkstra

    println!("A*:\t{}", astar_nodes);
    println!("Dijkstra:\t{}", dijkstra_nodes);
}

And then run your crate with the feature enabled:

cargo run --features open-nodes-bench

Maybe it would be sufficient for your case?


#3

You could write it as a program in examples/, then cargo run --example foo, or in tests/ configured with harness = false if you want cargo test to run it.
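For the tests/ route, the relevant Cargo.toml entry would look something like this (the target name open_nodes is just an example, corresponding to a tests/open_nodes.rs file):

```toml
# Register a test target that skips the default libtest harness,
# so the file provides its own fn main.
[[test]]
name = "open_nodes"
harness = false
```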


#4

Oh, I’ve occasionally seen this. Can you please explain what this does?


#5

I think it would! Thank you :sparkling_heart:


#6

Ohh, that’s interesting. Does harness = false mean the tests aren’t required to return a bool? Or does it let you print to stdout without the output being captured? :slight_smile:


#7

That said, writing an example as @cuviper suggested may be a better idea! In this case, in addition to getting the information you need, users of your code get a practical, well, example :smile: of what your API looks like and how to use it.

Here you may find how to do it:


#8

The harness flag is in the config here: https://doc.rust-lang.org/cargo/reference/manifest.html#configuring-a-target

The gist is that instead of using #[test] or #[bench], you’d have a normal fn main() in the test file which does whatever you want. The test program will run along with all the other tests in your crate, whereas examples are only built, not run, unless you run them explicitly.
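So with harness = false set for the target, the test file might look like this (the file name and node counts are just stand-ins for real results from the algorithms under test):

```rust
// tests/open_nodes.rs -- runs under `cargo test` alongside the other tests,
// but with no #[test] attributes: just a plain main.
fn main() {
    // stand-in values; in the real file these would come from
    // running A* and Dijkstra and reading their node counters
    let astar_nodes = 42;
    let dijkstra_nodes = 137;

    // with no harness, this prints straight to stdout, uncaptured
    println!("A*:\t{}", astar_nodes);
    println!("Dijkstra:\t{}", dijkstra_nodes);

    // and since we own main, we can even fail the run on a regression
    assert!(
        astar_nodes <= dijkstra_nodes,
        "A* opened more nodes than Dijkstra"
    );
}
```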


#9

Ohh, that’s so nice :smiley:

I assume that when making your own harness, you can print without having the output captured by cargo test? Maybe that’s obvious, but I’m not sure what a test harness is so I have to ask :sweat_smile: :slightly_smiling_face:


#10

The harness is basically the libtest runtime that normally runs your tests and benchmarks, and yes, that’s normally what captures the output. Without the harness, you can print whatever you like in the open.


#11

Sweet! :sparkling_heart: