I’m thinking of adding a popular benchmarking technique for search algorithms (A*, Dijkstra, IDA*) to the pathfinder crate. The technique consists of counting how many nodes a particular algorithm opens: the fewer nodes an algorithm opens, the faster it tends to be in general.
For example, if you count the nodes opened by A* vs. Dijkstra (assuming your A* heuristic is admissible), you’ll find that A* always opens fewer nodes. Similarly, when comparing two admissible heuristics h1 and h2 such that h1 <= h2, the opened_node_count for A* will always be lower when using h2 than when using h1. These differences tend to show up in execution time as well.
Not always, however, because of cache effects and heap vs. stack usage in different algorithms, but in general it’s a good guiding measurement.
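To make the idea concrete, here’s roughly what I mean (a toy self-contained sketch, not the crate’s actual API — `search` and `opened` are just placeholder names). One function covers both algorithms, since Dijkstra is A* with a zero heuristic:

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashSet};

// Best-first search on an open w x h 4-connected grid, unit edge costs.
// Passing `heuristic = |_| 0` turns A* into Dijkstra.
// Returns (path_cost, opened_node_count) — the count is the benchmark metric.
fn search(
    w: i32,
    h: i32,
    start: (i32, i32),
    goal: (i32, i32),
    heuristic: impl Fn((i32, i32)) -> i32,
) -> (i32, usize) {
    let mut open = BinaryHeap::new(); // min-heap via Reverse, ordered by f = g + h
    let mut closed = HashSet::new();
    let mut opened = 0usize;
    open.push(Reverse((heuristic(start), 0, start)));
    while let Some(Reverse((_f, g, pos))) = open.pop() {
        if !closed.insert(pos) {
            continue; // stale duplicate; each node is "opened" at most once
        }
        opened += 1;
        if pos == goal {
            return (g, opened);
        }
        let (x, y) = pos;
        for (nx, ny) in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)] {
            if nx < 0 || ny < 0 || nx >= w || ny >= h || closed.contains(&(nx, ny)) {
                continue;
            }
            open.push(Reverse((g + 1 + heuristic((nx, ny)), g + 1, (nx, ny))));
        }
    }
    (i32::MAX, opened) // goal unreachable
}

fn main() {
    let goal = (9, 9);
    // Manhattan distance is admissible on a 4-connected grid.
    let (_, dijkstra) = search(10, 10, (0, 0), goal, |_| 0);
    let (_, astar) = search(10, 10, (0, 0), goal, |(x, y)| {
        (goal.0 - x).abs() + (goal.1 - y).abs()
    });
    println!("dijkstra opened {dijkstra} nodes, a* opened {astar} nodes");
    assert!(astar <= dijkstra);
}
```

The counter increments only when a node is actually expanded (popped and not already closed), which is the number I’d want to report per algorithm.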
I want to build such a thing, but I have no clue where it should go. Is there a way to run a function only when calling
cargo bench, without measuring timing at all? Instead, I’d like to show the node-count results for each algorithm. How should I do that?
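One option I’m aware of: Cargo lets a bench target opt out of the default timing harness with `harness = false`, in which case `cargo bench` just runs the file’s `main` and you can print whatever report you like. A minimal sketch (the `opened_nodes_astar` function is a hypothetical stand-in for an instrumented run):

```rust
// benches/node_count.rs
//
// In Cargo.toml, disable the default timing harness for this target:
//
//   [[bench]]
//   name = "node_count"
//   harness = false
//
// `cargo bench` will then simply run this `main` — no timing involved.

fn opened_nodes_astar() -> usize {
    // Placeholder: call your instrumented search here and return its counter.
    42
}

fn main() {
    println!("algorithm  opened_node_count");
    println!("A*         {}", opened_nodes_astar());
}
```

Maybe someone knows a more idiomatic home for this kind of non-timing report?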
Thanks y’all. Have a good Sunday!