How to benchmark with Criterion using random inputs without counting the time spent generating the random data?

Maybe it would be good to generate the random data in advance and then just slice it inside the benchmark? Something like the sketch below.
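
A minimal sketch of that pre-generation idea, assuming the `rand` crate and a hypothetical summing routine as the code under test:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use rand::Rng;

fn bench_pregenerated(c: &mut Criterion) {
    // Generate all the random data once, outside the timed region.
    let mut rng = rand::thread_rng();
    let data: Vec<u64> = (0..1024).map(|_| rng.gen()).collect();

    c.bench_function("sum_pregenerated", |b| {
        // Only the closure body is timed; the pre-built buffer is
        // reused (and merely sliced/read) on every iteration.
        b.iter(|| data.iter().sum::<u64>())
    });
}

criterion_group!(benches, bench_pregenerated);
criterion_main!(benches);
```

The drawback is that every iteration sees the same data, which may matter if the routine's cost depends on the input.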

If you want different random data for each individual iteration of the benchmark, then have a look at iter_batched: it takes a setup closure where you can generate new input for the routine closure to be benchmarked, and the time spent in the setup closure is not measured.
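
A minimal sketch of the iter_batched approach, again assuming the `rand` crate and a hypothetical `sum` routine as the code under test:

```rust
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};
use rand::Rng;

// Hypothetical routine under test.
fn sum(data: &[u64]) -> u64 {
    data.iter().sum()
}

fn bench_sum(c: &mut Criterion) {
    c.bench_function("sum_random", |b| {
        b.iter_batched(
            // Setup: generate fresh random input; this time is NOT measured.
            || {
                let mut rng = rand::thread_rng();
                (0..1024).map(|_| rng.gen::<u64>()).collect::<Vec<u64>>()
            },
            // Routine: only this closure is timed.
            |data| sum(&data),
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench_sum);
criterion_main!(benches);
```

BatchSize controls how many inputs are generated per batch; SmallInput is a reasonable default when the input is cheap to hold in memory.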
