I've just released the next New Rustacean episode, discussing testing and benchmarking (though as of the moment I post this, the episode is still waiting on Travis to publish). Along the way, I set up a trivial benchmark just to show how things work, and, well, the results are interesting to me.
#![feature(test)]
extern crate test;

#[cfg(test)]
mod tests {
    use std::thread::sleep;
    use std::time::Duration;

    use test::Bencher;

    #[bench]
    fn demonstrate_sleep_benchmark(b: &mut Bencher) {
        let d = Duration::new(0, 10); // 10 ns sleep
        b.iter(|| sleep(d));
    }
}
When I run this:
$ cargo bench
Compiling bencher_test v0.1.0 (file:///Users/chris/Desktop/bencher_test)
Running target/release/bencher_test-c7693b5e150eea17
running 1 test
test demonstrate_sleep_benchmark ... bench: 23,151 ns/iter (+/- 6,527)
test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
This is interesting: I would not expect there to be that much overhead involved in calling sleep! The sheer amount of variation surprises me, too. Calling Bencher::iter on a trivial function like add(a: i32, b: i32) shows the 0 ns (+/- 0) I would expect for something that small. Obviously there's going to be some overhead in hitting the system here; the thing that makes me curious is that it's a lot of time beyond the specified duration.
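(An aside on that 0 ns result: I suspect the optimizer may simply be folding the trivial add away, so the benchmark measures nothing. A sketch of guarding against that with an opacity hint; I'm using the stable std::hint::black_box here, where on the nightly used above one would reach for test::black_box from the test crate.)

```rust
use std::hint::black_box;
use std::time::Instant;

fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    const N: u32 = 10_000_000;
    let start = Instant::now();
    let mut acc: i32 = 0;
    for _ in 0..N {
        // black_box hides the inputs and the result from the optimizer,
        // so the addition can't be constant-folded or hoisted away.
        acc = acc.wrapping_add(black_box(add(black_box(1), black_box(2))));
    }
    let elapsed = start.elapsed();
    println!(
        "{} adds in {:?} ({} ns/iter)",
        N,
        elapsed,
        elapsed.as_nanos() / N as u128
    );
    // Sanity check: every iteration really did compute 1 + 2.
    assert_eq!(acc, (N as i32).wrapping_mul(3));
}
```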
What exactly is going on here?
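One way I tried to narrow it down: timing a single sleep call directly, outside the bench harness entirely. This is just a sketch using std::time::Instant; sleep is documented to block for *at least* the requested duration, so whatever it reports beyond the 10 ns is overhead from the syscall and the scheduler rather than from Bencher.

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

fn main() {
    let requested = Duration::new(0, 10); // the same 10 ns request
    let start = Instant::now();
    sleep(requested);
    let elapsed = start.elapsed();
    // sleep guarantees *at least* the requested duration; anything beyond
    // it is the cost of the syscall and being rescheduled by the OS.
    println!("requested {:?}, actually slept {:?}", requested, elapsed);
    assert!(elapsed >= requested);
}
```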