Hello all.
I'm trying to get a downsampled chunk from an array treated as cyclic, i.e. to form a new array of the same length n, taking only every k-th element. Right now I'm doing it this way:
    fn downsample(arr: Vec<i32>, n: usize, k: usize) -> Vec<i32> {
        arr.iter().cycle().step_by(k).take(n).collect()
    }
It works, but as k increases, it becomes very slow. I tried it with n = 4800 and k = 100, 200, 300, 400, 500 and got the following results:
test sound::test::array_iter_100 ... bench: 283,439 ns/iter (+/- 32,522)
test sound::test::array_iter_200 ... bench: 560,651 ns/iter (+/- 117,210)
test sound::test::array_iter_300 ... bench: 841,591 ns/iter (+/- 115,144)
test sound::test::array_iter_400 ... bench: 1,123,007 ns/iter (+/- 188,592)
test sound::test::array_iter_500 ... bench: 1,377,115 ns/iter (+/- 289,280)
For my purposes that's too slow and, more importantly, the timing is too unpredictable: it depends on k. Is there a better method?
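(For context: the linear growth in the benchmarks above is expected, because `Cycle` cannot jump ahead, so `step_by(k)` has to advance the underlying iterator one element at a time. A rough sketch of the equivalent work — a hypothetical expansion, not the actual standard-library implementation:)

```rust
// Sketch of what cycle().step_by(k).take(n) effectively costs when the
// inner iterator has no random access: O(n * k) calls to next().
// Assumes a non-empty slice and k >= 1.
fn downsample_naive(arr: &[i32], n: usize, k: usize) -> Vec<i32> {
    let mut it = arr.iter().cycle();
    let mut out = Vec::with_capacity(n);
    for _ in 0..n {
        // yield one element...
        out.push(*it.next().unwrap());
        // ...then skip the next k - 1 elements one at a time,
        // which is roughly what step_by does here
        for _ in 0..k - 1 {
            it.next();
        }
    }
    out
}
```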
EDIT: it seems that simply switching from the iterator approach to direct imperative code did the trick:
    fn downsample(arr: Vec<i32>, n: usize, k: usize) -> Vec<i32> {
        let len = arr.len();
        let mut pos = 0;
        let mut res = Vec::with_capacity(n);
        for _ in 0..n {
            res.push(arr[pos]);
            // wrap around the input array, like cycle() does
            // (note: modulo the array length, not n, so the two
            // versions agree even when n != arr.len())
            pos = (pos + k) % len;
        }
        res
    }
test sound::test::array_iter_100 ... bench: 156,988 ns/iter (+/- 5,676)
test sound::test::array_iter_200 ... bench: 157,745 ns/iter (+/- 6,255)
test sound::test::array_iter_300 ... bench: 156,960 ns/iter (+/- 3,378)
test sound::test::array_iter_400 ... bench: 157,331 ns/iter (+/- 3,098)
test sound::test::array_iter_500 ... bench: 157,004 ns/iter (+/- 2,356)
Anyway, if there's anything else I should know about this case, feel free to tell me.