Downsampling a cycled array

Hello all.

I'm trying to get a downsampled chunk from a cycled array, i.e. form a new array of length n by taking only every k-th element of the input, wrapping around when I reach the end. Right now I'm doing it this way:

fn downsample(arr: Vec<i32>, n: usize, k: usize) -> Vec<i32> {
    arr.iter().copied().cycle().step_by(k).take(n).collect()
}

It works, but as k increases this gets very slow. I tried it with n = 4800 and k = 100, 200, 300, 400, 500 and got the following results:

test sound::test::array_iter_100 ... bench:     283,439 ns/iter (+/- 32,522)
test sound::test::array_iter_200 ... bench:     560,651 ns/iter (+/- 117,210)
test sound::test::array_iter_300 ... bench:     841,591 ns/iter (+/- 115,144)
test sound::test::array_iter_400 ... bench:   1,123,007 ns/iter (+/- 188,592)
test sound::test::array_iter_500 ... bench:   1,377,115 ns/iter (+/- 289,280)

For my purposes it's too slow and, more importantly, too unpredictable. Is there a better method?


EDIT: it seems that simply switching from the iterator approach to direct imperative code did the trick:

fn downsample(arr: Vec<i32>, n: usize, k: usize) -> Vec<i32> {
    let mut pos = 0;
    let mut res = Vec::with_capacity(n);
    for _ in 0..n {
        res.push(arr[pos]);
        pos = (pos + k) % arr.len(); // wrap around the source array, like cycle() does
    }
    res
}

test sound::test::array_iter_100 ... bench:     156,988 ns/iter (+/- 5,676)
test sound::test::array_iter_200 ... bench:     157,745 ns/iter (+/- 6,255)
test sound::test::array_iter_300 ... bench:     156,960 ns/iter (+/- 3,378)
test sound::test::array_iter_400 ... bench:     157,331 ns/iter (+/- 3,098)
test sound::test::array_iter_500 ... bench:     157,004 ns/iter (+/- 2,356)

Anyway, if there's anything else I should know for this case, feel free to tell me.

Do you need to collect into a Vec if you are just going to iterate over these values? If you are only going to iterate over them, the lazy iterator approach should be faster.
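
Something like this would avoid allocating at all (just a sketch, borrowing the input as a slice instead of taking the Vec by value):

// Return a lazy iterator instead of collecting; the caller only pays for
// the elements it actually consumes.
fn downsample_iter<'a>(arr: &'a [i32], n: usize, k: usize) -> impl Iterator<Item = i32> + 'a {
    arr.iter().copied().cycle().step_by(k).take(n)
}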

No, I'm collecting (fixed the second example). In fact, this Vec will then be converted into a JavaScript TypedArray.

You could pre-allocate the Vec and then use the extend method to fill it up.

let mut vec: Vec<i32> = Vec::with_capacity(n);
vec.extend(arr.iter().cycle().step_by(k).take(n));

The performance degradation in your iterator version is probably caused by lots of reallocations.

Tried your version, here's the result:

test sound::test::bench::array_iter_100 ... bench:     370,952 ns/iter (+/- 36,865)
test sound::test::bench::array_iter_200 ... bench:     723,748 ns/iter (+/- 171,478)
test sound::test::bench::array_iter_300 ... bench:   1,116,621 ns/iter (+/- 148,899)
test sound::test::bench::array_iter_400 ... bench:   1,482,734 ns/iter (+/- 80,059)
test sound::test::bench::array_iter_500 ... bench:   1,864,610 ns/iter (+/- 154,278)

Even worse than collecting directly. And it seems logical to me, because the degradation scales not with the size of the result but with the step size (the result size is the same in every case). I think it has something to do with the fact that it has to walk through the array many times in a row?

Ah, I didn't notice that the result size was constant. It may just be that it fails to optimize well then, specifically the step_by.
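
As far as I can tell, Cycle doesn't provide an efficient nth, so step_by(k) falls back to calling next() k times for every element it yields, which makes the whole pipeline O(n * k). If you want to keep the iterator style while staying O(n), you could compute the wrapped index directly instead of stepping through the cycled iterator, something along these lines (a sketch, not benchmarked):

// Compute each wrapped index directly instead of stepping through the
// cycled iterator, so the cost stays O(n) regardless of k.
// (For very large n * k you'd want a running position to avoid overflow.)
fn downsample(arr: &[i32], n: usize, k: usize) -> Vec<i32> {
    let len = arr.len();
    (0..n).map(|i| arr[(i * k) % len]).collect()
}

Since the range reports an exact length, collect should allocate the result Vec once up front, so this should end up close to your imperative version.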