Is it possible to use next_chunk in a chain?

Consider the following code that populates a map with pairs of integers (first integer is the key, second integer is the value):

use std::collections::HashMap;

let mut values = [1, 2, 3, 4, 5].into_iter();
let mut map = HashMap::new();
while let Ok([first, second]) = values.next_chunk() {
    map.insert(first, second);
}

Is it possible to write this using a chain? For example, I would like to write something like:

let map: HashMap<_, _> = [1, 2, 3, 4, 5]
    .into_iter()
    .map(|[x, y]| (x, y))
    .collect();

This doesn't work, however, since next_chunk only returns the next chunk, not an iterator over chunks.

I think this might be the function that I am looking for: Add `Iterator::array_chunks` (take N+1) by WaffleLapkin · Pull Request #100026 · rust-lang/rust · GitHub

It seems to have been merged but not yet landed in a release.

next_chunk is also not stable.

In the meantime you could:

    use std::collections::HashMap;

    let hm: HashMap<_, _> = [1, 2, 3, 4, 5]
        .chunks_exact(2)
        .filter_map(|slice| match slice {
            [x, y] => Some((*x, *y)),
            _ => unreachable!(),
        })
        .collect();

If I'm not mistaken, that could be made a bit lighter-weight using a regular map.

Or maybe you could use filter_map() to achieve a visibly non-panicking implementation:

    let hm: HashMap<_, _> = [1, 2, 3, 4, 5]
        .chunks_exact(2)
        .filter_map(|slice| {
            let [a, b] = <[_; 2]>::try_from(slice).ok()?;
            Some((a, b))
        })
        .collect();

Remember to use the nightly docs for unstable things.

Good point, that does eliminate the chance of a panic branch should an optimization be missed.

Note that, if you're worried about the optimization being missed, the panic branch is arguably better than the ? branch -- the compiler knows that blocks that panic are #[cold], which LLVM uses to mark the branches leading to panics as unlikely, giving better branch prediction and instruction cache usage for the non-panicking case.

Not to mention that the map version will have an accurate size_hint, which the filter_map will not, so avoiding the panicking path might make your performance worse by causing extra Vec reallocations later.

(But, realistically, it's not going to fail to optimize this. The whole point of chunks_exact is that it makes it obvious to LLVM how long the chunks are, and thus simplifies bounds checks and other checks like this.)


Something useful to know is that anything of the form "call this to get the next element" can be converted into an iterator using std::iter::from_fn().

let mut items = [1, 2, 3, 4, 5].into_iter();
let iter = std::iter::from_fn(move || items.next_chunk::<2>().ok());