Performance difference between iterator and for loop

Is there any performance difference between

let new_vec = vec.iter().map(my_func).collect::<Vec<_>>();

and

let mut new_vec = vec![];
for el in vec {
    new_vec.push(my_func(el));
}


First note that for el in vec will call .into_iter(), not .iter(), so these two pieces of code do not do the same thing if vec is a Vec<_>.
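As a rough sketch (not the exact expansion the compiler produces), the for loop desugars to something like this, which shows why the Vec is consumed:

```rust
fn main() {
    let vec = vec![1, 2, 3];
    // `for el in vec { ... }` desugars roughly to:
    let mut it = IntoIterator::into_iter(vec); // consumes `vec`
    while let Some(el) = it.next() {
        // `el` is an owned i32 here, not a reference
        let _ = el;
    }
    // `vec` was moved into the iterator and can no longer be used here.
}
```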

That said, collect::<Vec<_>>() will use the size_hint of the iterator to preallocate the buffer for the Vec. If vec is a vector (or a slice, or a range, or any other type that has a precise size_hint), the iterator version may be faster because it does not have to reallocate several times.
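A quick way to see this (a minimal sketch): a Vec's iterator reports an exact size_hint, which collect can use for the initial allocation:

```rust
fn main() {
    let vec = vec![1, 2, 3, 4, 5];
    // Slice iterators know their exact length up front.
    assert_eq!(vec.iter().size_hint(), (5, Some(5)));
    // collect uses that hint to preallocate the target Vec,
    // so no reallocation is needed while collecting.
    let doubled: Vec<i32> = vec.iter().map(|x| x * 2).collect();
    assert_eq!(doubled, vec![2, 4, 6, 8, 10]);
}
```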

You can do the same thing in the iterative version:

let mut new_vec = Vec::with_capacity(vec.len()); // supposing vec is actually a slice
for el in vec {
    new_vec.push(my_func(el));
}

This is equivalent to the version that uses collect.
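To make the equivalence concrete, here is a small sketch with a placeholder my_func (any fn taking a reference works):

```rust
// Placeholder for the question's `my_func`.
fn my_func(x: &i32) -> i32 {
    x * 10
}

fn main() {
    let vec = vec![1, 2, 3];
    // Iterator version: collect preallocates from size_hint.
    let a: Vec<i32> = vec.iter().map(my_func).collect();
    // Loop version: preallocate manually with with_capacity.
    let mut b = Vec::with_capacity(vec.len());
    for el in &vec {
        b.push(my_func(el));
    }
    assert_eq!(a, b);
}
```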

Even so, there may still be a measurable difference in performance for non-optimized builds. The usual solution to this is to do your benchmarking with optimized builds 🙂


Huh, I thought iter() and into_iter() did the same thing. Is the only difference that iter() takes a reference and into_iter() takes an owned value?


Sometimes they do. It's confusing because there are 3 different implementations of IntoIterator involving Vec. They are:

impl<T> IntoIterator for Vec<T>             // 1
impl<'a, T> IntoIterator for &'a Vec<T>     // 2
impl<'a, T> IntoIterator for &'a mut Vec<T> // 3

#1 is the one you're probably thinking of. It takes an owned value and:

Creates a consuming iterator, that is, one that moves each value out of the vector (from start to end). The vector cannot be used after calling this.
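A small illustration of that consuming behavior (sketch):

```rust
fn main() {
    let vec = vec![String::from("a"), String::from("b")];
    let mut owned: Vec<String> = Vec::new();
    for s in vec {
        // `s` is an owned String moved out of the vector.
        owned.push(s);
    }
    // `vec` was consumed above; using it here would not compile:
    // println!("{}", vec.len()); // error[E0382]: borrow of moved value
    assert_eq!(owned.len(), 2);
}
```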

The other two are just wrappers around .iter() and .iter_mut().

impl<'a, T> IntoIterator for &'a Vec<T> {
    type Item = &'a T;
    type IntoIter = slice::Iter<'a, T>;

    fn into_iter(self) -> slice::Iter<'a, T> {
        self.iter()
    }
}

impl<'a, T> IntoIterator for &'a mut Vec<T> {
    type Item = &'a mut T;
    type IntoIter = slice::IterMut<'a, T>;

    fn into_iter(self) -> slice::IterMut<'a, T> {
        self.iter_mut()
    }
}

These exist so that you can do the following:

fn main() {
    let mut vec = vec![1, 2, 3, 4, 5];
    for i in &vec { // iterate over shared references
        // i: &i32
        // equivalent to `for i in vec.iter() {`
    }
    for i in &mut vec { // iterate over mutable references
        // i: &mut i32
        // equivalent to `for i in vec.iter_mut() {`
    }
    for i in vec { // iterate over the owned values
        // i: i32
    }
}

Yeah, what I've been hearing people say on this forum is that iterators can sometimes be faster, and they're never slower.

That they're never slower is certainly not true. It can often be optimized down to something faster, but sometimes it can't.


The biggest disadvantage of iterators is the performance penalty in debug mode when calling the iterator's methods, i.e. no inlined function calls. That hurts especially when using a lot of them, e.g. when searching for something in an abstract syntax tree. I've already had to deal with code that didn't finish executing in a reasonable timeframe in debug mode, and had to give up on debug builds altogether.


Yeah. I've found myself having to add to Cargo.toml:

[profile.dev]
# Required for dev builds to be usable
opt-level = 2

Rust uses and talks a lot about "zero cost abstractions", but what it actually means is "zero runtime cost abstractions" (but that doesn't quite have the same ring to it).
So there is a cost and when you compile with optimizations turned on, you move that cost from run-time to compile time. That's a good thing, but it means that without optimizations you end up paying this cost entirely at runtime.

Relevant talk: CppCon 2019: Chandler Carruth “There Are No Zero-cost Abstractions”

