Is Iterator::take(n) O(1) regardless of n?

Looking at the code: iterator.rs - source
the superficial answer seems to be "yes", but I'm not sure.

My concern is that take(n) might eagerly iterate through n elements and retain them for future use, in which case it would be O(n).

However, in this particular case, it appears we just create a struct that keeps track of n, so this is O(1)?

1 Like

Yes, that's correct.

Note that Take is defined in libcore, which doesn't have access to heap allocation, so it would not really be practical for it to eagerly iterate and retain an arbitrary number of elements.
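For reference, the adapter is roughly just this (a sketch of the shape of the type, not a verbatim copy): a tiny struct holding the inner iterator and the remaining count, so constructing it does no iteration and no allocation.

// Roughly the shape of core::iter::Take: just the inner iterator plus
// the remaining count. Constructing it is O(1) and allocation-free.
pub struct Take<I> {
    iter: I,
    n: usize,
}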

2 Likes

It may be obvious, but take note of the fact that iterators are lazy and this can stymie naive big-O analysis if you only consider a single function in isolation.

For example, Iterator::skip is also O(1) - it does not call next on the inner iterator. But the first call to next on a Skip may be O(n), as the implementation calls next a bunch of times and discards all the intermediate results. After that, calling next on the same Skip is O(1) (that is, assuming the inner iterator's next is O(1) as well).
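To make the laziness concrete, here is a simplified, hypothetical Skip-like adapter (not the actual libcore code) showing how the O(n) work is deferred to the first call to next:

// A simplified sketch of a Skip-like adapter (not the real implementation):
// the discarding work only happens once next() is called.
struct MySkip<I> {
    iter: I,
    to_skip: usize, // how many items still have to be discarded
}

impl<I: Iterator> Iterator for MySkip<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        // Only the first call pays the O(n) cost of discarding items.
        while self.to_skip > 0 {
            self.to_skip -= 1;
            self.iter.next()?;
        }
        // Every later call is a single O(1) pass-through.
        self.iter.next()
    }
}

fn main() {
    let mut it = MySkip { iter: 0..10, to_skip: 3 };
    assert_eq!(it.next(), Some(3)); // this call did the skipping
    assert_eq!(it.next(), Some(4)); // O(1) from here on
}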

More subtly, calling take(n) on an iterator is O(1). But each call to next() on a Take has to check whether it has already yielded n elements, unless LLVM can optimize the check away. So the effect of calling take in your algorithm may be O(n), even though take itself is not.
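The per-element check looks roughly like this (a sketch in the spirit of the libcore implementation, not a verbatim copy):

// Sketch of a Take-like adapter: every next() call performs one counter
// check, which is the per-item O(1) overhead mentioned above.
struct MyTake<I> {
    iter: I,
    remaining: usize,
}

impl<I: Iterator> Iterator for MyTake<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        if self.remaining != 0 {
            self.remaining -= 1;
            self.iter.next()
        } else {
            None
        }
    }
}

fn main() {
    let taken: Vec<_> = MyTake { iter: 1.., remaining: 3 }.collect();
    assert_eq!(taken, vec![1, 2, 3]);
}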

tl;dr: analyze algorithms in context

7 Likes

Just to be pedantic, because this is my first time working through this -- the 'effect' is that the first n calls to .next() each incur an additional O(1) cost (the check against the take), right?

Correct.

If you call take, you probably need the check anyway, and even when you don't the optimizer can often eliminate it. So it's rarely a major concern.

(Also, this effect can't move an algorithm into a higher complexity class overall, since adding O(n) to O(n) still gives O(n).)

To further illustrate this: imagine calling take(n) itself n times (so you get one deeply nested Take<Take<…<Take<OriginalIterator>>…>> type out of it) and then iterating through this whole iterator.

This whole operation will then cost O(n²). Maybe it's even clearer if we use a boxed trait object, because then the number of times take is called can depend on an argument as well.

use std::{iter, mem};

fn apply_take<'item, Item: 'item>(n: usize, iter: &mut Box<dyn Iterator<Item = Item> + 'item>) {
    // Temporarily swap in an empty iterator so we can take the old one by value.
    let dummy = Box::new(iter::empty());
    let iter_value = mem::replace(iter, dummy);
    // Wrap the old iterator in one more Take layer and put it back.
    let new_iter_value = Box::new(iter_value.take(n));
    *iter = new_iter_value
}

fn demonstration(n: usize) {
    // iterator of length n
    let mut iter: Box<dyn Iterator<Item = _> + '_> = Box::new(iter::repeat(42).take(n));
    
    // apply take(n) n times
    for _ in 0..n {
        apply_take(n, &mut iter);
    }
    
    // iterate through it:
    for _ in iter {}
}

fn main() {
    demonstration(42);
}

The interpretation of where the O(n²) comes from can be done in two ways. Either you say that calling take costs O(1), but it also increases the cost of all .next() calls on the resulting iterator. Then the calculation is n·O(1) = O(n) for the take calls, but this also brings the cost of next up to O(n), and next is itself called n times, so n·O(n) = O(n²) for the final loop. Together, O(n) + O(n²) = O(n²).

Or you say that take itself is lazy, but it is responsible for the entire O(n) overhead that comes from all of the up to n calls to .next() that might follow. The next() calls then stay at O(1) each, and the extra time spent in practice is only due to parts of the take operation being delayed until the next calls trigger them. Then the calculation would be O(n²) for all the take calls and only O(n) for the loop with its next() calls: O(n²) + O(n) = O(n²).
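Either way, the total number of next() calls traveling through all the Take layers grows quadratically. One way to see that empirically is a hypothetical variation of the snippet above, with a counting wrapper inserted around every layer:

use std::cell::Cell;
use std::iter;
use std::rc::Rc;

// Counts every next() call that passes through this wrapper.
struct Counted<I> {
    iter: I,
    count: Rc<Cell<u64>>,
}

impl<I: Iterator> Iterator for Counted<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        self.count.set(self.count.get() + 1);
        self.iter.next()
    }
}

fn main() {
    for n in [10usize, 100, 1000] {
        let count = Rc::new(Cell::new(0u64));
        // iterator of length n
        let mut it: Box<dyn Iterator<Item = i32>> = Box::new(iter::repeat(42).take(n));
        // apply take(n) n times, with a counting layer around each Take
        for _ in 0..n {
            it = Box::new(Counted {
                iter: it.take(n),
                count: Rc::clone(&count),
            });
        }
        // iterate through the whole thing
        for _ in it {}
        println!("n = {n:4}: next() calls through all layers = {}", count.get());
    }
}

The printed counts grow roughly like n², matching either way of doing the bookkeeping.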

Either approach to the analysis has advantages and disadvantages. The first approach more closely models the actual time that each call takes at the moment it is called, but you carry around a lot of “state” in your analysis, because calling apply_take influences the time of subsequent next calls in a way that no longer depends on simple things like “the length of the iterator”.

This is the advantage of the second approach: it simplifies what you need to keep track of. It's a kind of over-estimation to say that take on an iterator of length n takes O(n + 1), because that's only true if the resulting iterator is fully evaluated, i.e. iterated over. Otherwise you've accounted for time overhead that never happened. This kind of bookkeeping, which assigns the runtime costs of future next calls to the earlier take call, is part of a more general framework called amortized analysis, which is particularly common around instances of “lazy evaluation”. It's a simple form of that, following the approach of “pretend everything is eagerly evaluated”. In this case – for Rust – it's kind of like “pretend you added an extra .collect::<Vec<_>>().into_iter() at every stage”, though this analogy doesn't work for all iterator operations, especially the ones that are cheaper than O(n), like e.g. the method Iterator::skip.
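As a tiny illustration of that “pretend everything is eagerly evaluated” analogy (purely for intuition, not something you'd write in real code):

fn main() {
    let n = 5;
    let base = 0..100;

    // Lazy: this take(n) call is O(1); its O(n) cost is deferred to later next() calls.
    let lazy = base.clone().take(n);

    // "Pretend it's eager": insert a collect() after the stage, paying the O(n) up front.
    let eager = base.take(n).collect::<Vec<_>>().into_iter();

    // Both yield the same items; only when the work happens differs.
    assert!(lazy.eq(eager));
}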

My perspective comes from experience with lazy functional programming in Haskell, where the type that's (somewhat) analogous to Rust's Box<dyn Iterator> is the standard “list” type, which also features a take function that every Haskeller would consider to have a complexity of O(n), where n is the length of the resulting list. Actually, in terms of parameters, calling take on an iterator/list of length l with an argument k results in a list of length min(l, k), so the time would be O(min(l, k) + 1). You can see this kind of interpretation on other functions on lists in Haskell's documentation, e.g. the map function, which claims O(n) [implicitly assuming that n is the length of the list and that the function being applied takes O(1) time].

This kind of thing is even more common for more complicated data structures, e.g. Map, where you'll find a Big-Oh time on every single function, something that I often feel is badly missing in Rust's standard library and many crates; by the way, those docs also clarify

The amortized running time is given for each operation, with n referring to the number of entries in the map.
(^^^ emphasis mine)

in the beginning. It's maybe not a surprise that one of the crates that does add this kind of information is the im crate, which provides immutable data structures. E.g. look at the docs of im::HashMap. See also its docs' Performance Notes section, which talks about which of these operations may be amortized and how that's indicated.

3 Likes

By the way, another great Rust example of amortization which I should probably mention is Vec, particularly Vec::push.

Vec does not guarantee any particular growth strategy when reallocating when full, nor when reserve is called. The current strategy is basic and it may prove desirable to use a non-constant growth factor. Whatever strategy is used will of course guarantee O(1) amortized push.
(source)
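A quick way to watch the amortization happen is to log every reallocation while pushing (the exact capacities are an implementation detail, as the quote points out):

fn main() {
    let mut v: Vec<u32> = Vec::new();
    let mut last_cap = v.capacity();
    for i in 0..1_000u32 {
        v.push(i);
        // A capacity change means a reallocation (plus copying) happened on this push.
        if v.capacity() != last_cap {
            last_cap = v.capacity();
            println!("after {:4} pushes: capacity is now {}", v.len(), last_cap);
        }
    }
    // With the current (unspecified) growth strategy, reallocations quickly become
    // rare, so the occasional O(n) copy averages out to O(1) amortized per push.
}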
