Rc<Vec<T>> vs ImVec<T>

What are situations where Rc<Vec<T>> works better than Immutable::Vector<T> ?

If we're not modifying, copying either is O(1).

If we are modifying, an ImVec update is O(log n) per operation, whereas a shared Rc<Vec<T>> needs one O(n) copy of the whole Vec first, after which each mutation is O(1).
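The Rc side of that trade-off can be sketched with std alone, using `Rc::make_mut` for clone-on-write (a minimal illustration, not a claim about how anyone should structure real code):

```rust
use std::rc::Rc;

// Sketch of the "O(n) copy, then O(1) mutations" pattern for Rc<Vec<T>>.
// Rc::make_mut clones the inner Vec only while the Rc is shared.
fn main() {
    let a: Rc<Vec<i32>> = Rc::new(vec![1, 2, 3]);
    let mut b = Rc::clone(&a);     // O(1): pointer copy + refcount bump

    Rc::make_mut(&mut b).push(4);  // O(n): copies [1, 2, 3], since `a` still shares it
    Rc::make_mut(&mut b).push(5);  // O(1) amortized: `b` is now sole owner, no copy

    assert_eq!(*a, vec![1, 2, 3]);           // original handle is untouched
    assert_eq!(*b, vec![1, 2, 3, 4, 5]);
}
```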

In practice are there situations where Rc<Vec<T>> makes more sense than ImVec<T> ?

EDIT: (after reading @OptimisticPeach 's comment): This is my fault, I should have been clearer. By ImVec, I'm referring to https://docs.rs/im/14.1.0/im/vector/enum.Vector.html


As far as I get it... ImVec sounds like

struct ImVec<T> {
    mine: Vec<T>,
    rest: Rc<Vec<T>>,
    orderings: Vec<(usize, usize)>, // what's mine and what's theirs
}
So, if the implementation of your ImVec is like that and you're not modifying, then there's really no point to having it. (I'm sure there's a more correct way to write this; as sketched, it can only layer modifications over a preexisting Rc<Vec<T>> and can't update another ImVec<T>.) If you're writing as well, and it's an O(log n) situation vs. an O(1)-after-an-O(n)-copy situation, then it depends on the specific case: if you're cloning frequently, Rc makes sense; otherwise ImVec.
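The "cloning frequently" case can be shown with std's Rc alone (illustrative numbers; the point is only that no element data is duplicated):

```rust
use std::rc::Rc;

fn main() {
    let shared: Rc<Vec<u8>> = Rc::new(vec![0u8; 1_000_000]);

    // Each clone copies one pointer and bumps a counter; the million
    // bytes behind it are never duplicated.
    let copies: Vec<Rc<Vec<u8>>> = (0..100).map(|_| Rc::clone(&shared)).collect();

    assert_eq!(Rc::strong_count(&shared), 101); // original + 100 clones
    assert_eq!(copies[0].len(), 1_000_000);
}
```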

Yes. One is when you never need to modify the vector after first creating it. Another is when the operations performed on it are O(n) anyway.


Sorry, see edit above. By ImVec, I'm referring to https://docs.rs/im/14.1.0/im/vector/enum.Vector.html


Is the advantage here that Rc::new(...) moves the heap-allocated Vec and is O(1), while ImVec::from(...) has to build a new structure and is O(n)?
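The O(1) half of that is observable in std: Rc::new moves only the Vec's (ptr, len, cap) header into a new allocation, and the element buffer's address doesn't change. (im::Vector::from, by contrast, has to chunk the elements into its tree, which is O(n); that part isn't shown here since it needs the im crate.)

```rust
use std::rc::Rc;

fn main() {
    let v: Vec<i32> = (0..1000).collect();
    let buf = v.as_ptr(); // address of the heap element buffer

    // Moving the Vec into an Rc copies its small header, not its elements:
    let rc = Rc::new(v);
    assert_eq!(rc.as_ptr(), buf); // same buffer, untouched
    assert_eq!(rc.len(), 1000);
}
```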

Iteration over im::Vector<T> is amortized O(1), while iteration over Vec<T> is O(1) (without amortization costs). Random lookups are O(log(n)) vs O(1) over the same types.

Rc<Vec<T>> allows immutably sharing Vec<T>, and should generally perform better.

edit: To clarify, iterating either data structure is O(n); I was referring specifically to the time complexity of the individual Iter::next() implementations.

I was going to make the counter-argument that, assuming the interior nodes are "32 wide", for a vec with 1B elements, log32(n) is only ~6.

But then I realized: for a plain Vec, to access v[i], we can jump directly to where the element is located. However, with ImVec[i], we may have to do ~6 "sequential, randomish memory lookups" to figure out where to jump to -- and this can be quite a slowdown.
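A minimal sketch of that lookup path (illustrative index arithmetic for a plain 32-ary trie, not im's actual implementation, which is an RRB tree with relaxed nodes): each 5-bit slice of the index picks a child, and each level costs one dependent memory load before the next slice can be used.

```rust
// Which child to follow at a given level of a 32-way (2^5) trie.
fn slot(i: usize, level: u32) -> usize {
    (i >> (5 * level)) & 0x1f
}

fn main() {
    // Locating element 1_000_000 in a depth-4 trie touches 4 nodes,
    // root to leaf, each found only after loading the previous one:
    let path: Vec<usize> = (0..4).rev().map(|lvl| slot(1_000_000, lvl)).collect();
    assert_eq!(path, vec![30, 16, 18, 0]);
    // A plain Vec instead computes base + i * size_of::<T>() and jumps once.
}
```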
