Is there any advantage of Vec<Box<T>> over Vec<T>?

As stated in the title: is there any advantage of Vec<Box<T>> over Vec<T>?

Context: reading some code that uses Vec<Box<T>> whereas I would just use Vec<T>


Sure, there are advantages. If T is fairly large and you need to move it around a lot, a Box<T> is cheaper to move. And if T is an unsized type, e.g. Vec<Box<[Foo]>> or Vec<Box<str>> or Vec<Box<dyn Trait>>, then the Box is necessary, because Vec doesn’t support unsized element types directly. On the other hand, Box has disadvantages: creating or destroying it involves an extra (de)allocation, using it requires following an additional pointer indirection, and linearly accessing a Vec<T> has better cache locality than a Vec<Box<T>>, where subsequent boxes in the vector can point to very different locations in memory.
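To make the unsized case concrete, here's a small sketch (the trait and types are made up for illustration):

```rust
use std::mem::size_of;

// A made-up trait and types, just to illustrate the unsized case.
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { radius: f64 }
struct Square { side: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}
impl Shape for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

fn main() {
    // `dyn Shape` is unsized, so the elements must live behind a pointer;
    // a plain `Vec<dyn Shape>` wouldn't compile.
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Circle { radius: 1.0 }),
        Box::new(Square { side: 2.0 }),
    ];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    println!("total area = {total}");

    // Moving a Box is always a pointer-sized copy, however big the pointee is.
    println!("Box<[u8; 4096]> is {} bytes", size_of::<Box<[u8; 4096]>>());
}
```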

If the code is publicly available open source, perhaps you could provide some actual context in the form of a link, maybe it’s easy to see why Vec<Box<…>> is used in that particular case.


In my opinion, any use of Vec<Box<T>> should always explain why the code is doing that with a comment unless T is a trait object or slice. For example, in Tokio there's a place where we do it because the values we are putting into the vector were already boxed.

/// Cores that have observed the shutdown signal
/// The core is **not** placed back in the worker to avoid it from being
/// stolen by a thread that was spawned as part of `block_in_place`.
#[allow(clippy::vec_box)] // we're moving an already-boxed value
shutdown_cores: Mutex<Vec<Box<Core>>>,



Good point. Here you go: `Vec<Box<Term>>` in scryer-prolog

etc ...


I don't understand this example. The logic seems to be "it is okay to Box again because it is already Boxed". This seems to imply that double-boxing (which in my opinion is just two wasted levels of indirection) is somehow okay.

The boxed value is created and boxed when the runtime is started, but it doesn't go into that vector until the runtime is shutting down. Using Vec<Core> here would be more expensive than Vec<Box<Core>> because we would have to copy it out of the box it's already in.

The reason it is in a box before going in the vector is because it gets moved around a lot.


Put another way, are you saying:

the function that manipulates this field gets arguments of type Box<T>, not T; therefore it is cheaper for this field to store Vec<Box<T>>?

Is this what you mean by "already boxed"?

Yes, by already boxed, I mean that it had already been boxed when we got it.


This whole situation is such a weird and counter intuitive optimization to me.

The benefit of Vec<Box<T>> is that on move, we copy only a pointer (sizeof(usize), i.e. 8 bytes on 64-bit targets) instead of sizeof(T); but the downside compared to Vec<T> is that the elements are no longer contiguous in memory. So we save on the write but have bad cache locality on reads.
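As a rough illustration of the move-cost side of that trade-off (BigState here is a hypothetical stand-in for something large like Tokio's Core, not the real type):

```rust
use std::mem::size_of;

// Hypothetical stand-in for a large piece of per-worker state.
#[allow(dead_code)]
struct BigState {
    buffer: [u8; 4096],
    counter: u64,
}

fn main() {
    // Moving a BigState by value (e.g. pushing it into a Vec<BigState>)
    // copies the whole struct...
    println!("moving BigState copies {} bytes", size_of::<BigState>());
    // ...while moving a Box<BigState> copies only the pointer.
    println!("moving Box<BigState> copies {} bytes", size_of::<Box<BigState>>());
}
```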

It makes sense because the values are only read from the vector once. It's used like this:

fn shutdown(&self, core: Box<Core>) {
    let mut cores = self.shutdown_cores.lock();
    cores.push(core);

    // Only the last worker thread to check in continues past this point.
    if cores.len() != self.remotes.len() {
        return;
    }

    for mut core in cores.drain(..) {
        // This destroys the `Core` object.
        core.shutdown();
    }

    // .. some other stuff
}

Each worker thread calls this method. The last worker thread to call it then performs the single-threaded phase of the runtime shutdown.


Maybe the authors felt a need for “consistently” boxing the Terms between the vec and other enum variants… maybe it’s just an oversight and Vec<Term> would be better. I’m also finding instances of &Vec<…> method arguments in the same code-base, so I’m assuming not much thought went into these decisions (and also, I assume clippy isn’t used in this project).

On the other hand, it’s also likely that the performance difference Vec<Box<Term>> vs. Vec<Term> – even if the latter is beneficial – is fairly insignificant.


Clippy actually includes a lint (large_enum_variant) for boxing stuff in enum variants, in order to avoid wasting space when there's a single giant variant (which is unused most of the time) alongside several more frequently-used small variants. Based on this, I'd guess there's some sort of "common wisdom" about this problem circulating, and so people are inclined to box enum variants that are significantly larger than the others.
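A quick sketch of the effect that lint is about (Huge and the two enums are made up for illustration):

```rust
use std::mem::size_of;

// A large payload used by only one variant.
#[allow(dead_code)]
struct Huge([u8; 1024]);

// Every value of this enum is as large as its largest variant...
#[allow(dead_code)]
enum Unboxed {
    Small(u8),
    Big(Huge),
}

// ...whereas boxing the rare large variant shrinks the enum
// down to a couple of machine words.
#[allow(dead_code)]
enum Boxed {
    Small(u8),
    Big(Box<Huge>),
}

fn main() {
    println!("Unboxed: {} bytes", size_of::<Unboxed>());
    println!("Boxed:   {} bytes", size_of::<Boxed>());
}
```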


At a brief glance, that looks like a mistake to me.

The performance impact of this sort of thing can be quite significant if the rest of the code is well tuned. The project I'm working on has vanishingly few allocations on the hot path. (~1 allocation per 150 calls). Accidentally using Vec<Box<_>> instead of Vec<_> in each of those calls would devastate performance in my library. Deallocating large vecs-of-boxes can also be much slower than you expect.

But for code that already uses allocations everywhere, or code off the hot path, it might not matter much.

My advice is to see if you can throw together a quick benchmark. Then see if you can refactor the code to avoid the extra boxes and see how your benchmark results change.

FWIW, here is a list of usages of such a thing: a code search for the regex `Vec<Box<[A-Z]` over Rust files

I agree that for Sized pointees (i.e., when the Box isn't needed for a slice or trait object), and besides very specific optimizations such as @alice's example, it's in general an anti-pattern: the examples with enums are missing the fact that Vec itself is already a form of indirection.

Another way to see this is that a Vec<T> to which we don't add or remove elements, is equivalent to a Box<[T]>; so wondering about Vec<Box<T>> is similar to Box<[Box<T>]>: the double indirection ought to be more blatant there :slightly_smiling_face:
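For example, a Vec that is no longer growing can be converted with Vec::into_boxed_slice, which makes the nested-indirection view explicit:

```rust
fn main() {
    // A Vec we no longer grow can drop its spare capacity and become a Box<[T]>.
    let v: Vec<u32> = vec![1, 2, 3];
    let b: Box<[u32]> = v.into_boxed_slice();
    assert_eq!(&*b, &[1, 2, 3][..]);

    // Seen that way, a frozen Vec<Box<T>> is essentially Box<[Box<T>]>:
    // one indirection for the slice, plus one more per element.
    let nested: Box<[Box<u32>]> = vec![Box::new(1), Box::new(2)].into_boxed_slice();
    assert_eq!(*nested[0], 1);
    assert_eq!(*nested[1], 2);
}
```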


As a particular example of this, Result<(), Box<SomethingReallyBig>> is way better than Result<(), SomethingReallyBig>, assuming you almost never take the error path (or that it's not perf-critical when you do).

Being able to have the return value be just a pointer -- in practice, thanks to layout optimizations, though this one isn't guaranteed -- makes the function calls much nicer.
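A quick way to see the difference (SomethingReallyBig here is just a hypothetical large error payload):

```rust
use std::mem::size_of;

// A hypothetical large error payload.
#[allow(dead_code)]
struct SomethingReallyBig([u8; 512]);

fn main() {
    // Returning the error by value drags ~512 bytes through every call...
    println!("unboxed: {} bytes", size_of::<Result<(), SomethingReallyBig>>());
    // ...while the boxed version is, in practice (though not guaranteed by
    // the language), just a pointer, thanks to niche layout optimization.
    println!("boxed:   {} bytes", size_of::<Result<(), Box<SomethingReallyBig>>>());
}
```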


I could imagine this

is due to something like this

Since terms may appear already boxed in other enums of the vector, and maybe that's where they usually appear, it might make sense to keep them boxed in the vector (especially if they sometimes get repacked in those other variants. OTOH, it might still be cheaper to unbox them when adding to the vector - depending on what is being done with the vector later.


As a followup to the original question:

Due to space savings, even for normal-sized T, Vec<Option<Box<T>>> often makes more sense than Vec<Option<T>>, right?

That isn't a net savings of space, since you now have the vector of pointers in addition to the T objects themselves on the heap (and probably also some additional overhead from allocator-internal bookkeeping and memory fragmentation).

It's not faster when you actually need to read the T values, because of the additional indirection.

The only case I can think where it might be faster is if you want to iterate over the vector without dereferencing the boxes; for example if you just want to check which ones are set to None.


Suppose the Vec has length n.

Vec<Option<T>> is going to use space n * sizeof(Option<T>) >= n * sizeof(T).

Vec<Option<Box<T>>> is going to use space n * sizeof(Option<Box<T>>) + k * sizeof(T), where k is the number of non-None entries.

For large sizeof(T) and small k/n, this absolutely seems like a savings in space.

Am I miscalculating something ?


You're right. I was mistakenly only thinking of cases where the Vec was (mostly) full. When a lot of the slots are unused, then Vec<Option<Box<T>>> will use less space, as long as Option<T> is wider than a pointer.
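Here's a small sketch of that arithmetic (Big, n, and k are made-up numbers), relying on the fact that Option<Box<T>> is pointer-sized thanks to the guaranteed null-pointer optimization:

```rust
use std::mem::size_of;

// Made-up element type, just to redo the space calculation above.
#[allow(dead_code)]
struct Big([u8; 256]);

fn main() {
    let n = 1000; // total slots
    let k = 10;   // occupied (Some) slots

    // Dense: every slot reserves room for a whole `Big`, occupied or not.
    let dense = n * size_of::<Option<Big>>();

    // Sparse: every slot is one pointer (Option<Box<T>> is pointer-sized,
    // guaranteed by the null-pointer optimization), plus one heap
    // allocation per occupied slot.
    let sparse = n * size_of::<Option<Box<Big>>>() + k * size_of::<Big>();

    println!("dense:  {dense} bytes");
    println!("sparse: {sparse} bytes");
}
```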