I'm implementing some code that will set_len on a Vec, b/c it wants to copy data into that Vec, and so why initialize twice? (It literally makes a measurable difference in performance in my use-case.) I want to be as careful as possible, so it seems like it would be bad to allow this for element types with pointers (b/c the Rust code might attempt to destruct the elements before copying in new data?).
So it seems like I'd want to mark the element type with a trait that means "no pointers in representation". And it would seem like Copy is such a trait, since you can't naively copy a type with pointers. But I'm not quite sure, b/c of borrowed pointers, so I figured I'd better ask. I did search, but found nothing responsive to my question, which... well, I guess it's a little esoteric.
You're only allowed to use set_len to increase the length of the vec after you've written your data into it (see the safety conditions in its docs), typically by getting a raw pointer with .as_mut_ptr(), then using pointer arithmetic and e.g. ptr::write calls to write the data and initialize all the elements.
Since ptr::write doesn't call any destructors anyway, the kind of distinction you're trying to make here isn't really necessary anymore.
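A minimal sketch of that pattern, with a made-up element type and fill values just for illustration: reserve capacity, write every element through a raw pointer, then call set_len.

```rust
// Hypothetical helper illustrating the as_mut_ptr / ptr::write / set_len
// pattern; the values written are arbitrary.
fn fill_by_raw_ptr(n: usize) -> Vec<u64> {
    let mut v: Vec<u64> = Vec::with_capacity(n);
    let p = v.as_mut_ptr();
    for i in 0..n {
        // write() stores the value without reading or dropping the
        // uninitialized bytes that were there before.
        unsafe { p.add(i).write(i as u64 * 10) };
    }
    // Safety: all n elements are now initialized, and n <= capacity.
    unsafe { v.set_len(n) };
    v
}

fn main() {
    assert_eq!(fill_by_raw_ptr(4), vec![0, 10, 20, 30]);
}
```

Note that no destructor ever runs on the pre-existing garbage bytes: ptr::write overwrites them blindly, which is exactly why the "no pointers in representation" distinction doesn't come up.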
I've got to wonder about your use-case though. If you write to it linearly, the Vec already keeps uninitialized excess capacity next to the initialized contents, so there's no need to "initialize twice" in the first place. If, on the other hand, you need to write the contents out of order, then you'd need to pre-initialize if you want to stick to safe code.
In recent Rust versions, Vec::spare_capacity_mut() is also useful for this. It allows you to write to the uninitialized memory without assuming anything about its previous contents, then call set_len() once every element is written to. To quote the example from its documentation:
// Allocate vector big enough for 10 elements.
let mut v = Vec::with_capacity(10);

// Fill in the first 3 elements.
let uninit = v.spare_capacity_mut();
uninit[0].write(0);
uninit[1].write(1);
uninit[2].write(2);

// Mark the first 3 elements of the vector as being initialized.
unsafe {
    v.set_len(3);
}

assert_eq!(&v, &[0, 1, 2]);
The other replies here give good answers for what you probably want to do, but I wanted to explicitly answer the question in the topic, since shared (but not exclusive!) references being Copy is important.
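To make that concrete: a Copy bound does not rule out pointers in the representation, because shared references are themselves Copy. A tiny demonstration (`duplicate` is a hypothetical helper, not from any library):

```rust
// A hypothetical function that requires Copy.
fn duplicate<T: Copy>(t: T) -> (T, T) {
    (t, t)
}

fn main() {
    let x = 5u64;
    // Fine: &u64 is Copy, even though it is a pointer.
    let (a, b) = duplicate(&x);
    assert_eq!((*a, *b), (5, 5));

    let mut y = 5u64;
    let m = &mut y;
    // duplicate(m);  // would NOT compile: &mut u64 is not Copy
    *m += 1;
    assert_eq!(y, 6);
}
```

So "is Copy" and "contains no pointers" are different properties; Copy only rules out exclusive references and owning pointers.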
My use-case: I need to concatenate thousands of vectors, each in the size range of 1/4-1 gigabyte. Yes, the final vector is indeed a goodly fraction of a terabyte of memory, and I construct more than one of them. Indeed, the inner loop of my problem contains a linear combination of two of these matrices, so I need to go attack that problem next. Concatenating serially is slower than doing so in parallel. These vectors are either Complex64 or u64. But I figure this code (parallel concatenation) might be more generally useful, so I wanted to work out what the type/trait constraint should be for the element type.
a. sure, I could initialize it before I concat these vectors into it. But that initialization would itself be serial, and hence limited by the memory bandwidth of a single core. That's what I'm trying to avoid: being limited to a single core.
b. and I really do need to be able to do this concat fully-in-parallel. It's a massive difference in performance on a 40-way SMP.
c. the code exists and works. I should be able to open-source it soon, though it's no biggie, complexity-wise -- I mean, I'm not claiming it's something marvelous. It just solves my problem.
d. Another way of putting it is: I'm looking for a way to constrain a type so that its destructor is empty -- no code. That's enough for my needs.
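For point (d), a sketch of how that constraint can be expressed: a Copy bound suffices because Copy and Drop are mutually exclusive, and std::mem::needs_drop gives a direct (conservative) check for drop glue. `fresh_buffer` below is a hypothetical helper, not the actual code.

```rust
use std::mem;

// Hypothetical: allocate an uninitialized destination buffer for types
// whose destructor is guaranteed empty.
fn fresh_buffer<T: Copy>(len: usize) -> Vec<T> {
    // Copy implies T has no drop glue, so a later set_len over memory
    // that is only overwritten (never read) cannot run a destructor
    // on garbage.
    debug_assert!(!mem::needs_drop::<T>());
    Vec::with_capacity(len)
}

fn main() {
    assert!(!mem::needs_drop::<u64>());
    assert!(mem::needs_drop::<String>()); // String frees its heap buffer
    let v: Vec<u64> = fresh_buffer(1 << 10);
    assert!(v.capacity() >= 1 << 10);
}
```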
More details on the use-case.
this is in order to build sparse matrices (for sprs)
imagine 2^24 x 2^24 Complex64 sparse matrices, with 10-30k nonzero entries per row. Doing rough math, that comes out to about 32 bytes per entry, let's say 32k entries per row. So 2^20 bytes per row. That's an overestimate, but not by more than a factor of two.
I can build these rows in parallel, and I do. B/c too-fine-grained parallelism is bad news too, I do it in chunks of 2^10 rows, so you have 2^14 chunks, and each is about a gigabyte.
On the machines I'm using, they can copy that in a little under a half-second, on a single core. So to assemble the final sparse-matrix, I need to concat 2^24 arrays of Complex64, and likewise of u64.
So if I first compute the length of each constituent array, and its final offset in the final array, I can construct that final array, set_len it to its final size, cut it apart into mutable slices, and then in parallel do copy_from_slice into those mutable slices.
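A sketch of that scheme (not the actual code; sizes and element type are made up), using spare_capacity_mut so that no &mut [T] to uninitialized memory is ever formed: cut the destination's spare capacity into one slice per source, fill the slices from scoped threads, then set_len once everything is written.

```rust
use std::thread;

// Hypothetical parallel concatenation over scoped threads.
fn parallel_concat(parts: &[Vec<u64>]) -> Vec<u64> {
    let total: usize = parts.iter().map(|p| p.len()).sum();
    let mut out: Vec<u64> = Vec::with_capacity(total);
    let mut spare = out.spare_capacity_mut();
    thread::scope(|s| {
        for part in parts {
            // Peel off a destination slice the size of this source.
            let (dst, rest) =
                std::mem::take(&mut spare).split_at_mut(part.len());
            spare = rest;
            s.spawn(move || {
                // MaybeUninit::write initializes each slot without
                // reading or dropping the old garbage bytes.
                for (d, &v) in dst.iter_mut().zip(part) {
                    d.write(v);
                }
            });
        }
    });
    // Safety: the scope joined all threads, so all `total` elements
    // are initialized.
    unsafe { out.set_len(total) };
    out
}

fn main() {
    let parts = vec![vec![1, 2, 3], vec![4, 5], vec![6]];
    assert_eq!(parallel_concat(&parts), vec![1, 2, 3, 4, 5, 6]);
}
```

In real code you'd presumably use a thread pool (e.g. rayon) rather than one thread per chunk, but the slicing-and-set_len structure is the same.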
If you're looking for a "no destructor" bound, I'm not sure why you brought up pointers - heap allocations, maybe? Anyway, a Copy bound does accomplish it, but it's a crude approximation. As has been covered above, you really want to work with explicitly uninitialized types. MaybeUninit doesn't destruct.
Oh yes, now I see. Sorry I just got back from the gym and so my brain wasn't working fully -- I just wrote up my use-case and then settled in to dinner. But I see now what you're suggesting, and will try it.