[Solved] In-place conversion of a (128*N)-elem Vec<f32> into an N-elem Vec<Vec<f32>> of 128-elem Vecs

#1
  1. I have

     x: Vec<f32>
     x.len() == 128 * N

  2. I want to, in place, convert this into a Vec<Vec<f32>>, i.e. get N Vecs, each of which holds 128 f32’s.

  3. Is it possible to do this in place?

#2

No. They have completely different memory layouts (the second requires an extra N * 3 * size_of::<usize>() bytes for the inner Vec headers), and you can’t split ownership of allocations in Rust.

1 Like
#3

However, you can access a 1-dimensional array [f32; N_128] as a 2-dimensional array
[[f32; 128]; N], as they do have the same memory layout for commensurate consts (i.e. N_128 == 128 * N).
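A minimal sketch of that reinterpretation (the helper name `as_rows` and the raw-pointer cast are my own, not from the thread):

```rust
/// Hypothetical helper (not from the thread): view a flat slice as
/// rows of 128, relying on the layout equivalence described above.
/// Panics if the length is not a multiple of 128.
fn as_rows(flat: &[f32]) -> &[[f32; 128]] {
    assert_eq!(flat.len() % 128, 0, "length must be a multiple of 128");
    // SAFETY: `[f32; 128]` has the same size and alignment as 128
    // consecutive `f32`s, and the length check keeps the row count in bounds.
    unsafe {
        std::slice::from_raw_parts(flat.as_ptr().cast::<[f32; 128]>(), flat.len() / 128)
    }
}

fn main() {
    let flat = vec![1.5f32; 256];
    let rows = as_rows(&flat);
    assert_eq!(rows.len(), 2);
    assert_eq!(rows[1][127], 1.5);
}
```

This only reborrows the data; it does not change who owns the allocation.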

1 Like
#4

You can also access a &[f64] (or a Vec<f64>) as a &[[f64; 128]]. I have a crate for this:

https://crates.io/crates/slice-of-array

use ::slice_of_array::prelude::*;

let vec = vec![0; 1024];
let nested: &[[i32; 128]] = vec.nest();
2 Likes
#5

You can convert Vec<T> <=> Vec<[T; N]> in O(1) by using Vec::from_raw_parts carefully.
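For concreteness, here is one way that conversion could look (a sketch under the assumption that both the length and the capacity are multiples of 128; the function name `nest` is mine):

```rust
use std::mem::ManuallyDrop;

/// Sketch: O(1) conversion of Vec<f32> into Vec<[f32; 128]> via
/// Vec::from_raw_parts. Panics unless both len and capacity are
/// multiples of 128, so the allocation's Layout stays identical.
fn nest(v: Vec<f32>) -> Vec<[f32; 128]> {
    assert_eq!(v.len() % 128, 0);
    assert_eq!(v.capacity() % 128, 0);
    // Prevent the original Vec from freeing the buffer we are reusing.
    let mut v = ManuallyDrop::new(v);
    let (ptr, len, cap) = (v.as_mut_ptr(), v.len(), v.capacity());
    // SAFETY: same buffer, same total size and alignment; only the
    // element type and the counts change.
    unsafe { Vec::from_raw_parts(ptr.cast::<[f32; 128]>(), len / 128, cap / 128) }
}

fn main() {
    let n = nest(vec![2.0f32; 256]);
    assert_eq!(n.len(), 2);
    assert_eq!(n[0][5], 2.0);
}
```

Note that the capacity assertion can fail in practice, since the program does not fully control what capacity the allocator hands back; that caveat is discussed in the posts below.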

1 Like
#6

Solved. Thanks for all the suggestions!

#7

I don’t think you can correctly update the capacity field, which could cause problems because Rust uses sized deallocation (on deallocation, the allocator is told the size of the allocation, as computed from the capacity).

#8

The requirement is that

layout must be the same layout that was used to allocate that block of memory,

https://doc.rust-lang.org/std/alloc/trait.GlobalAlloc.html#tymethod.dealloc

The size+align of an M-capacity Vec<[T; N]> is the same as that of an M*N-capacity Vec<T>, so it’s fine.

(transmuting is definitely not fine – for multiple reasons – but from_raw_parts can be done soundly.)
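The size+align claim is easy to check with std::alloc::Layout (a small sanity check, using M = 2 and N = 128 as an example):

```rust
use std::alloc::Layout;

fn main() {
    // An M*N-capacity Vec<f32> and an M-capacity Vec<[f32; 128]> are
    // backed by these two Layouts; they must match for dealloc to be fine.
    let flat = Layout::array::<f32>(2 * 128).unwrap();
    let nested = Layout::array::<[f32; 128]>(2).unwrap();
    assert_eq!(flat, nested); // same size (1024 bytes) and align (4)
}
```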

#9

What if the capacity of the Vec<T> is not divisible by N?

#10

Then you panic ¯\_(ツ)_/¯

#11

You make it sound like this is something that should never happen in a correctly-written program… but you have no control over what sizes the allocator decides to use! Even shrink_to_fit does not promise exact control over the capacity.

#12

Okay. Wait a second. It’s clearly possible to implement a function which shrinks the capacity to exactly equal the length:

Vec::from(vec.into_boxed_slice())

(and notice that this uses shrink_to_fit() as part of its implementation)

…sooooo I guess the documentation of shrink_to_fit is basically just trolling us?
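A quick check of that round-trip (relying on the documented behavior that Vec::from(Box<[T]>) yields a Vec whose capacity equals its length):

```rust
fn main() {
    let mut v: Vec<i32> = Vec::with_capacity(10);
    v.extend([1, 2, 3]);
    assert!(v.capacity() >= 10);
    // Round-trip through a boxed slice to pin capacity to the length.
    let v: Vec<i32> = Vec::from(v.into_boxed_slice());
    assert_eq!(v.capacity(), v.len()); // exactly 3 now
}
```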


note: I’ve taken this line of discussion to a new thread.

1 Like
#13

I don’t follow why you think shrink_to_fit docs are trolling us. Can you explain?

shrink_to_fit reallocs the storage, and you can get an allocation that’s larger than needed. But Alloc::dealloc doesn’t require the dealloc size to be exactly what was allocated - it requires that the Layout given to dealloc “fits” the allocation. AFAIK, this means the size portion must be in the [requested_size, usable_size] range.

#14

Other interesting alternatives here:

  • x.chunks(128) gives you a lazily computed iterator over slices of length 128, i.e. close to the pattern you asked for.

  • Thus, you can copy your Vec elements into the required nested Vecs with something along these lines:

    let nested: Vec<Vec<f32>> =
      vec_of_f32s
        .chunks(128)
        .map(|chunk| chunk.to_vec())
        .collect();


    Obviously this is not what you asked for, but may be of interest for someone with somewhat similar needs (especially the lazy version)

1 Like