Vec is designed with the assumption that it is allowed to reallocate and move its members. You can avoid reallocating, but none of the API lets you assume reallocation isn't happening.
I think you will have to make a custom type and use unsafe for that.
Probably something like:
use std::cell::UnsafeCell;
use std::mem::{self, MaybeUninit};
use std::sync::atomic::{AtomicUsize, Ordering};

pub struct GrowOnly<T, const MAX_CAPACITY: usize> {
    storage: [UnsafeCell<MaybeUninit<T>>; MAX_CAPACITY],
    initialized: AtomicUsize,
}

impl<T, const MAX_CAPACITY: usize> GrowOnly<T, MAX_CAPACITY> {
    // Safety: no other thread can be inserting, otherwise we need a way more
    // complex approach to track how far along initialization is.
    pub unsafe fn insert(&self, value: T) {
        // check initialized < MAX_CAPACITY
        // write at storage[initialized]
        // increment initialized so that readers can access the new element
    }

    pub fn get_ready<'a>(&'a self) -> &'a [T] {
        // worst case we load just before the insert stores and miss the most recent elements
        let amount = self.initialized.load(Ordering::Relaxed);
        let init_slice = &self.storage[..amount];
        // memory layout is the same, and the slice is initialized and not going to be mutated
        unsafe { mem::transmute::<&[UnsafeCell<MaybeUninit<T>>], &[T]>(init_slice) }
    }
}
Pinned_vec sounds like a good approach. The problem is that it would need a split_const_from_mut method that does not allow changing the immutable part while using the mutable part?
When the index in your usage is only used to access elements and has no further meaning (such as ordering, or indicating when an element was created), you can instead hold a reference to the stored element and remove the entire vector, the lock, and the related code.
When you read the length n, you need a synchronization guarantee that all elements in the range 0..n are visible to you.
For this, you need a Release store in the writing thread and an Acquire load in the reading thread, as these only work in pairs. A Release paired with Relaxed does nothing.
Ordering::Release: When coupled with a store, all previous operations become ordered before any load of this value with Acquire (or stronger) ordering. In particular, all previous writes become visible to all threads that perform an Acquire (or stronger) load of this value.
Ordering::Acquire: When coupled with a load, if the loaded value was written by a store operation with Release (or stronger) ordering, then all subsequent operations become ordered after that store. In particular, all subsequent loads will see data written before the store.
I tried to implement the collection you described, but I didn't test it so IDK if it fully works or if the implementation is safe / sound.
I would be really glad if someone could review this code.
But yeah, an easier solution would be to implement such a data structure yourself with a mutex for the push operation.
Even better would be to remove the Mutex from the data structure and have a "Reader" that derefs into a slice and a "Writer" with fn push(&mut self, value: T) that stores T and does an atomic increment of the length (Release), which the "Reader" will load (Acquire; or Relaxed, if you cache the previous length and use fence(Acquire) when it changes). The reader then constructs the slice via unsafe code. That should be the only unsafe block in the data structure, and there are no locks (well, the "Writer" will be under a lock or some other synchronization mechanism, but that's up to the user).