Flatten [RwLock<Vec<T>>; N] to Iterator<Item = &T>

I'm currently trying to get a flattening iterator out of a const-size array of locked Vecs. To my surprise I was able to do so, but with a catch.

use std::sync::{atomic::AtomicUsize, Arc};

use lock_api::{RawRwLock, RwLock};

#[derive(Clone, Debug)]
pub(crate) struct LockVec<T, L: RawRwLock> {
    data: Arc<RwLock<L, Vec<T>>>,
}

pub(crate) struct RawStorage<T, L: RawRwLock, const N: usize> {
    data: [LockVec<T, L>; N],
    len: AtomicUsize,
}

impl<T: std::fmt::Debug + Default, L: RawRwLock, const N: usize> RawStorage<T, L, N> {
    pub(crate) fn items<'a>(&'a self) -> IterableGuards<'a, T, L> {
        // The catch: every read lock is taken eagerly here, and a Vec is
        // allocated just to hold the guards.
        let guards = self.data.iter().map(|vec| vec.data.read()).collect();
        IterableGuards { guards }
    }
}
// To my surprise this works but I have to create a Vec of guards
pub(crate) struct IterableGuards<'a, T: 'static, L: RawRwLock> {
    guards: Vec<lock_api::RwLockReadGuard<'a, L, Vec<T>>>,
}

trait HasIter<'a, T: ?Sized> {
    fn iter(&'a self) -> Box<dyn Iterator<Item = &'a T> + 'a>;
}

impl<'a, T: 'static, L: RawRwLock> HasIter<'a, T> for IterableGuards<'a, T, L> {
    fn iter(&'a self) -> Box<dyn Iterator<Item = &'a T> + 'a> {
        // Each guard derefs to its Vec<T>; the borrowed items point into the
        // guards stored in `self`, so they stay valid as long as `self` lives.
        let iter = self.guards.iter().flat_map(|guard| guard.iter());
        Box::new(iter)
    }
}
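
For concreteness, a hypothetical call site could look like this (parking_lot::RawRwLock standing in for L; print_all is a made-up name for the example):

fn print_all(storage: &RawStorage<u32, parking_lot::RawRwLock, 8>) {
    // items() takes all 8 read locks up front and allocates a Vec for them.
    let guards = storage.items();
    for x in guards.iter() {
        println!("{x:?}");
    }
}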

code in detail here

Is there any way to rewrite IterableGuards to take a slice or an iterator, so I can avoid collecting all the guards into a Vec?

You can use an array instead.
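
For instance, a minimal sketch (assuming Rust 1.77+ for array::each_ref; the HasIter impl carries over unchanged, since arrays iterate just like Vecs):

pub(crate) struct IterableGuards<'a, T: 'static, L: RawRwLock, const N: usize> {
    guards: [lock_api::RwLockReadGuard<'a, L, Vec<T>>; N],
}

impl<T: std::fmt::Debug + Default, L: RawRwLock, const N: usize> RawStorage<T, L, N> {
    pub(crate) fn items<'a>(&'a self) -> IterableGuards<'a, T, L, N> {
        // each_ref() turns &[LockVec<T, L>; N] into [&LockVec<T, L>; N], which
        // map() consumes by value to lock every shard without a heap allocation.
        let guards = self.data.each_ref().map(|vec| vec.data.read());
        IterableGuards { guards }
    }
}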

You have to store the guards somewhere outside of the iterator you return, because the Iterator trait allows consuming (and dropping) the iterator while holding on to all the items. Taking the locks lazily could probably be done with enough work and interior mutability, but if the iterator is exhausted you will have taken and stored all the guards anyway.
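
A sketch of the point, reusing the IterableGuards and HasIter from the question (demo is a made-up name):

fn demo<'a, T: 'static + std::fmt::Debug>(guards: &'a IterableGuards<'a, T, parking_lot::RawRwLock>) {
    let it = guards.iter();
    // collect() consumes and drops the iterator itself...
    let items: Vec<&'a T> = it.collect();
    // ...yet every &T is still usable here, so the guards backing them cannot
    // live inside the iterator; they must be stored in `guards`, which outlives it.
    println!("{items:?}");
}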


I suppose the lazy part is what I'm after: how do I take the guards one at a time instead of all of them in advance?

One option is to return a guard object representing an item instead of a plain reference:

use std::ops::Deref;
use std::sync::Arc;

struct LockVecItemGuard<'a, T, L: RawRwLock> {
    guard: Arc<lock_api::RwLockReadGuard<'a, L, Vec<T>>>,
    idx: usize,
}

impl<'a, T, L: RawRwLock> Deref for LockVecItemGuard<'a, T, L> {
    type Target = T;
    fn deref(&self) -> &T {
        // Deref through the shared guard to the Vec, then index the item.
        &self.guard[self.idx]
    }
}

impl<T: std::fmt::Debug + Default, L: RawRwLock, const N: usize> RawStorage<T, L, N> {
    pub(crate) fn items<'a>(&'a self) -> impl 'a + Iterator<Item = impl 'a + Deref<Target = T>> {
        self.data.iter().flat_map(|l| {
            // Each inner Vec is locked only when flat_map reaches it, and the
            // guard is shared by all items of that Vec through the Arc.
            let guard = Arc::new(l.data.read());
            (0..guard.len()).map(move |idx| LockVecItemGuard { guard: guard.clone(), idx })
        })
    }
}
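
Used at a hypothetical call site, each yielded item derefs to a &T while its shard's lock is held (sum_all is a made-up name):

fn sum_all(storage: &RawStorage<u64, parking_lot::RawRwLock, 4>) -> u64 {
    // Locks are taken lazily, one shard at a time, as the iterator advances.
    storage.items().map(|item| *item).sum()
}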

If you really need an iterator of &Ts, you can make the locking lazy, but you will need to keep the guards around until the iteration is finished. @quinedot's solution can be modified to do this by storing an array of OnceCells:

use std::cell::OnceCell;

impl<T: std::fmt::Debug + Default, L: RawRwLock, const N: usize> RawStorage<T, L, N> {
    pub(crate) fn items<'a>(&'a self) -> IterableGuards<'a, T, L, N> {
        // No locks are taken here; each cell is filled on first use.
        let guards = std::array::from_fn(|_| OnceCell::new());
        IterableGuards { storage: self, guards }
    }
}

// No Vec of guards any more: they live in a fixed-size array of OnceCells
// and are created lazily during iteration.
pub(crate) struct IterableGuards<'a, T: 'static, L: RawRwLock, const N: usize> {
    storage: &'a RawStorage<T, L, N>,
    guards: [OnceCell<lock_api::RwLockReadGuard<'a, L, Vec<T>>>; N],
}

trait HasIter<'a, T: ?Sized> {
    fn iter(&'a self) -> Box<dyn Iterator<Item = &'a T> + 'a>;
}

impl<'a, T: 'static, L: RawRwLock, const N: usize> HasIter<'a, T> for IterableGuards<'a, T, L, N> {
    fn iter(&'a self) -> Box<dyn Iterator<Item = &'a T> + 'a> {
        let iter = self.guards
            .iter()
            .enumerate()
            .flat_map(|(i, guard)| {
                // The i-th read lock is taken the first time the iterator
                // reaches that slot; later passes reuse the cached guard.
                guard.get_or_init(|| self.storage.data[i].data.read())
                    .iter()
            });
        Box::new(iter)
    }
}

If there's no way to avoid the array of cells, then this is the best solution. I might add both iter() and iter_lock() as separate functions to mark which one takes the locks eagerly and which lazily, roughly like the sketch below.
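
A minimal sketch of that split on the lazy IterableGuards above; iter() stays as written, and the eager variant just forces every cell first:

impl<'a, T: 'static, L: RawRwLock, const N: usize> IterableGuards<'a, T, L, N> {
    /// Eager variant: take every read lock up front, then iterate.
    fn iter_lock(&'a self) -> Box<dyn Iterator<Item = &'a T> + 'a> {
        for (i, guard) in self.guards.iter().enumerate() {
            // Force each OnceCell so every lock is held before iteration starts.
            guard.get_or_init(|| self.storage.data[i].data.read());
        }
        // The lazy iter() now finds all the cells already filled.
        self.iter()
    }
}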
