Thank you all for your input @jofas, @theemathas, @jumpnbrownweasel, and @quinedot.
To give some more context, the actual wrapper type combines backing storage with an indexing strategy. For example, a value `view: View<Vec<i32>, SquareSymmetric>` allows indexing a `Vec<i32>` through `view[[i, j]]` as if it were a square and symmetric matrix, while only storing the unique elements.
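To make that concrete, this is roughly the index mapping I have in mind (the exact layout is just an illustration): `view[[i, j]]` canonicalizes the index pair and forwards to a single position in the packed storage, so an n x n symmetric matrix only needs n(n + 1)/2 stored elements.

```rust
/// Maps a pair [i, j] into the row-major packed upper triangle of an
/// n x n symmetric matrix, so [i, j] and [j, i] resolve to the same
/// stored element.
fn packed_index(n: usize, i: usize, j: usize) -> usize {
    let (r, c) = if i <= j { (i, j) } else { (j, i) };
    // Rows 0..r contribute n + (n - 1) + ... + (n - r + 1) elements; the
    // expression below is that offset plus the position of column c in row r.
    r * n - r * (r + 1) / 2 + c
}

fn main() {
    // A 3 x 3 symmetric matrix stored as its 6 unique elements.
    let storage = vec![1, 2, 3, 4, 5, 6];
    assert_eq!(storage[packed_index(3, 1, 2)], storage[packed_index(3, 2, 1)]);
}
```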
It would be convenient if the wrapper could both borrow and own the backing storage. Requiring a reference does not allow the wrapper to own the data, so unfortunately that approach does not work in my case.
This seems like it would allow borrowed and owned backing storage, but it does require that the storage can be viewed as a slice. That is probably possible for most types of backing storage; however, if the data is lazily loaded, it might not be able to provide a slice reference over all elements. I only actually care that the storage type allows accessing elements through some function, like `Index` and `IndexMut`. It functions more like a map from index to value, I suppose.
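For illustration (this is not a real type from my code), something like a sparse or lazily filled storage can hand out elements by index while never being able to produce a slice over all of them:

```rust
use std::collections::HashMap;
use std::ops::Index;

/// Illustrative only: a storage that keeps just the explicitly set elements
/// and reads everything else as a shared default value. It can hand out
/// elements by index, but there is no `&[i32]` over "all" elements.
struct SparseStorage {
    elements: HashMap<usize, i32>,
    default: i32,
}

impl Index<usize> for SparseStorage {
    type Output = i32;

    fn index(&self, index: usize) -> &i32 {
        self.elements.get(&index).unwrap_or(&self.default)
    }
}
```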
Implementing `Deref` would leak the API of the backing storage, which is undesirable. Is it possible to write trait bounds such that you're able to index everything that method resolution would find for `T` inside of the generic implementation?
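To illustrate the leak: if the view implemented `Deref` to its backing storage, the storage's entire API would become reachable through the view, which defeats the point of the wrapper. A quick sketch:

```rust
use std::ops::Deref;

struct View<T>(T);

// Hypothetical Deref impl, shown only to demonstrate the leak.
impl<T> Deref for View<T> {
    type Target = T;

    fn deref(&self) -> &T {
        &self.0
    }
}

fn demo(view: &View<Vec<i32>>) {
    // The whole Vec API now shows through the view, even though the view is
    // only meant to expose indexing.
    let _len = view.len();
    let _cap = view.capacity();
    let _slice: &[i32] = view.as_slice();
}
```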
To summarize the information that was not provided in the original post:
- the backing storage type only needs to provide access to elements through their index
- the backing storage does not need to be able to provide a slice of all the elements; the elements need not be stored contiguously
- the API of the backing storage should not leak through the view (wrapper) type
- the index implementation of the view should be able to transform the indices (even into a different type) before passing them to the backing storage (see the sketch after this list)
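Here is a rough sketch of the shape I have in mind; all names (`Layout`, `SquareSymmetric`, `map`) are hypothetical, and it assumes the index transformation lives in a separate strategy trait while the storage only needs to implement `Index` over the transformed index type:

```rust
use std::marker::PhantomData;
use std::ops::Index;

/// Hypothetical indexing strategy: turns the external index into whatever
/// index type the backing storage understands.
trait Layout<Outer> {
    type Inner;

    fn map(index: Outer) -> Self::Inner;
}

/// Example strategy: an N x N symmetric matrix packed as its upper triangle.
struct SquareSymmetric<const N: usize>;

impl<const N: usize> Layout<[usize; 2]> for SquareSymmetric<N> {
    type Inner = usize;

    fn map([i, j]: [usize; 2]) -> usize {
        // Both [i, j] and [j, i] map to the same packed position.
        let (r, c) = if i <= j { (i, j) } else { (j, i) };
        r * N - r * (r + 1) / 2 + c
    }
}

struct View<S, L> {
    storage: S,
    layout: PhantomData<L>,
}

// The view only asks the storage for element access by (transformed) index,
// so nothing of the storage's own API is re-exported.
impl<S, L, Outer> Index<Outer> for View<S, L>
where
    L: Layout<Outer>,
    S: Index<L::Inner>,
{
    type Output = S::Output;

    fn index(&self, index: Outer) -> &Self::Output {
        self.storage.index(L::map(index))
    }
}

fn main() {
    // Owned storage works; borrowed storage such as `&Vec<i32>` is exactly
    // the case that is still awkward, as the examples below show.
    let view: View<Vec<i32>, SquareSymmetric<3>> = View {
        storage: vec![1, 2, 3, 4, 5, 6],
        layout: PhantomData,
    };
    assert_eq!(view[[2, 0]], view[[0, 2]]);
}
```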
Note: It is entirely possible that my vision for this view type is nonsensical and that I am creating something that does not need to exist. If you think what I am trying to do is better solved in another way, please let me know.
In the following example I introduced a wrapper called `ThroughDeref<T>`, which implements `Index<X>` where `T: Deref<Target: Index<X>>`. The library consumer then has to wrap values of types that implement `Index<X>` through `Deref` in `ThroughDeref(...)`. Ideally, I would like to avoid this.
```rust
use std::ops::{Deref, Index};

struct View<T>(T);

impl<D, X> Index<X> for View<D>
where
    D: Index<X>,
{
    type Output = D::Output;

    fn index(&self, index: X) -> &Self::Output {
        self.0.index(index)
    }
}

struct ThroughDeref<T>(T);

impl<D, X> Index<X> for ThroughDeref<D>
where
    D: Deref<Target: Index<X>>,
{
    type Output = <D::Target as Index<X>>::Output;

    fn index(&self, index: X) -> &Self::Output {
        self.0.index(index)
    }
}

fn test() {
    let array = View([0i32]);
    let first = array[0]; // Deref not implemented for [T; N].

    let slice = View(ThroughDeref(&[0i32]));
    let first = slice[0];
}
```
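(For completeness: as far as I can tell, a second blanket impl on `View` itself, instead of the separate `ThroughDeref` wrapper, is not an option, because the two impls overlap and coherence does not use the where clauses to tell them apart.)

```rust
use std::ops::Index;

struct View<T>(T);

impl<D, X> Index<X> for View<D>
where
    D: Index<X>,
{
    type Output = D::Output;

    fn index(&self, index: X) -> &Self::Output {
        self.0.index(index)
    }
}

// Adding this second impl next to the one above is rejected with E0119
// ("conflicting implementations"), because a single `D` (e.g. `Vec<i32>`)
// can satisfy both sets of bounds:
//
// impl<D, X> Index<X> for View<D>
// where
//     D: std::ops::Deref<Target: Index<X>>,
// {
//     type Output = <D::Target as Index<X>>::Output;
//
//     fn index(&self, index: X) -> &Self::Output {
//         self.0.index(index)
//     }
// }
```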
The next solution direction introduces a trait `CustomIndex<X>` that we must implement for all possible backing storage types.
```rust
use std::ops::Index;

trait CustomIndex<X> {
    type Output;

    fn custom_index(&self, index: X) -> &Self::Output;
}

impl<T, const N: usize> CustomIndex<usize> for [T; N] {
    type Output = T;

    fn custom_index(&self, index: usize) -> &Self::Output {
        &self[index]
    }
}

impl<T, const N: usize> CustomIndex<usize> for &[T; N] {
    type Output = T;

    fn custom_index(&self, index: usize) -> &Self::Output {
        &self[index]
    }
}

// Implement CustomIndex for &[T], &Vec<T>, Vec<T>, any appropriate type in the ecosystem... :'(

struct View<T>(T);

impl<D, X> Index<X> for View<D>
where
    D: CustomIndex<X>,
{
    type Output = D::Output;

    fn index(&self, index: X) -> &Self::Output {
        self.0.custom_index(index)
    }
}

fn test() {
    let array = View([0i32]);
    let first = array[0]; // Uses the CustomIndex impl for [T; N] directly.

    let slice = View(&[0i32]);
    let first = slice[0]; // Uses the CustomIndex impl for &[T; N].
}
```
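A blanket impl of `CustomIndex` for everything that already implements `Index` would remove most of that boilerplate for owned storage, but plain references still fall through (they do not implement `Index` themselves), and as far as I can tell a second blanket impl covering `&T` is rejected as conflicting with the first:

```rust
use std::ops::Index;

trait CustomIndex<X> {
    type Output;

    fn custom_index(&self, index: X) -> &Self::Output;
}

// One blanket impl covers [T; N], [T], Vec<T> and anything else that
// already implements Index...
impl<T, X> CustomIndex<X> for T
where
    T: Index<X> + ?Sized,
{
    type Output = <T as Index<X>>::Output;

    fn custom_index(&self, index: X) -> &Self::Output {
        self.index(index)
    }
}

// ...but &[T; N], &Vec<T>, etc. are still not covered, and (as far as I can
// tell) this second blanket impl conflicts with the one above (E0119):
//
// impl<T, X> CustomIndex<X> for &T
// where
//     T: CustomIndex<X> + ?Sized,
// {
//     type Output = T::Output;
//
//     fn custom_index(&self, index: X) -> &Self::Output {
//         T::custom_index(self, index)
//     }
// }
```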
I find neither of the above solutions very satisfying.