A trait implementation problem

I have this code:

trait RowIndex<T> {
    fn slice_index(&self, start: usize, end: usize) -> T;
}

impl<T> RowIndex<&[T]> for Vec<T> {
    fn slice_index(&self, start: usize, end: usize) -> &[T] {
        &self[start .. end]
    }
}

impl<T> RowIndex<Vec<&[T]>> for Vec<Vec<T>> {
    fn slice_index(&self, start: usize, end: usize) -> Vec<&[T]> {
        self.iter().map(|x| &x[start .. end]).collect()
    }
}

impl<T> RowIndex<Vec<&[T]>> for Vec<&[T]> {
    fn slice_index(&self, start: usize, end: usize) -> Vec<&[T]> {
        self.iter().map(|x| &x[start .. end]).collect()
    }
}

I'm trying to define a trait RowIndex with a method slice_index that slices rows out of a vector. It should work like this:

let a_vec = vec![1, 2, 3, 4];
a_vec.slice_index(0, 2) -> &a_vec[0..2]

let a_vec_vec = vec![
    vec![1, 2, 3, 4], 
    vec![2, 3, 4, 5]
]
a_vec_vec.slice_index(0, 2) -> a_vec_vec.iter().map(|x| &x[0..2]).collect::<Vec<&[i32]>>()

let a_slice_vec = vec![
    &a_vec_vec[0][..],
    &a_vec_vec[1][..]
] 
a_slice_vec.slice_index(0, 2) -> a_slice_vec.iter().map(|x| &x[0..2]).collect::<Vec<&[i32]>>()

In summary:

T is a number type, e.g. T: From<u8>
 //  the type implementing it     what slice_index returns
1. Self: Vec<T>             ->    &[T]
2. Self: Vec<Vec<T>>        ->    Vec<&[T]>
3. Self: Vec<&[T]>          ->    Vec<&[T]> 

How should I write this trait?

Since the return type borrows, you're going to need a GAT or you're going to need to put a lifetime on your trait, so that you can weave a lifetime from &self to the return type.

Let's set that aside for the moment though, and consider the generic implementations, and what concrete implementations end up being applicable (existing) for the example types.

// Implementations
impl<X> RowIndex<...> for Vec<X>       // returns &[X]
impl<Y> RowIndex<...> for Vec<Vec<Y>>  // returns Vec<&[Y]>
impl<Z> RowIndex<...> for Vec<&[Z]>    // returns Vec<&[Z]>
  • Vec::<i32>::slice_index returns...
    • X = i32: it returns &[i32] (via impl #1)
    • i32 can't unify with Vec<_>, so impl #2 does not apply
    • i32 can't unify with &_, so impl #3 does not apply
  • Vec::<Vec<i32>>::slice_index returns...
    • X = Vec<i32>: it returns &[Vec<i32>] (via impl #1)
    • Y = i32: it returns Vec<&[i32]> (via impl #2)
    • Vec<i32> can't unify with &_, so impl #3 does not apply
  • Vec::<&[i32]>::slice_index returns...
    • X = &[i32]: it returns &[&[i32]] (via impl #1)
    • &[i32] can't unify with Vec<_>, so impl #2 does not apply
    • Z = i32: it returns Vec<&[i32]> (via impl #3)

So, unless you do something to prevent the implementations from overlapping, the method calls may be ambiguous for the latter two. If the type of the output is unambiguous in the context of the method call, this might not be a problem.
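To make that concrete, here is a compiling sketch of the "lifetime on your trait" option, trimmed to impls #1 and #2. It demonstrates both points: the trait lifetime weaves the borrow from &self to the return type, and the nested call only resolves because the annotation on `rows` pins down the output type.

```rust
trait RowIndex<'a, Out> {
    fn slice_index(&'a self, start: usize, end: usize) -> Out;
}

// Impl #1: flat vectors.
impl<'a, T> RowIndex<'a, &'a [T]> for Vec<T> {
    fn slice_index(&'a self, start: usize, end: usize) -> &'a [T] {
        &self[start..end]
    }
}

// Impl #2: nested vectors.
impl<'a, T> RowIndex<'a, Vec<&'a [T]>> for Vec<Vec<T>> {
    fn slice_index(&'a self, start: usize, end: usize) -> Vec<&'a [T]> {
        self.iter().map(|x| &x[start..end]).collect()
    }
}

fn main() {
    let a_vec_vec = vec![vec![1, 2, 3, 4], vec![2, 3, 4, 5]];
    // Both impls apply to Vec<Vec<i32>> (impl #1 with Out = &[Vec<i32>],
    // impl #2 with Out = Vec<&[i32]>), so without the annotation the
    // call would be ambiguous.
    let rows: Vec<&[i32]> = a_vec_vec.slice_index(0, 2);
    assert_eq!(rows, vec![&[1, 2][..], &[2, 3][..]]);
}
```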


Anyway, back to the question: Here's one take on a GAT-based approach. Note how I had to handle the ambiguity in main.
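The playground code isn't reproduced in this copy of the thread, so the following is only a guess at its shape: a GAT ties the borrow in the return type to &self, and a pair of marker types (`Flat` and `Nested`, names I made up) stand in for the trait's type parameter so the three impls stay coherent.

```rust
// Hypothetical marker types: they keep the three impls coherent,
// since Vec<Vec<T>> would otherwise also match Vec<T>.
struct Flat;
struct Nested;

trait RowIndex<Marker> {
    // The GAT weaves the lifetime of &self into the return type.
    type Output<'a> where Self: 'a;
    fn slice_index(&self, start: usize, end: usize) -> Self::Output<'_>;
}

impl<T> RowIndex<Flat> for Vec<T> {
    type Output<'a> = &'a [T] where Self: 'a;
    fn slice_index(&self, start: usize, end: usize) -> &[T] {
        &self[start..end]
    }
}

impl<T> RowIndex<Nested> for Vec<Vec<T>> {
    type Output<'a> = Vec<&'a [T]> where Self: 'a;
    fn slice_index(&self, start: usize, end: usize) -> Vec<&[T]> {
        self.iter().map(|x| &x[start..end]).collect()
    }
}

impl<'b, T> RowIndex<Nested> for Vec<&'b [T]> {
    type Output<'a> = Vec<&'a [T]> where Self: 'a;
    fn slice_index(&self, start: usize, end: usize) -> Vec<&[T]> {
        self.iter().map(|x| &x[start..end]).collect()
    }
}

fn main() {
    let a_vec = vec![1, 2, 3, 4];
    // Only the Flat impl applies to Vec<i32>, so no annotation is needed.
    assert_eq!(a_vec.slice_index(0, 2), &[1, 2][..]);

    let a_vec_vec = vec![vec![1, 2, 3, 4], vec![2, 3, 4, 5]];
    // Both markers apply to Vec<Vec<i32>> (Flat with X = Vec<i32>, and
    // Nested), so this call has to be fully qualified.
    let rows = <Vec<Vec<i32>> as RowIndex<Nested>>::slice_index(&a_vec_vec, 0, 2);
    assert_eq!(rows, vec![&[1, 2][..], &[2, 3][..]]);
}
```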

Here's an iteration to remove the ambiguity. Vec<_> and &[_] don't implement Scalar so impl #1 can no longer apply to the nested types.
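Again, the playground isn't reproduced here, so this is a hedged sketch of what that iteration might look like: a `Scalar` marker trait (a name I'm assuming; implement it for each element type you use, or via a macro) bounds impl #1, so the flat impl is no longer a candidate for the nested types and the method calls resolve without annotations.

```rust
// Hypothetical marker for "leaf" element types. Vec<_> and &[_]
// deliberately don't implement it, so the flat impl below no longer
// matches nested vectors.
trait Scalar {}
impl Scalar for i32 {}
impl Scalar for f64 {}

struct Flat;
struct Nested;

trait RowIndex<Marker> {
    type Output<'a> where Self: 'a;
    fn slice_index(&self, start: usize, end: usize) -> Self::Output<'_>;
}

// Impl #1, now restricted to scalar element types.
impl<T: Scalar> RowIndex<Flat> for Vec<T> {
    type Output<'a> = &'a [T] where Self: 'a;
    fn slice_index(&self, start: usize, end: usize) -> &[T] {
        &self[start..end]
    }
}

impl<T> RowIndex<Nested> for Vec<Vec<T>> {
    type Output<'a> = Vec<&'a [T]> where Self: 'a;
    fn slice_index(&self, start: usize, end: usize) -> Vec<&[T]> {
        self.iter().map(|x| &x[start..end]).collect()
    }
}

impl<'b, T> RowIndex<Nested> for Vec<&'b [T]> {
    type Output<'a> = Vec<&'a [T]> where Self: 'a;
    fn slice_index(&self, start: usize, end: usize) -> Vec<&[T]> {
        self.iter().map(|x| &x[start..end]).collect()
    }
}

fn main() {
    let a_vec_vec = vec![vec![1, 2, 3, 4], vec![2, 3, 4, 5]];
    // Vec<i32>: Scalar doesn't hold, so the Flat candidate is
    // discarded and the call is no longer ambiguous.
    assert_eq!(a_vec_vec.slice_index(0, 2), vec![&[1, 2][..], &[2, 3][..]]);

    let a_slice_vec: Vec<&[i32]> = vec![&a_vec_vec[0][..], &a_vec_vec[1][..]];
    assert_eq!(a_slice_vec.slice_index(0, 2), vec![&[1, 2][..], &[2, 3][..]]);
}
```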

