Challenged by trait design, for seemingly simple relations

I have a project I'm working on called columnar that lays data out differently than Rust does. It works great, specifically for concrete types, but I'm struggling to put some structure on the traits in order to make it more useful in generic contexts.

Let's take as an example pairs (A, B). If you wanted to store many of these, you could use a Vec<(A, B)>. The thesis behind columnar is that you might actually prefer (Vec<A>, Vec<B>); that is, you might prefer to store the tuple elements in separate containers. There are a bunch of reasons, and columnar handles more than just pairs of things. The link above explains more, or you can suspend disbelief for the moment.
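To make the two layouts concrete, here's a minimal sketch; the helper name `to_columns` and the field types are mine, chosen arbitrarily for illustration:

```rust
// Convert row-major pairs into a columnar layout: one Vec per tuple field.
// Vec<(u64, bool)> becomes (Vec<u64>, Vec<bool>), same data, laid out by column.
fn to_columns(rows: Vec<(u64, bool)>) -> (Vec<u64>, Vec<bool>) {
    rows.into_iter().unzip()
}
```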

The crate design is based around several traits, but the problem distills down to just a few, which I've simplified past the point of being useful (in case you reached a conclusion about that by looking at them).

trait Index {
    type IndexRef;
    fn get(&self, index: usize) -> Self::IndexRef;
}

trait Borrow {
    type Borrowed<'a>;
}

trait Columnar {
    type Ref<'a>;
    type Container: Borrow
    where
        for<'a> &'a Self::Container: Index<IndexRef = Self::Ref<'a>>,
        for<'a> <Self::Container as Borrow>::Borrowed<'a>: Index<IndexRef = Self::Ref<'a>>;
}

The intent is that implementors of Columnar, for example (A, B), can name reference types, like (&'a A, &'a B), as well as a container that can provide these references. The container type, say (Vec<A>, Vec<B>), also comes in a read-only Borrowed<'a> form, for example (&'a [A], &'a [B]), that you might get from a non-(Vec<A>, Vec<B>) source. Both of these containers should implement Index with a reference type that matches Columnar::Ref<'a>. This last point is the sticking point: without it, folks have no structure at all relating the reference types, short of having concrete implementors in hand and inspecting what the concrete reference types are.
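As a concrete sanity check, here's how the borrowed pair container from the example might implement the simplified Index trait. This is my own sketch against the simplified traits, not the crate's actual implementation:

```rust
trait Index {
    type IndexRef;
    fn get(&self, index: usize) -> Self::IndexRef;
}

// The borrowed container for (A, B), namely (&'a [A], &'a [B]),
// hands back the paired reference type (&'a A, &'a B).
impl<'a, A, B> Index for (&'a [A], &'a [B]) {
    type IndexRef = (&'a A, &'a B);
    fn get(&self, index: usize) -> Self::IndexRef {
        (&self.0[index], &self.1[index])
    }
}
```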

At this point, I should say that the code above does not work. Rust does not like the second for<'a> constraint, because it does not see a 'a in the type on the left hand side of the : constraint. I don't wholly understand why that is, but smart people I know believe that it makes some sense. Informally, the explanation was that the Borrowed<'a> types may not actually depend on 'a, in contrast to how an &'a type certainly does depend on 'a.

So, I'm hoping to brainstorm a bit on alternate ways to describe what feels like a not-wildly-complicated relationship between (A, B), (Vec<A>, Vec<B>), (&'a [A], &'a [B]), and (&'a A, &'a B). I have types, owning container types that can be written at, and a class of lifetimed read-only container types that can be read from. I'd love to communicate that when reading from them, you get a consistent type back.

I've gone through a few alternate designs, though they seem to have their own limitations. For example, IndexRef is intentionally not a GAT because it wants to reflect the lifetime of its implementor, rather than to vary with the lifetime of &self when get is called. It seems like adding a lifetime to Index, i.e. a trait Index<'a>, might work, but it scares me that one will only be able to combine identically lifetimed containers (and GATs are invariant, making it challenging to align the lifetimes).

Concrete questions include:

  1. Are there better patterns for expressing the constraints among the various types, containers, and references?
  2. Is that second constraint fundamentally problematic, or only problematic to Rust's current solver?

I'm happy to unpack more about the design and constraints, if folks would find that helpful. There are other usability / extensibility constraints that might rule out some solutions that would otherwise work.

I didn't check your project yet, but this sounds similar to "structure-of-arrays". There's a (sophisticated) crate for this kind of usage, and it uses derive macros to simplify the user implementation. This is its "core" trait; your trait seems way more complex, though:

Imagine this case:

impl Borrow for SomeContainer {
    type Borrowed<'a> = Foo;
}

then the constraint essentially becomes:

for<'a> Foo: Index<IndexRef = Self::Ref<'a>>;

Now you can see why this constraint is impossible to satisfy:

impl Index for Foo {
    type IndexRef = <SomeType as Columnar>::Ref<'???>;
}

When you implement the trait Index for the single type Foo, you have only a single associated type IndexRef, but the constraint needs an infinite family of types, for<'a> Self::Ref<'a>.
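For contrast, the first for<'a> constraint from the OP is satisfiable because the implementor itself carries the lifetime: the impl is generic over 'a, so each choice of 'a pins down its own IndexRef. A minimal sketch of that shape:

```rust
trait Index {
    type IndexRef;
    fn get(&self, index: usize) -> Self::IndexRef;
}

// One impl per lifetime 'a: each 'a gets its own IndexRef, so a bound
// like `for<'a> &'a Vec<T>: Index<IndexRef = &'a T>` can be satisfied.
impl<'a, T> Index for &'a Vec<T> {
    type IndexRef = &'a T;
    fn get(&self, index: usize) -> &'a T {
        &self[index]
    }
}
```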

Right, I can see that some implementations would not satisfy the trait. And that seems great; I wouldn't want those implementations to satisfy the trait, just like I wouldn't want an implementation that does not satisfy any of the other expressed constraints.

What surprised me with this reasoning is that the existence of potential implementations that cannot satisfy the constraints seems to make the constraints invalid. This isn't a problem for most constraints we express: although String is not Copy, we can still require T: Copy.

My sense is that the reasoning for why the constraint is rejected ends up being "in marshaling the constraints before heading to the solver, we need to put them in a certain shape" and these constraints are not that shape. At least, I haven't yet grokked the argument you've made (a co-worker made it also, and we agreed that my non-grokking was justified).

This issue, I think. Oh hey, I've been here before! Let me see if a helper trait works. I think this does it, but I haven't tested it against actual implementations to see if it holds up.

That's also as far as I got in your post (I haven't done any brainstorming on alternate designs).

(Still talking about the original design.)

A couple other things occurred to me that I guess I'll mention:

trait Index {
    type IndexRef;
    fn get(&self, index: usize) -> Self::IndexRef;
}

// ...
where
    for<'a> &'a Self::Container: Index

This does seem like an odd combination as get takes a &&Container. But perhaps there's a reason for it.


The other thing is that when you have an unconstrained GAT:

trait Borrow {
    type Borrowed<'a>;
}

It has to be defined for all lifetimes. So if you try to do something like this:

impl<T> Borrow for Vec<T> {
    type Borrowed<'a> = &'a [T];
}

you have to impose T: 'static. So with the current design, you'll have 'static requirements in various places.
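A sketch of that fallout, assuming the unconstrained GAT above: since the GAT carries no `where Self: 'a` escape hatch, `&'a [T]` must be well-formed for every possible 'a, which forces the `T: 'static` bound onto the impl:

```rust
trait Borrow {
    type Borrowed<'a>;
}

// The unconstrained GAT must be defined for all lifetimes, so the
// definition `&'a [T]` needs `T: 'a` for every 'a, i.e. `T: 'static`.
impl<T: 'static> Borrow for Vec<T> {
    type Borrowed<'a> = &'a [T];
}
```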

If you don't want that, you might attempt something like:

trait Borrow {
    type Borrowed<'a> where Self: 'a;
}

And that will let you make some progress... but you'll hit a wall once you need higher-ranked bounds.

So depending on your goals, you might want to consider switching to the alternative to GATs presented in that article.

Some of your HRTB problems might go away, or the workarounds become easier, with the alternative design.
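For reference, the usual shape of that alternative is to move the lifetime from a GAT onto the trait itself. This is a hedged sketch of the general pattern (trait name `BorrowLt` is mine), not the article's exact code:

```rust
// Lifetime parameter on the trait, rather than a GAT inside it.
trait BorrowLt<'a> {
    type Borrowed;
}

// `T: 'a` is now a per-impl bound, so no blanket `T: 'static` is required,
// and bounds like `for<'a> C: BorrowLt<'a>` become expressible.
impl<'a, T: 'a> BorrowLt<'a> for Vec<T> {
    type Borrowed = &'a [T];
}
```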

Thanks! The helper trait may make sense; and I'll poke at that today and report back.

This does seem like an odd combination as get takes a &&Container. But perhaps there's a reason for it.

There are other implementors than &Self::Container; that's the reason. To return to the example from the OP, (&[A], &[B]) would also implement Index, and... while it happens to be Copy, you could imagine some other borrowed containers that do not, and would want to be passed as &self. Potentially a Vec<&[A]>, which I just made up but don't mean to rule out.

It has to be defined for all lifetimes.

I haven't found this yet. I have run into the issue elsewhere, but it seems to crop up mainly when imposing a constraint on the GAT, e.g. for<'a> Self::Borrowed<'a>: Debug or the like, which seems to be the moment where things go sideways. But I'd love to avoid those moments too, and I'll take a look at the alternate design you mention!

Edit: I should say that I do have Self: 'a in various places (four), but only for an unmentioned trait. For whatever reason, the 'static limitation for GATs is not a problem yet in this case (though perhaps it would become one when attempting some of the workarounds).

Thank you very much!

I tried a few things out, and found success by migrating the where constraints into the traits they constrain. For example, having Borrow become Borrow<C: Columnar>, where C::Ref<'a> shows up as a constraint in the definition of the GAT Borrowed<'a>.

I guess, tl;dr: rather than write the general structure of the types and then constrain them with where clauses, drive the constraints the same way you might program: determine which types need to be "inputs" to other types, and move those inputs into trait bounds that constrain the other type definitions.

A lot of reworking, but seems to work fine at the moment!

trait Index {
    type IndexRef;
    fn get(&self, index: usize) -> Self::IndexRef;
}

trait Borrow<C: Columnar + ?Sized> {
    type Borrowed<'a>: Index<IndexRef = C::Ref<'a>>;
}

trait Columnar {
    type Ref<'a>;
    type Container: Borrow<Self> where for<'a> &'a Self::Container: Index<IndexRef = Self::Ref<'a>>;
}