Implement a trait with an associated type and different number of lifetimes

Hello, I have the following code:

struct Term {
    a: u32,
}

struct Frame<'a>(&'a u32);

impl Term {
    pub fn draw<F>(&mut self, f: F)
    where
        F: FnOnce(&mut Frame),
    {
        let mut frame = Frame(&self.a);
        f(&mut frame);
    }
}

struct Args<'a, 'b> {
    frame: &'b mut Frame<'a>
}

trait Drawer {
    type Input<'c>;
    
    fn draw<'s, 'c>(&'s self, c: Self::Input<'c>);
}

struct DrawerImpl;

impl Drawer for DrawerImpl {
    type Input<'c> = Args<'c, 'c>;
    
    fn draw<'s, 'c>(&'s self, _c: Self::Input<'c>) {
        
    }
}

fn main() {
    let mut t = Term { a: 1 };
    let d = DrawerImpl;

    t.draw(|frame| {
        let a = Args { frame };
        d.draw(a);
    });
}

Basically I'm abstracting a game over the rendering framework, to implement both a GUI and a terminal version. The Drawer trait has the Input associated type with one lifetime (which was enough for the GUI implementation), but for the terminal one the Args struct needs two lifetimes, as in the example. I'm unable to implement Drawer for DrawerImpl: the compiler complains on the Args { frame } line with error: lifetime may not live long enough. The solution is probably to make Input generic over two lifetimes, but then I'd be unable to implement the trait for the other framework, where I have a single lifetime parameter.

I'm quite confused, how am I supposed to resolve this problem?

I don’t quite see the reason why Input is an associated type, and GAT. Typically GATs with lifetimes have the strength that they can relate that lifetime parameter to lifetimes of other borrows appearing in the method signatures in the trait, which isn’t the case here. And associated types have strengths such as that you can use them in other struct definitions (sparing the need for extra parameters) and that they can help avoid ambiguities. Maybe your full code has instances of these kinds of things, but as it stands here, I’d say: just consider turning it into a parameter, which solves your problem immediately.

Alternative approaches exist to hide additional longer lifetimes behind some abstraction, but unfortunately only at the cost of adding trait objects. Trait objects dyn Trait + 'a can abstract over data types with multiple distinct lifetimes, as long as all lifetimes outlive 'a. So you could solve your original code, without modifying the actual code, like this.
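
For reference, a sketch of what that could look like for the snippet above, under the assumption that a trait (here invented and called FrameOps) lists everything the drawer needs from a Frame. The inner Frame lifetime is erased behind the trait object, so Args only needs one lifetime, and main stays unchanged:

```rust
struct Term {
    a: u32,
}

struct Frame<'a>(&'a u32);

impl Term {
    pub fn draw<F: FnOnce(&mut Frame)>(&mut self, f: F) {
        let mut frame = Frame(&self.a);
        f(&mut frame);
    }
}

// hypothetical trait collecting everything a drawer needs from a Frame
trait FrameOps {
    fn value(&self) -> u32;
}

impl FrameOps for Frame<'_> {
    fn value(&self) -> u32 {
        *self.0
    }
}

// only one lifetime left: Frame's inner lifetime is hidden by `dyn`
struct Args<'b> {
    frame: &'b mut dyn FrameOps,
}

trait Drawer {
    type Input<'c>;

    fn draw<'c>(&self, c: Self::Input<'c>);
}

struct DrawerImpl;

impl Drawer for DrawerImpl {
    type Input<'c> = Args<'c>;

    fn draw<'c>(&self, c: Self::Input<'c>) {
        assert_eq!(c.frame.value(), 1);
    }
}

fn main() {
    let mut t = Term { a: 1 };
    let d = DrawerImpl;

    t.draw(|frame| {
        let a = Args { frame: frame as &mut dyn FrameOps };
        d.draw(a);
    });
}
```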

My understanding was that associated types should be used when the associated type is strongly coupled with the implementation.

For example, when I want to implement Drawer, typically I know what the input is, I don't implement Drawer<T> for all T. But maybe I didn't really grasp the difference between associated types and generic parameters.

My first attempt in my full code was to use a generic parameter, but this involved tampering with lifetimes in all the structs that use the drawer, and it was quite messy, and it didn't really work. Unfortunately it's too much code to be pasted here though.

you can simply impl Drawer<Args> for DrawerImpl {}. generic parameters don't mean the implementation must be all generic.

trait Drawer<Input> {
	fn draw(&self, input: &Input);
}
struct DrawerImpl;
impl<'a, 'b> Drawer<Args<'a, 'b>> for DrawerImpl {
	fn draw(&self, c: &Args<'a, 'b>) {
	}
}
// use it like this:
t.draw(|frame| {
	let a = Args {
		frame,
	};
	d.draw(&a);
});

I don't know what your actual code does, but the snippet you posted is over-using lifetimes. try to reduce intermediate "reference" wrapper types, like Args, and use elided lifetimes whenever possible. the original example can be reduced to:

struct Term {
	a: u32,
}
impl Term {
	pub fn draw<F>(&mut self, f: F)
	where
		F: FnOnce(&mut Self),
	{
		f(self);
	}
}
trait Drawer<Input> {
	fn draw(&self, input: &mut Input);
}
struct DrawerImpl;
impl Drawer<Term> for DrawerImpl {
	fn draw(&self, input: &mut Term) {
	}
}
fn main() {
	let mut t = Term { a: 1, };
	let d = DrawerImpl;
	t.draw(|term| d.draw(term));
}

if you think a trait is like a "function" for types, then generic type parameters (Self is basically a special generic type parameter) are kind of like the "input" of the function, and associated types are kind of like the "output" of the function.

it's partially correct, but not completely: associated types are strongly coupled (in fact, determined by) the trait implementation, but if you think about it, generic parameters are strongly coupled too.
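
To make that analogy concrete with a made-up example (Convert and Doubler are invented names): the generic parameter acts as the "input" that selects which impl applies, while the associated type is the "output" fixed by that impl:

```rust
// `From` is an input: the caller's argument type picks the impl.
// `To` is an output: each impl decides it, callers can't.
trait Convert<From> {
    type To;

    fn convert(&self, x: From) -> Self::To;
}

struct Doubler;

impl Convert<u32> for Doubler {
    type To = u64;

    fn convert(&self, x: u32) -> u64 {
        (x as u64) * 2
    }
}

impl<'a> Convert<&'a str> for Doubler {
    type To = String;

    fn convert(&self, x: &'a str) -> String {
        x.repeat(2)
    }
}

fn main() {
    let d = Doubler;
    // the argument type selects the impl; the return type follows from it
    assert_eq!(d.convert(3u32), 6u64);
    assert_eq!(d.convert("ab"), "abab");
}
```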

you can simply impl Drawer<Args> for DrawerImpl {}. generic parameters don't mean the implementation must be all generic.

I know, but then I have to deal with an additional <T> wherever I want to use Drawer, and I don't really care about being generic over the input in the rest of the codebase, since in my case a specific implementation of Drawer implies a specific args struct in any case.

Having to specify <I, D: Drawer<I>> everywhere starts to become cumbersome.

I don't know what your actual code does, but the snippet you posted is over-using lifetimes. try to reduce intermediate "reference" wrapper types, like Args, and use elided lifetimes whenever possible. the original example can be reduced to:

The problem with that is that in the full code I'm abstracting over the args that I have to pass to a function, and there can be more than one of them.

but you do need to call the Drawer::draw() method, which requires you to create a Drawer::Input value, doesn't that count as "being generic over the input"?

maybe that's just personal preference, but I don't think it makes much of a difference. if your code is agnostic about the input type (which I don't think is very likely), you only mention the generic type Input once, in the impl<Input, D: Drawer<Input>> header; no big deal.

but more likely, your code has to use the type somewhere: if it were an associated type, you'd use D::Input; if it were a generic parameter, you'd use Input. you don't save much either way.

you don't need a dedicated type for that, you can just use tuples, and still take advantage of the lifetime elision rules.

trait Drawer<Input> {
	fn draw(&self, input: Input);
}
impl Drawer<(&i32, &i32)> for DrawerImpl {
	fn draw(&self, input: (&i32, &i32)) {
	}
}
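
For completeness, here is a self-contained sketch of that tuple idea (the DrawerImpl here is a fresh stand-in), showing how the borrows at the call site only need to live for the duration of the call:

```rust
trait Drawer<Input> {
    fn draw(&self, input: Input);
}

struct DrawerImpl;

// the elided lifetimes in the impl header become fresh lifetime
// parameters, so this impl covers all borrow durations
impl Drawer<(&i32, &i32)> for DrawerImpl {
    fn draw(&self, input: (&i32, &i32)) {
        assert_eq!(*input.0 + *input.1, 3);
    }
}

fn main() {
    let d = DrawerImpl;
    // no named lifetimes anywhere: the references only need to
    // outlive this single call
    d.draw((&1, &2));
}
```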

my point is, you're over-using lifetimes but missing the opportunity of lifetime elision, that's the main cause of the complexity of your design.

It's quite hard to explain, I'll try.

What I have is a Drawer trait; a struct (named controller, abstracting the interactions between the business logic, the rendering and the input handling) with a generic parameter D: Drawer (along with 3 other generic parameters, which is why I find adding a fourth one cumbersome); and a third struct, specific to each implementation, that instantiates the various components and starts the application. The last one is responsible for instantiating the args struct, but it doesn't provide the input to the controller: it uses the controller's drawer directly in a method, so the controller doesn't need to know the drawer's input.

It's quite hard to explain from the smartphone, sorry :confused:

well, it sounds complicated indeed. it'd be easier to grasp with code as example.

no worries, feel free to add more details when it's convenient.

I created a playground with all the condensed code.

This is part of the skeleton of my application, with all the useless details omitted. I feel like there is a design flaw somewhere, but I cannot see it.

The basic idea is that I have a game and I want to abstract it from the game engine, to allow rendering it both in the terminal and in a GUI.

To do so, I've split the logic into the following components:

  • controller, the struct that receives the input from the engine and redirects it to the logic. The common part is in a CoreController; I then instantiate a controller more coupled with the engine that has the CoreController as a field
  • drawer, which holds all the data needed to draw. On every frame, a drawing context is requested from the drawer and passed to the game logic, giving a short-lived object that knows how to draw. The idea here is to store in the context all the transient data scoped to the lifetime of the current frame (like the canvas/terminal frame)
  • game logic, which is responsible for receiving updates every frame and for rendering itself by calling the drawing primitives of the drawing context

Hope this is clearer.


This is how it can be made to work with the 2 lifetime parameters

#![allow(dead_code)]

// traits
trait Drawer<'c1, 'c2, _OutliveBounds = &'c1 &'c2 ()> {
    type Input;
    type Context<'s>: DrawerContext
    where
        Self: 's;

    fn drawing_context(&self, c: Self::Input) -> Self::Context<'_>;
}

trait DrawerContext {
    // not really important
    fn draw(&mut self);
}

// impl 1

// these are provided by the library I use
// context, I receive a &mut to it on every frame
struct Context;
// canvas on which to draw, created from the context and dropped at the end of the frame
// note that Canvas has no lifetime, this library does not store any reference to context
// in the canvas
struct Canvas;

impl Canvas {
    fn finish(self, _c: &mut Context) {}
}

struct Drawer1Impl {
    // data omitted since they are not relevant
    data: (),
}

struct Drawer1Args<'c> {
    ctx: &'c Context,
    canvas: &'c mut Canvas,
}

impl<'c> Drawer1Args<'c> {
    pub fn new(ctx: &'c Context, canvas: &'c mut Canvas) -> Self {
        Drawer1Args { ctx, canvas }
    }
}

impl<'c1, 'c2> Drawer<'c1, 'c2> for Drawer1Impl {
    type Input = Drawer1Args<'c1>;
    type Context<'s> = Drawer1ContextImpl<'c1, 's>;

    fn drawing_context(&self, c: Self::Input) -> Self::Context<'_> {
        Drawer1ContextImpl {
            ctx: c.ctx,
            canvas: c.canvas,
            drawer: self,
        }
    }
}

pub struct Drawer1ContextImpl<'c, 'd> {
    drawer: &'d Drawer1Impl,
    ctx: &'c Context,
    canvas: &'c mut Canvas,
}

impl<'c, 'd> DrawerContext for Drawer1ContextImpl<'c, 'd> {
    fn draw(&mut self) {}
}

// impl 2

// library 2

// terminal and frame are provided by the library

// the terminal is always available
struct Terminal;

impl Terminal {
    fn draw<F: FnOnce(&mut Frame)>(&mut self, f: F) {
        let mut frame = Frame(self);
        f(&mut frame);
    }
}

// the frame is provided by the terminal while drawing
// note that this has a reference to the terminal inside
struct Frame<'a>(&'a mut Terminal);

struct Drawer2Args<'a1, 'a2> {
    frame: &'a1 mut Frame<'a2>,
}

impl<'a1, 'a2> Drawer2Args<'a1, 'a2> {
    fn new(frame: &'a1 mut Frame<'a2>) -> Drawer2Args<'a1, 'a2> {
        Drawer2Args { frame }
    }
}

struct Drawer2Impl;

impl<'c1, 'c2> Drawer<'c1, 'c2> for Drawer2Impl {
    // here I should have two lifetimes since &'a mut Frame<'a>
    // in Drawer2Args is likely to be wrong
    type Input = Drawer2Args<'c1, 'c2>;

    type Context<'s> = Drawer2Context<'c1, 'c2, 's>
    where
        Self: 's;

    fn drawing_context(&self, c: Self::Input) -> Self::Context<'_> {
        Drawer2Context {
            drawer: self,
            frame: c.frame,
        }
    }
}

struct Drawer2Context<'c1, 'c2, 'd> {
    frame: &'c1 mut Frame<'c2>,
    drawer: &'d Drawer2Impl,
}

impl<'c1, 'c2, 's> DrawerContext for Drawer2Context<'c1, 'c2, 's> {
    fn draw(&mut self) {}
}

// controller and logic

// game logic, decoupled from the rendering engine via drawer context
trait Logic<DC> {
    fn render(&mut self, _dc: DC);
}

struct GameLogic;

impl<DC: DrawerContext> Logic<DC> for GameLogic {
    fn render(&mut self, mut dc: DC) {
        dc.draw();
    }
}

// common controller code, that reads inputs using the engine and updates/renders
struct CoreController<D: for<'c1, 'c2> Drawer<'c1, 'c2>> {
    drawer: D,
    game_logic: Box<dyn for<'c1, 'c2, 's> Logic<<D as Drawer<'c1, 'c2>>::Context<'s>>>,
}

// controller for engine 1
struct Controller1(CoreController<Drawer1Impl>);

impl Controller1 {
    fn render(&mut self, ctx: &mut Context) {
        // this theoretically should get a canvas from ctx
        let mut canvas = Canvas;

        let ct = Drawer1Args::new(ctx, &mut canvas);
        let dc = self.0.drawer.drawing_context(ct);
        self.0.game_logic.render(dc);
        canvas.finish(ctx);
    }
}

// controller for engine 2
struct Controller2(CoreController<Drawer2Impl>);

impl Controller2 {
    fn render(&mut self, term: &mut Terminal) {
        term.draw(|frame| {
            // this does not compile
            let args = Drawer2Args::new(frame);
            let mut dc = self.0.drawer.drawing_context(args);
            dc.draw();
        });
    }
}

fn main() {}

I haven’t found a way yet to make Rust happy without using HRTBs instead of GATs for the 2 lifetimes in question, and without the _OutliveBounds = &'c1 &'c2 () hack. The attempt to just use a GAT had me running into a lifetime error with a “nice” remark that this is a known limitation that will be removed in the future (see issue #100013 <https://github.com/rust-lang/rust/issues/100013> for more information), stemming from the game_logic trait object.

You wrote above that

from which it isn’t clear what exactly the issue was that you ran into when trying to employ 2 lifetime arguments (it doesn’t quite sound like it was the same one I ran into), but maybe with the approach laid out in the code above, it just works for you. :man_shrugging:

One way to reduce the number of lifetimes is to type-erase the greater lifetime behind a trait that does everything you need to do.

trait FrameStuff {
    // all the stuff you planned to do with your `&mut Frame<'_>`
    fn frame_stuff(&mut self);
}

impl FrameStuff for Frame<'_> {
    fn frame_stuff(&mut self) {}
}

// Now you can use `&'a mut dyn FrameStuff` instead of `&'a mut Frame<'f>`

I think this is all I changed:

 struct Drawer2Args<'a> {
-    frame: &'a mut Frame<'a>,
+    frame: &'a mut dyn FrameStuff,
 }

 impl<'a> Drawer2Args<'a> {
-    fn new(frame: &'a mut Frame<'a>) -> Drawer2Args<'a> {
-        Drawer2Args { frame }
+    fn new(frame: &'a mut Frame<'_>) -> Drawer2Args<'a> {
+        Drawer2Args { frame: frame as _ }
     }
 }

 struct Drawer2Context<'c, 'd> {
-    frame: &'c mut Frame<'c>,
+    frame: &'c mut dyn FrameStuff,
      drawer: &'d Drawer2Impl,
 }

It was probably easy to miss in the conversation, so no worries about the slight duplication. Just to avoid potential confusion on OP’s end, let me point out that this is indeed essentially the same thing I had outlined above:


Ah yep, miss it I did :sweat_smile:

Thank you both for your suggestions.

@steffahn I'm no expert about Rust, but from what I can understand the _OutliveBounds = &'c1 &'c2 () is a workaround for expressing that 'c1 must outlive 'c2 without polluting the definition of the GAT with the two lifetimes and simultaneously binding them to something, am I correct?

@quinedot thanks for the code snippet, I'll check if that works for me. I tried to avoid trait objects where possible since that code should run at least 60 times per second, but I'll check the performance of the result.

I cannot say it's a flaw, but for my personal taste, there's too much of an "OO" flavor in it. part of the problem is due to the many levels of indirection (which is typical of other OO languages like Java, C#, etc.).

I don't know the details, but at least the compile error of the playground link can be eliminated with only these changes:

 impl Terminal {
-    fn draw<F: FnOnce(&mut Frame)>(&mut self, f: F) {
+    fn draw<F: FnOnce(Frame)>(&mut self, f: F) {
         let mut frame = Frame(self);
-        f(&mut frame);
+        f(frame);
     }
 }
 struct Drawer2Args<'a> {
-    frame: &'a mut Frame<'a>,
+    frame: Frame<'a>,
 }
 struct Drawer2Context<'c, 'd> {
-    frame: &'c mut Frame<'c>,
+    frame: Frame<'c>,
     drawer: &'d Drawer2Impl,
 }

in typical OO style design, it's very tempting to create aggregated structs on the go (mostly for convenience), but in Rust, lifetimes make the problem very obvious, as can be seen from this particular example. this is especially true when exclusive references are involved, because they are invariant over the borrowed type (though covariant over the lifetime); this often causes problems when the type itself contains a lifetime parameter, and a type of the form &'x mut Foo<'x> often results in tricky compile errors.
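
A tiny demonstration of that invariance point (Foo, touch_ok, and touch_locked are invented names): tying the two lifetimes together as &'x mut Foo<'x> locks the value for the rest of its life, while keeping them separate allows repeated short borrows:

```rust
struct Foo<'a>(&'a str);

// two independent lifetimes: the &mut borrow ends right after the call
fn touch_ok<'x, 'a>(f: &'x mut Foo<'a>) -> usize {
    f.0.len()
}

// one tied lifetime: because &mut is invariant over Foo's lifetime
// parameter, 'x is forced to equal 'a, so the exclusive borrow lasts
// as long as the data borrow inside Foo does
fn touch_locked<'x>(f: &'x mut Foo<'x>) -> usize {
    f.0.len()
}

fn main() {
    let s = String::from("hi");
    let mut foo = Foo(&s);
    assert_eq!(touch_ok(&mut foo), 2);
    assert_eq!(touch_ok(&mut foo), 2); // fine: fresh short borrows
    // calling touch_locked is only accepted as the *last* use of foo;
    // any use of foo after this line would fail to compile
    assert_eq!(touch_locked(&mut foo), 2);
}
```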


some observations I want to mention:

  • coupling of CoreController and Drawer

    I don't know why you need the backend Drawer in the CoreController, this forces the sandwich structure: Controller1 -> CoreController -> Drawer1 and Controller2 -> CoreController -> Drawer2.

    if you can refactor the drawer out of the CoreController (maybe move it into the concrete Controller1 and Controller2), you might be able to simplify the Drawer trait a little bit.

  • inversion of control (or: who's in charge of the main loop)

    from the playground code, I see your GameLogic is implemented as a frame callback, so I assume your library is in control of the main loop. personally, I don't like this type of design, because it's less flexible and hardly composable. I'd prefer that libraries provide the basic building blocks and let the app logic assemble them. sometimes you hear people talking about "frameworks" vs "toolkits".

  • associated types, but associated to whom?

    in this example, what you really want is a generic way to create a DrawerContext from a Drawer and some backend specific states (graphics resource, pty device, etc), you chose to put this functionality into the Drawer trait, but did you consider alternatives, for example, put them in the DrawerContext trait instead? something like this:

    pub trait DrawerContext {
        type Drawer: crate::Drawer;
        type DeviceContext;
        fn begin_frame(drawer: &mut Self::Drawer, ctx: &mut Self::DeviceContext) -> Self;
    }
    

    since your DrawerContext is intended to be ephemeral, the lifetime situation is much simpler to deal with (probably it can all be elided).
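
A hedged sketch of how a backend might implement that alternative shape (TermDrawer, Pty, and TermContext are invented placeholders, and the crate::Drawer bound is dropped to keep it self-contained). Note that as written, begin_frame cannot return a Self that borrows its arguments, so this context copies what it needs:

```rust
trait DrawerContext: Sized {
    type Drawer;
    type DeviceContext;

    // the context trait owns the constructor, so each backend decides
    // how a frame begins; no lifetimes appear in the trait at all
    fn begin_frame(drawer: &mut Self::Drawer, ctx: &mut Self::DeviceContext) -> Self;
}

// invented placeholder types for a terminal backend
struct TermDrawer {
    frames_started: u32,
}
struct Pty;
struct TermContext {
    frame_no: u32,
}

impl DrawerContext for TermContext {
    type Drawer = TermDrawer;
    type DeviceContext = Pty;

    fn begin_frame(drawer: &mut TermDrawer, _ctx: &mut Pty) -> Self {
        drawer.frames_started += 1;
        // copy the needed state instead of borrowing `drawer`
        TermContext { frame_no: drawer.frames_started }
    }
}

fn main() {
    let mut drawer = TermDrawer { frames_started: 0 };
    let mut pty = Pty;
    let ctx = TermContext::begin_frame(&mut drawer, &mut pty);
    assert_eq!(ctx.frame_no, 1);
}
```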

Thanks for the observations!

It's surely an OO design, since I'm a Python/TypeScript/C# programmer; in fact, I suspected there was something smelly here regarding the design.

coupling of CoreController and Drawer

The main problem here is that CoreController contains the game logic, which is generic over the drawing context. Removing the drawer from that struct surely helps, but then I have this:

struct CoreController<DC> {
    game_logic: Box<dyn Logic<DC>>,
}

struct Controller1(CoreController<Drawer1ContextImpl>, Drawer1Impl);

where the compiler complains that Drawer1ContextImpl has unspecified lifetimes. Those lifetimes are not bound to the life of Controller1, and Drawer1ContextImpl is not a trait, so I can't find a way to express this.

inversion of control (or: who's in charge of the main loop)

I have little control over this, as the gui library requires me to implement a trait and calls the method of my logic. I have no control over the loop, I just start it.

associated types, but associated to whom?

In this case I could remove the Drawer trait altogether, since its only functionality was as a factory of drawing contexts. The problem with that change is that I'd have to either move the drawer into the specific controller (see the first point) or declare the DeviceContext type upfront when parametrizing the core controller, something like CoreController<D, DV, DC = DrawerContext<DeviceContext=DV, Drawer=D>>. Also, this does not remove the problem that the concrete drawer context structs need to borrow the drawer, making them generic over lifetimes.

Maybe I misunderstood something in your suggestions, or a major refactoring is needed to actually make use of them, I did not extensively try.

as I said, I'm really not a fan of OO-heavy designs. your entire example feels incomprehensible to me; there are just too many design choices whose rationale I don't understand. for example:

why did you put a trait object into the CoreController? why not just store a generic "game logic"? also, why is the Logic trait parameterized with a Drawer type, when all you can use to render frames is the DrawerContext?

yes, the Drawer trait is the main cause of the complication, because of all the GAT lifetime madness. removing it is a great simplification.

why do you want to constrain the associated types? why not omit them:

struct CoreController<DC: DrawerContext> {}

another example: the associated type Drawer::Input<'c> doesn't use bounds, and at least in the posted snippets it's not generic in any way; each backend would use the concrete type anyway, so why go through all the GAT and lifetime hassle? I don't see the point.

it's to be expected, and don't be afraid to do it. that's just the learning process we all have to go through.

but I think, fundamentally, you are shoehorning a familiar design paradigm from another language into a new one. you'll get more and more idiomatic as you gain a better understanding of the language's concepts.


finally, here's a simplified version that mimics the structure of your original code:

although the actual code is not as simple, I'm almost certain you don't need GATs at all the way you originally used them, as you don't really need to construct the DrawerContext in any generic fashion, and those xxxArgs structs are just red herrings too.

in fact, I would even doubt the necessity of the DrawerContext thingy. most of the lifetime complication comes from the fact that you are trying to put references into struct fields (out of OO habit, I would assume). if I were to design a similar library, I would probably do something as simple as:

/// for games to read user input states, e.g. is key pressed
///
/// I guess the `CoreController` serves similar purpose
pub struct InputState {}

/// for games to render contents
///
/// this is like `DrawerContext` and `Drawer` combined
pub trait Renderer {
	fn draw_text(&mut self);
	fn draw_sprite(&mut self);
}

/// the callbacks a game must implement
///
/// no need for a separate `DrawerContext` type with complicated lifetimes as
/// wrapper for `&InputState` or `&mut Renderer`, just use plain references
///
/// the two methods can also be combined into one.
pub trait Game {
	/// advance the simulation
	fn update(&mut self, dt: std::time::Duration, input: &InputState);
	/// render the output
	fn render(&mut self, renderer: &mut impl Renderer);
}

/// backend integration with platform specific api
/// games can also use this during initialization, e.g. to load resources
pub trait Backend {
	type Renderer: self::Renderer;
	/// enter the main loop
	/// this can be sealed with a private token so only the core can invoke
	/// or this can be separated into another (private) trait
	/// the `InputState` can be replaced with more complicated state that is
	/// managed by the platform independent common code; this is just an example
	/// of how the core can use state if the main loop is managed by the platform
	fn run(self, input: &mut InputState, game: impl Game) -> !;
}

a typical backend implementation probably looks like this:

mod dummy {
	struct Platform;
	struct Renderer {}
	struct Backend {
		renderer: Renderer,
		system: Platform,
	}
	impl crate::Backend for Backend {
		type Renderer = Renderer;
		fn run(mut self, input: &mut crate::InputState, mut game: impl crate::Game) -> ! {
			loop {
				//	use platform specific api to poll the event queue and translate
				//	to updates of the platform independent `InputState`
				//	may also be implemented using callbacks
				//
				// self.system.poll_events(|event| {
				// 	match event {
				//			Event::KeyDown(key) => { todo!() }
				//			//...
				// 	}
				// });

				let dt = todo!("calculate time step since previous frame");

				// prepare to draw a new frame, e.g. to clear the frame buffer
				self.renderer.begin_frame();
				game.update(dt, input);
				game.render(&mut self.renderer);
				// present back buffer, vsync, frame pacing, etc.
				self.renderer.end_frame();
			}
		}
	}
	impl crate::Renderer for Renderer {
		//...
	}
}

why did you put a trait object into the CoreController? why not just store a generic "game logic"? also, why is the Logic trait parameterized with a Drawer type, when all you can use to render frames is the DrawerContext?

I have different Logic implementations: one for the player-controlled logic, and one for a logic receiving its game data from the network (for when the user is playing against a remote player). The Logic trait has the fn render(&mut self, drawing_context: DC) -> Result<(), DC::Error>; method, that's why I need a generic argument.

The main reasoning behind Drawer/DrawerContext is that usually a render starts with an object created (a Canvas for the gui part and a Frame for the terminal one), and what I tried to express with my API is a rendering engine agnostic way of having an object created by a rendering engine aware struct (Drawer) that has contextual information about the current drawing context.

Your example surely works, but it doesn't encode in the type signature the fact that begin_frame must be called before rendering and that end_frame must end the drawing. What I wanted is something like MutexGuard: an object whose lifetime defines the boundary within which the rendering operations must happen.
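
A minimal sketch of that MutexGuard-style idea (all names here are invented): begin_frame hands out a guard, the drawing primitives are only reachable through the guard, and Drop performs the end-of-frame work, so the begin/end pairing is enforced by the type system:

```rust
struct Renderer {
    frames_finished: u32,
}

// the guard borrows the renderer exclusively, so nothing else can
// touch it while a frame is in flight
struct FrameGuard<'r> {
    renderer: &'r mut Renderer,
}

impl Renderer {
    // begin_frame is the only way to obtain drawing capabilities
    fn begin_frame(&mut self) -> FrameGuard<'_> {
        // clear buffers, etc. would go here
        FrameGuard { renderer: self }
    }
}

impl FrameGuard<'_> {
    fn draw(&mut self) {
        // drawing primitives live on the guard, not on Renderer
    }
}

impl Drop for FrameGuard<'_> {
    fn drop(&mut self) {
        // present back buffer, vsync, etc.: end_frame can't be forgotten
        self.renderer.frames_finished += 1;
    }
}

fn main() {
    let mut r = Renderer { frames_finished: 0 };
    {
        let mut frame = r.begin_frame();
        frame.draw();
    } // guard dropped here: the frame is ended automatically
    assert_eq!(r.frames_finished, 1);
}
```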

FYI, that’s basically just part of the pre-GAT workarounds to sort-of “emulate” GATs that are generic over lifetimes. A single trait trait Tr { type Ty<'a>; } becomes a parametrized trait trait Tr<'a> { type Ty; }, and the trait bound used becomes Foo: for<'a> Tr<'a>; but then there’s no direct way to bound the lifetime 'a, so something like type Ty<'a> where Self: 'a or type Ty<'x, 'y> where 'x: 'y becomes hard to emulate.

However, HRTBs with for<…> Trait<…> can have implicit bounds: when you write for<'a> Trait<'a, &'a Self>, the for<'a> implicitly only quantifies over lifetimes 'a with Self: 'a. Likewise here, with &'c1 &'c2 (), it creates a 'c2: 'c1 bound; this bound is also necessary for defining the type as Drawer2Args<'a1, 'a2>, because that requires a 'a2: 'a1 bound for the contained &'a1 mut Frame<'a2>.
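
The implied bound is observable in a tiny standalone example (shorten is a made-up function): the &'a &'b () argument is exactly what lets the compiler accept returning the longer-lived reference at the shorter lifetime:

```rust
// well-formedness of the `&'a &'b ()` parameter implies `'b: 'a`;
// without that parameter (or an explicit `'b: 'a` bound), returning
// a `&'b str` as a `&'a str` would not compile
fn shorten<'a, 'b>(_proof: &'a &'b (), long: &'b str) -> &'a str {
    long
}

fn main() {
    let s = String::from("frame");
    let short: &str = shorten(&&(), &s);
    assert_eq!(short, "frame");
}
```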

The neat thing about a type parameter with a default is that you don’t even have to write out these things: for<'c1, 'c2> Drawer<'c1, 'c2> is syntactic sugar for for<'c1, 'c2> Drawer<'c1, 'c2, &'c1 &'c2 ()>, which thus has the necessary restriction.

Now, why use this instead of an approach like the following?

trait Drawer {
    type Input<'c1, 'c2>
    where
        'c2: 'c1;
    type Context<'c1, 'c2, 's>: DrawerContext
    where
        Self: 's,
        'c2: 'c1;
}

The problem arises in the type Box<dyn for<'c1, 'c2, 's> Logic<<D as Drawer>::Context<'c1, 'c2, 's>>>, which fails with an error complaining that this apparently does not work with an implicit 'c2: 'c1 bound (along with a remark that “this is a known limitation that will be removed in the future”). In this sense, true GATs are still worse than their pre-GAT workaround, which can occasionally (like here) prove very useful.

Writing the type as Box<dyn for<'c1, 'c2, 's> Logic<<D as Drawer<'c1, 'c2>>::Context<'s>>>, with the implicit _OutliveBounds = &'c1 &'c2 () argument of Drawer does come with the necessary implied bound.


I’m still open to hearing feedback on how the code I’ve shown does or doesn’t translate to your actual use-case; feel free to come back with feedback or further issues :slight_smile:

Thanks for the detailed response, going to study it!