Trait bounds for `Fn` returning a future that forwards the lifetime of the `Fn`'s arguments

Hi there!

I'm currently trying to figure out whether there is some way to express this borrow:

use std::future::Future;
use futures::future::BoxFuture; // 0.3.16

type BoxedTaskFn<A, R> = Box<
    dyn for<'a> FnOnce(&'a mut A) -> BoxFuture<'a, R> + Send
>;

fn run_boxed_task<A, R>(task: BoxedTaskFn<A, R>) {
    todo!()
}

fn run_task<A, R, T, F>(task: T)
where
    T: for<'a> FnOnce(&'a mut A) -> F + Send + 'static,
    F: Future<Output = R> + Send + 'a, // <- 'a obviously not declared here
{
    todo!()
}

Essentially, I want run_task to express the same borrow semantics as the boxed variant, but without the boxes. However, because the return type in the second variant is another type parameter instead of a dyn type (where I can place the 'a), I cannot use the HRTB lifetime 'a in the bounds for F. What I want to express is pretty much

T: for<'a> FnOnce(&'a mut A) -> (impl Future<Output = R> + Send + 'a) + Send + 'static

except that impl isn't allowed in this position. Is it possible to express this somehow? My intuition is that it's not, because F would need to be higher-kinded (F<'a>) to make this work?

I was able to get something compiling by factoring out a helper trait + blanket impl:

use std::future::Future;

trait Helper<'a, A: ?Sized> {
    type Future: Future + Send + 'a;
}

impl<'a, A, Fut, T> Helper<'a, A> for T
where
    A: 'a + ?Sized,
    Fut: Future + Send + 'a,
    T: (FnOnce(&'a mut A) -> Fut) + Send + 'static,
{
    type Future = Fut;
}

fn run_task<A: ?Sized>(_task: impl for<'a> Helper<'a, A>) {
    todo!()
}

Does that do the job?

Edit: added some ?Sized

2 Likes

Thanks -- that's an interesting approach!

A problem with this is that the compiler doesn't propagate the T: FnOnce(...) bound from the blanket impl into the function, so I had to add FnOnce as a supertrait as well in order to call task as a function:

use std::future::Future;
use tokio; // 1.9.0

trait Helper<'a, A: 'a, R>: (FnOnce(&'a mut A) -> Self::Future) + Send + 'static {
    type Future: Future<Output = R> + Send + 'a;
}

impl<'a, A, R, Fut, T> Helper<'a, A, R> for T
where
    A: 'a,
    Fut: Future<Output = R> + Send + 'a,
    T: (FnOnce(&'a mut A) -> Fut) + Send + 'static,
{
    type Future = Fut;
}

async fn run_task<A, R>(a: &mut A, task: impl for<'a> Helper<'a, A, R>) -> R {
    let future = (task)(a);
    future.await
}

#[tokio::main]
async fn main() {
    let mut state = 123u32;
    run_task(&mut state, |s: &mut u32| async {
        *s += 1;
    })
    .await;
    dbg!(state);
}

However, now I'm running into an issue where, for some reason, rustc claims that my closure only implements Helper for one specific lifetime, which, as far as I can see, shouldn't actually be the case?

error: implementation of `Helper` is not general enough
  --> src/main.rs:25:5
   |
25 |     run_task(&mut state, |s: &mut u32| async {
   |     ^^^^^^^^ implementation of `Helper` is not general enough
   |
   = note: `[closure@src/main.rs:25:26: 27:6]` must implement `Helper<'0, u32, ()>`, for any lifetime `'0`...
   = note: ...but it actually implements `Helper<'1, u32, ()>`, for some specific lifetime `'1`

error: could not compile `playground` due to previous error

Playground Link

How about this?

use std::future::Future;

trait Helper<'a, A: ?Sized> {
    type Future: Future + Send + 'a;
    
    fn call_once(self, param: &'a mut A) -> Self::Future;
}

impl<'a, A, Fut, T> Helper<'a, A> for T
where
    A: 'a + ?Sized,
    Fut: Future + Send + 'a,
    T: (FnOnce(&'a mut A) -> Fut) + Send + 'static,
{
    type Future = Fut;
    
    fn call_once(self, param: &'a mut A) -> Self::Future {
        (self)(param)
    }
}

async fn run_task<A: ?Sized, T: for<'a> Helper<'a, A>>(
    task: T,
    param: &mut A,
) -> <<T as Helper<'_, A>>::Future as Future>::Output {
    task.call_once(param).await
}

Heh, this was my first approach as well, before I realized that I could just require FnOnce as a supertrait and rewrote it. Unfortunately, it runs into the very same issue when you actually try to call it.

Edit: Here is another playground with your more elegant approach, which avoids the additional R type argument but eventually runs into the same error.

1 Like

Ah, interesting. It seems to be an issue with closures: if I define an async fn my_task(state: &mut u32) that does the same thing, then run_task(my_task, &mut state) compiles fine (sketched below). For some reason, the lifetime parameter in the inferred signature of the closure is early-bound where we want it to be late-bound (?). I don't know how to get around that.
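For reference, here's a minimal sketch of that working case, reusing the Helper/run_task definitions from your last post (the my_task body and the tokio main are just my illustrative scaffolding):

async fn my_task(state: &mut u32) {
    *state += 1;
}

#[tokio::main]
async fn main() {
    let mut state = 123u32;
    // Passing the `async fn` item instead of a closure compiles.
    run_task(my_task, &mut state).await;
    dbg!(state);
}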

@Yandros this seems like your wheelhouse :)

2 Likes

You've already provided the right answers to the issues at hand :upside_down_face:, except for an "explanation" of the difference you encountered between async fn and |...| async move { ... }.

It turns out there are two outstanding bugs w.r.t. these higher-order signatures and bounds related to async closures:

  • Can't have a higher-order bound on an associated type

    The first issue is related to higher-order bounds in general, and is not necessarily specific to closures. Basically, you can't introduce, within a function, an extra bound on a "higher-order assoc type" such as for<'a> <T as Helper<'a, ...>>::Future : Unpin, or for<'a> <F as FnMut<(&'a mut A,)>>::Output : Future<...>. I mean, you can, but no type will be able to meet the desired bounds, as showcased by the following example:

    trait Helper<'lt> {
        type Assoc;
    }
    
    fn foo<T> ()
    where
        for<'any>
            T : Helper<'any, Assoc = ()>
        ,
        // While the equality passes, the `Copy` doesn't!!
        for<'any>
            <T as Helper<'any>>::Assoc : Copy
        ,
    {}
    
    impl Helper<'_> for () {
        type Assoc = (); // is indeed `Copy`
    }
    
    const _: () = {
        let _ = foo::<()>; // Error the trait `for<'any> Copy` is not
                           // implemented for `<() as Helper<'any>>::Assoc`
    };
    

    I think the issue is called / related to "lazy normalization", for those interested.

    Note that this isn't the error this thread stumbled upon when using future-yielding closures, since by using a helper trait with a preemptive bound (: Future... + Send) on the associated Fut type, this issue was dodged :slight_smile:.

  • Closures are almost never higher order, which affects future-returning closures.

    The second issue is that you can't produce, with a closure, an anonymous future (e.g., using async [move] { ... }) that (re)borrows from the input parameters, at least not easily.

    To see why, consider this much simpler example:

    let first = |xs: &'_ [i32]| -> Option<&'_ i32> { // <-----------------------+
        xs.get(0)                                                            // |
    };                                                                       // |
    let elems = [42, 27];                                                    // |
    dbg!(first(&elems)); // <- Infers the lifetime `'_` to be that of `elems` --+
    
    let _: Option<&'static i32> = first(&[]); // Error
    

    Basically, closures are very rarely higher-order: instead, they use lifetime inference so that they can be called at least once in a way that works, which is why this limitation is rarely observed.

    Moreover, the only frequent situation where one could have hit this issue with closures is when feeding them to functions that do showcase higher-order closure bounds; but it turns out that for this very case, there is some kind of compiler-magical "back-pressure" from that closure bound which nudges the closure into becoming higher-order.

    This is showcased by the funneling-into-higher-order-ness trick:

    /// This is the *identity* function, but one which
    /// only takes higher-order closures as input, adding the constraint.
    /// Literally a funnel.
    fn higher_order_funnel<F> (f: F)
      -> F
    where
        F : FnOnce(&'_ [i32]) -> Option<&'_ i32>,
    {
        f
    }
    
    let first = higher_order_funnel(|xs: &'_ [i32]| -> Option<&'_ i32> {
        xs.get(0)
    });
    
    let elems = [42, 27];
    dbg!(first(&elems));
    
    let _: Option<&'static i32> = first(&[]); // OK
    

    With this, encountering a compiler error caused by a non-higher-order closure is, in the most common cases, very rare, which is the reason these properties of closures are not that well known.

    The real issue is that this "back-pressured higher-order promotion" only happens with the closure traits (the Fn... traits) verbatim: any kind of "equivalent subtrait" (e.g., the typical trait-alias trick of subtrait + blanket impl), such as your : for<'a> Helper<'a, ...> bound, won't be able to "nudge the closure into higher-order-ness" (see the non-async sketch below).

    • Note that async fns, on the other hand, are easily higher-order, which is why it works with them.
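    To make that non-promotion concrete, here's a hedged, non-async sketch (the Alias trait and funnel_via_alias are hypothetical names of mine, not anything from this thread): routing the same funnel through a subtrait + blanket impl should leave the closure non-higher-order.

    trait Alias: for<'a> FnOnce(&'a [i32]) -> Option<&'a i32> {}
    impl<T: for<'a> FnOnce(&'a [i32]) -> Option<&'a i32>> Alias for T {}

    /// Same funnel as before, but requiring the subtrait instead of the
    /// verbatim `FnOnce` bound: no higher-order promotion is expected here.
    fn funnel_via_alias<F: Alias>(f: F) -> F {
        f
    }

    fn main() {
        // Expected to fail here, or at the `'static` call below, with an
        // "implementation of `FnOnce` is not general enough"-style error,
        // because nothing nudges the closure into being higher-order.
        let first = funnel_via_alias(|xs: &'_ [i32]| -> Option<&'_ i32> {
            xs.get(0)
        });
        let elems = [42, 27];
        dbg!(first(&elems));
        let _: Option<&'static i32> = first(&[]);
    }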

So, to summarize:

  • If we use a concrete future type such as BoxFuture<'_, ...>, the simple and direct Fn... bounds suffice to make the callee's code work; and with those direct Fn... bounds, |...| async move { ... }.boxed() closures get nudged into being higher-order and everything Just Works™ (see the sketch after this list).

  • In the other scenarios, when directly using the Fn... traits and adding bounds on the return type, we hit the lazy-normalization bug (on top of requiring nightly to even be able to name, so as to bound, the Output associated type of the Fn... family); or we go through a helper proxy trait, but then our closure doesn't get promoted to higher-order.
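For instance, here is a minimal sketch of that first bullet (the function names and the tokio runtime are mine, assuming futures 0.3; this is not code from this thread):

use futures::future::{BoxFuture, FutureExt}; // futures 0.3

async fn run_task<A, R, T>(a: &mut A, task: T) -> R
where
    T: for<'a> FnOnce(&'a mut A) -> BoxFuture<'a, R> + Send,
{
    task(a).await
}

#[tokio::main]
async fn main() {
    let mut state = 123u32;
    // The verbatim `FnOnce` bound nudges the closure into being
    // higher-order, so the borrowed, boxed future is accepted.
    run_task(&mut state, |s: &mut u32| async move { *s += 1 }.boxed()).await;
    dbg!(state);
}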

That's the sad / painful situation with "async closures" as of today: boxing is currently required. Note that boxing already implicitly happens when using #[async_trait], which means most of the futures out there are already boxed anyway; and the code remains fast and performant nonetheless, since the size of those futures is, in practice, almost always very small, so these extra heap allocations have a quite negligible cost :wink:

  • I suspect it could be possible, with min_type_alias_impl_trait and some macros, to "name" the existential future type returned by the async closure, so as to write ad-hoc funnelers and, hopefully, define higher-order async closures with them... But it's a bit late for me, and I don't have that much free time, so I won't be testing this theory yet.
3 Likes

Thanks for the extensive write-up -- this makes much more sense now! :slight_smile:

It's a bit of an unfortunate limitation but, to be honest, I'd probably not want to use the helper-trait-based solution in any public interface anyway. It's a nice proof of concept, but it's simply too complicated and guaranteed to confuse most users quite a bit when it appears in the signature of a function. Looks like I'll be boxing futures in cases like this for now.

I generally try to avoid dynamic dispatch (and boxing) wherever I can. I'm less worried about the overhead of the allocation than about the effective optimizer barrier that any call through a dyn interface represents, and the inability to inline across it. In all hot paths of my app, instead of using async_trait, I generally use GATs + type_alias_impl_trait. Anyways, I digress. :slight_smile:

Is there any future addition to the language planned that would make this function signature expressible without a helper trait, by any chance?

1 Like

Yes, I was wondering whether something like this might be possible!

Thanks, this is a great explanation (along with the stuff about closures not usually satisfying higher-order bounds).

1 Like
