Trait bounds for `Fn` returning a future that forwards the lifetime of the `Fn`'s arguments

You've already provided the right answers to the issues at hand :upside_down_face:, except for an "explanation" of the difference encountered between `async fn` and `|...| async move { ... }`.

It turns out there are two outstanding bugs w.r.t. these higher-order signatures and bounds related to async closures:

  • Can't have a higher-order bound on an associated type

    The first issue is related to higher-order bounds in general, not necessarily to closures. Basically, you can't introduce, within a function, an extra bound on a "higher-order associated type", such as `for<'a> <T as Helper<'a, ...>>::Future : Unpin`, or `for<'a> <F as FnMut<(&'a mut A,)>>::Output : Future<...>`. I mean, you can, but no type will be able to meet the desired bounds, as showcased by the following example:

    trait Helper<'lt> {
        type Assoc;
    }

    fn foo<T> ()
    where
        for<'any> T : Helper<'any, Assoc = ()>,
        // While the equality passes, the `Copy` doesn't!!
        for<'any> <T as Helper<'any>>::Assoc : Copy,
    {}

    impl Helper<'_> for () {
        type Assoc = (); // is indeed `Copy`
    }

    const _: () = {
        let _ = foo::<()>; // Error: the trait `for<'any> Copy` is not
                           // implemented for `<() as Helper<'any>>::Assoc`
    };

    I think the issue is called / related to "lazy normalization", for those interested.

    Note that this isn't the error this thread stumbled upon when using future-yielding closures, since by using a helper trait with a preemptive bound (`: Future... + Send`) on the associated `Fut` type, this issue was dodged :slight_smile:.

  • Closures are almost never higher order, which affects future-returning closures.

    The second issue is that you can't easily get a closure to produce an anonymous future (e.g., using `async [move] { ... }`) that (re)borrows from the input parameters.

    To see why, consider this much simpler example:

    let first = |xs: &'_ [i32]| -> Option<&'_ i32> { // <-----------------------+
        xs.get(0)                                                            // |
    };                                                                       // |
    let elems = [42, 27];                                                    // |
    dbg!(first(&elems)); // <- Infers the lifetime `'_` to be that of `elems` --+
    let _: Option<&'static i32> = first(&[]); // Error

    Basically, closures are very rarely higher-order: instead, they rely on lifetime inference so as to be callable at least once in a way that works, which is why we rarely observe this limitation of closures.

    Moreover, the only frequent situation in which one could have hit this issue with closures is when feeding them to functions that do feature higher-order closure bounds; but it turns out that, in this very case, there is some kind of compiler-magical "back-pressure" from that closure bound which nudges the closure into becoming higher-order.

    This is showcased by the funneling-into-higher-order-ness trick:

    /// This is the *identity* function, but one which
    /// only takes higher-order closures as input, adding the constraint.
    /// Literally a funnel.
    fn higher_order_funnel<F> (f: F)
      -> F
    where
        F : FnOnce(&'_ [i32]) -> Option<&'_ i32>,
    {
        f
    }

    let first = higher_order_funnel(|xs: &'_ [i32]| -> Option<&'_ i32> {
        xs.get(0)
    });
    let elems = [42, 27];
    dbg!(first(&elems)); // <- `'_` is no longer tied to `elems`
    let _: Option<&'static i32> = first(&[]); // OK

    With this, encountering a compiler error caused by a non-higher-order closure is, in the most common cases, very rare; which is the reason these properties of closures are not that well-known.

    The real issue is that this "back-pressured higher-order promotion" only happens with the closure traits (the `Fn...` traits) verbatim: any kind of "equivalent subtrait" (e.g., the typical trait-alias trick of a subtrait + blanket impl) won't be able to "nudge the closure into higher-order-ness", such as with your `: for<'a> Helper<'a, ...>` example.

    • Note that `async fn`s, on the other hand, are easily higher-order, which is why things do work with them.
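
To make the above concrete, here is a minimal, self-contained sketch of the helper-trait pattern with its preemptive `: Future + Send` bound, and of an `async fn` satisfying the resulting higher-order bound (all the names here, `BorrowingAsyncFn`, `callee`, `first`, are made up for illustration, and a tiny no-op-waker `block_on` is included only to drive the future):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Hypothetical helper trait: the `Future + Send` bound is stated
/// *preemptively* on the associated `Fut` type, so the callee never
/// has to write a higher-order bound on an associated type itself.
trait BorrowingAsyncFn<'a> {
    type Fut: Future<Output = Option<i32>> + Send + 'a;
    fn call(&self, xs: &'a [i32]) -> Self::Fut;
}

/// Blanket impl: any suitable future-returning callable qualifies.
impl<'a, F, Fut> BorrowingAsyncFn<'a> for F
where
    F: Fn(&'a [i32]) -> Fut,
    Fut: Future<Output = Option<i32>> + Send + 'a,
{
    type Fut = Fut;
    fn call(&self, xs: &'a [i32]) -> Fut {
        self(xs)
    }
}

/// The callee needs only the higher-order helper bound.
fn callee<F: for<'a> BorrowingAsyncFn<'a>>(f: F) -> Option<i32> {
    let xs = [42, 27];
    block_on(f.call(&xs))
}

/// `async fn`s are easily higher-order, so this one meets the bound;
/// a literal closure in its place would not get promoted.
async fn first(xs: &[i32]) -> Option<i32> {
    xs.first().copied()
}

pub fn demo() -> Option<i32> {
    callee(first)
}

/// Minimal single-future executor with a no-op waker, included only
/// to drive the (immediately-ready) future to completion.
fn block_on<F: Future>(fut: F) -> F::Output {
    unsafe fn vt_clone(_: *const ()) -> RawWaker { noop_raw_waker() }
    unsafe fn vt_noop(_: *const ()) {}
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(vt_clone, vt_noop, vt_noop, vt_noop);
    fn noop_raw_waker() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    println!("callee(first) = {:?}", demo()); // Some(42)
}
```

Swapping `callee(first)` for `callee(|xs: &[i32]| async move { xs.first().copied() })` is exactly what should fail to compile, since nothing nudges that closure into being higher-order.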

So, to summarize:

  • If we use a concrete future type such as a `BoxFuture<'_, ...>`, the simple and direct `Fn...` bounds suffice to make the callee's code work; and when feeding `|...| async move { ... }.boxed()` closures through these direct `Fn...` bounds, those get nudged into being higher-order and everything Just Works™.

  • In the other scenarios, when directly using the `Fn...` traits and adding bounds on the return type, we hit the lazy-normalization bug (on top of requiring nightly to even name (so as to bound it!) the `Output` associated type of the `Fn...` family); or we go through a helper proxy trait, but then our closure doesn't get the higher-order promotion.
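
As a sketch of the first bullet without pulling in the `futures` crate (hand-rolling `BoxFuture` as a `Pin<Box<dyn Future ...>>` alias, with `Box::pin(...)` playing the role of `.boxed()`; the names `BoxFut` and `callee` are made up, and the same toy no-op-waker executor is used to poll the result):

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Hand-rolled stand-in for `futures::future::BoxFuture<'a, T>`.
type BoxFut<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

/// A direct, higher-order `Fn...` bound on a *concrete* (boxed)
/// future type: no bound on an associated type is needed at all.
fn callee<F>(f: F) -> Option<i32>
where
    F: for<'a> Fn(&'a [i32]) -> BoxFut<'a, Option<i32>>,
{
    let xs = [42, 27];
    block_on(f(&xs))
}

pub fn demo() -> Option<i32> {
    // The direct `Fn...` bound nudges the closure into being
    // higher-order; `Box::pin(...)` plays the role of `.boxed()`.
    callee(|xs| Box::pin(async move { xs.first().copied() }))
}

/// Minimal single-future executor with a no-op waker.
fn block_on<F: Future>(fut: F) -> F::Output {
    unsafe fn vt_clone(_: *const ()) -> RawWaker { noop_raw_waker() }
    unsafe fn vt_noop(_: *const ()) {}
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(vt_clone, vt_noop, vt_noop, vt_noop);
    fn noop_raw_waker() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    println!("{:?}", demo()); // Some(42)
}
```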

That's the sad / painful situation with "async closures" as of today: boxing is currently required. Note that boxing already implicitly happens when using `#[async_trait]`, which means most of the futures out there are already boxed anyway; and the code remains fast and performant nonetheless, since the size of those futures is, in practice, almost always very small, so these extra heap allocations have a quite negligible cost :wink:

  • I suspect it could be possible, with `min_type_alias_impl_trait` and some macros, to "name" the existential future type returned by the async closure, so as to be able to write ad-hoc funnelers and, hopefully, to define higher-order async closures with them... But it's a bit late for me, and I don't have that much free time, so I won't be testing this theory yet.