error[E0308]: mismatched types
--> src/main.rs:6:4
|
6 | test_lifetime(ashow);
| ^^^^^^^^^^^^^^^^^^^^ one type is more general than the other
|
= note: expected opaque type `impl for<'l> Future<Output = ()>`
found opaque type `impl Future<Output = ()>`
= help: consider `await`ing on both `Future`s
= note: distinct uses of `impl Trait` result in different opaque types
note: the lifetime requirement is introduced here
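(The full reproduction is not quoted above; piecing together the signature quoted in the first reply with the ashow and main shown later in the thread, it was presumably something like the following.)

fn test_lifetime<F, U>(_: F)
where
    F: for<'l> Fn(&'l str) -> U,
{}

async fn ashow(_: &str) {}

fn main() {
    test_lifetime(ashow); // error[E0308]: one type is more general than the other
}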
Shouldn't the function ashow implement the trait for<'l> Fn(&'l str) -> impl Future for any lifetime 'l? Why does the compiler say the type of the argument passed to test_lifetime does not satisfy the requirement?
Looking at the diagnostic carefully, the compiler does not seem to complain that the whole type of ashow fails the requirement, but rather that the return type of the async function does. What is the reason for that? The issue does seem to be related to the lifetime in the trait bound: if the example is changed to something like the following, the code compiles.
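(The modified example is not quoted in the thread; judging from the later discussion, which describes the change as replacing &'l str with Arg and gives the explicit call test_lifetime::<_, &'static str, _>(ashow), it was presumably along these lines.)

fn test_lifetime<F, Arg, U>(_: F)
where
    F: Fn(Arg) -> U,
{}

async fn ashow(_: &str) {}

fn main() {
    test_lifetime(ashow); // compiles: Arg and U get fixed to one specific lifetime's types
}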
In order to call a function, all of its generic arguments must be specified (usually implicitly). But in
fn test_lifetime<F, U>(_: F) where F: for<'l> Fn(&'l str) -> U {}
there is no single type U which equals the type ashow returns, because that type captures the lifetime 'l. But U is declared in the generic parameter list <F, U>, outside the for<'l>; it is outside the scope of for<'l> and can't possibly capture it.
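To make the "no single U" point concrete, here is a minimal analogy of my own (not code from the thread): the same failure shows up with an ordinary function whose return type visibly captures the argument's lifetime.

struct Captured<'l>(&'l str);

// The returned type mentions the input lifetime, just as ashow's future does.
fn capture(s: &str) -> Captured<'_> {
    Captured(s)
}

fn takes<F, U>(_: F)
where
    F: for<'l> Fn(&'l str) -> U,
{}

fn main() {
    // takes(capture); // rejected: U would have to equal Captured<'l> for every 'l at once,
    // but U is chosen once, outside the for<'l>.
}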
You can use async_fn_traits to dodge this problem by not mentioning the future type:
fn test_lifetime<F>(_: F)
where
    F: for<'l> async_fn_traits::AsyncFn1<&'l str>,
{}
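For completeness, a sketch (mine, assuming the async_fn_traits crate is added as a dependency) of the call this bound accepts, with ashow as defined later in the thread:

async fn ashow(_: &str) {}

fn main() {
    test_lifetime(ashow); // accepted: the future type is never named as a separate type parameter
}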
Let me try to understand your answer. Do you mean the following: because the Future returned by ashow captures its parameter of type &'l str, the returned Future has to satisfy ReturnedFuture: 'l; that type is what U would have to denote, but we cannot name the higher-ranked lifetime 'l introduced by for<'l> in a bound like U: 'l, so U cannot match a ReturnedFuture that satisfies ReturnedFuture: 'l. Right?
The key issue is that the signature of test_lifetime lacks the trait bound U: 'l.
@kpreid I tried to simplify AsyncFn1 to see why it works, like the following:
use std::future::Future;

trait MyFuture<T> {
    type OutputFuture;
    type Output;
}

impl<F, Fut, T> MyFuture<T> for F
where
    F: Fn(T) -> Fut, // #1
    Fut: Future,
{
    type OutputFuture = Fut;
    type Output = Fut::Output;
}

fn test_lifetime<F>(_: F) where F: for<'l> MyFuture<&'l str> {}

async fn ashow(_: &str) {}

fn main() {
    test_lifetime(ashow);
}
However, look at #1: it still does not have a trait bound like Fut: 'l, so why does it work? I suspect the key point is that the implementation uses a type parameter that is not a reference. So, I verified this point with this code:
No, the key issue is that type parameters like U must resolve to a single type, whereas a function where the output captures an input lifetime has a different return type for every input lifetime, because types that differ only by lifetime are still distinct types.
There is no lifetime to name in that context,[1] so a bound doesn't make sense; why would something that doesn't make sense be required for things to work?
You have some misunderstanding but I'm failing to grasp what it is.
In this version of the code, the ashow you pass to test_lifetime is coerced to a supertype that works with one specific lifetime, à la this more explicit call:
test_lifetime::<_, &'static str, _>(ashow);
The non-higher-ranked bound won't work if you need it to work with all lifetimes, or with lifetimes unnameable by the caller -- such as borrows local to the function body.
besides 'static, which is a bound you do not want
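A small illustration of the "lifetimes unnameable by the caller" point (my own sketch, assuming the higher-ranked Fn bound from above): only the higher-ranked version lets the callee apply f to a borrow of its own local data.

fn call_on_local<F>(f: F)
where
    F: for<'l> Fn(&'l str),
{
    let s = String::from("local");
    f(s.as_str()); // fine: here 'l is a lifetime no caller could ever name
}

// With a non-higher-ranked bound such as F: Fn(Arg), Arg would be fixed by the caller
// before the body runs, so f(s.as_str()) would be a type mismatch.

fn main() {
    call_on_local(|s: &str| println!("{s}"));
}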
Oh, you mean: since the Future returned by ashow captures its parameter for any lifetime 'l, the returned Future is really an infinite list of types that vary by lifetime; the type parameter U must be determined to be a single type in the monomorphization step, hence U cannot denote such a higher-ranked type.
By the way, why does changing &'l str to Arg make the compiler accept it?
Because there's a solution in that case -- an infinite number of solutions, just pick any lifetime for Arg = &'lifetime str and U = _UnnameableFuture<'lifetime>.
Since a type parameter must be a single type, the compiler has to select a concrete lifetime to form a specific type, so that Arg and U can denote the corresponding types, right? The compiler prefers to choose 'static.
I don't know that it has a preference in this case, where there are literally no other constraints and any lifetime will do. Since there are no other constraints, it's also irrelevant; it need not choose one specific lifetime, it just needs to prove that at least one solution exists.
Ok, so the 'lifetime in your comment above means an arbitrary but specific lifetime, not any lifetime; this is different from 'a in for<'a>, which denotes an infinite list of lifetimes. In other words, a type parameter can never denote a higher-ranked type like for<'a> TypeCtor<'a>.
The compiler prefers to choose 'static
That was my misunderstanding. The compiler selects an appropriate lifetime for the context in the same manner as it selects lifetimes when a function is called.
@quinedot Back to this example: why does it work? As said above, a type parameter can only denote a specific type; however, in this example the trait bound uses for<'l> MyFuture<&'l str>, so the type parameter T in the implementation of MyFuture for F would have to denote &'l str, which is an infinite family of types.
Did I misunderstand this example? Is the reason that the compiler selects a specific lifetime for the implementation?
There's no infinite type in that code. Instead, for<'l> MyFuture<&'l str> represents an infinite number of traits, which is fine because what you have there is a list of trait bounds anyway.
But the moment &'l str is used as a type, the 'l is already "fixed", so the type is only one &'l str (not &'0 str, &'1 str, etc. all at the same time). Formally speaking, 'l is part of the "context" in which the type &'l str is defined. Of course, by changing the context you can get different types, but if the context is fixed then the type &'l str is fixed too. Compare this to the U in your very first example, where it is required to be multiple types at the same time and in the same context.
Mathematically speaking, asking for U is like asking for a single number n, while the example with &'l str is like saying "I'll give you a number n (the lifetime 'l), and for each number I give you, you need to give me one back (the &'l str)". In the first case the number n is only one; it can't be both 0 and 1 at the same time, which is however what's needed later on. In the second case you can ask with an infinite number of different ns; each time the answer will be one single number m, but the answer can give different ms for different ns.
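Restating the two shapes with quantifiers (my paraphrase of the analogy above, not wording from the thread): the original signature asks for ∃U. ∀'l. F: Fn(&'l str) -> U, i.e. one U that works for every lifetime, whereas what ashow actually provides is ∀'l. ∃Fut. F: Fn(&'l str) -> Fut. The MyFuture / AsyncFn1 version works because Fut is a parameter of the blanket impl, so it sits under the ∀'l and may be chosen separately for each lifetime, which is exactly what the single U in the original signature cannot do.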