I know that the following standard example is rejected by the borrow checker:

struct X<'a, T> {
    s: &'a T,
}

impl<'a, T> X<'a, T> {
    fn foo(&'a mut self) {}
}

fn main() {
    let s = String::new();
    let mut x = X { s: &s };
    x.foo(); // first mutable borrow of x, lasting for all of 'a
    x.foo(); // error[E0499]: cannot borrow `x` as mutable more than once at a time
}
The given explanation is: the signature of 'foo' constrains the lifetime of the mutable borrow of 'self' to equal the lifetime of the held reference. But the held reference, and therefore the mutable borrow created at the first 'foo' call, lives through the second 'foo' call, which then tries to create a second mutable reference while the first is still alive (QED).
To me, this is a bit glib, and I feel like it's sweeping another possibility (which would compile) under the carpet. Let me explain my thought process.
The way I've been able to make sense of lifetime annotations is to say that, going into each function call, the compiler has a notion of the types (including lifetimes) of the arguments, and it does the minimal amount of coercion needed to make them match the function signature. That is why, when you pass values x, y with lifetimes 'x, 'y to a function with signature (x: &'a X, y: &'a Y), the compiler doesn't puke: it just selects 'a to be the intersection of 'x and 'y and coerces x, y to &'a X, &'a Y for the duration of the function call.
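For concreteness, here's a toy example of the kind of coercion I mean (the names pick, outer, inner are my own, not from the original problem):

fn pick<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() <= y.len() { x } else { y }
}

fn main() {
    let outer = String::from("outer");
    {
        let inner = String::from("inner");
        // &outer could be borrowed for longer than &inner, but the
        // compiler shrinks it so both arguments match a single 'a
        // (roughly the intersection of the two possible lifetimes)
        let shorter = pick(&outer, &inner);
        println!("{shorter}");
    }
}

This compiles: 'a is inferred per call, and each argument is coerced down to it.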
In my example, let's say that the reference held by x lives for the entire body of main; call that lifetime 'main_body. It seems clear that the compiler is inferring the type X<'main_body, String> for x. Then, at both 'foo' calls, it uses the implementation of 'foo' for X<'main_body, String>, which requires a &'main_body mut self, hence the error.
However, you could imagine a different logical flow. The compiler could reason instead that the two mutable borrows should have disjoint lifetimes, e.g. just the single statements that use them; call them 'first_call and 'second_call. You could imagine the compiler handling the first 'foo' call by coercing x to X<'first_call, String>, which only requires a &'first_call mut self, and likewise handling the second 'foo' call by coercing x to X<'second_call, String>, which only requires a &'second_call mut self, and there would be no issue with the borrow checker.
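For what it's worth, the disjoint-borrows outcome itself isn't exotic. If I drop the 'a from the signature (this variant is mine, not the original problem), the two borrows do get short, disjoint lifetimes and the program compiles:

struct X<'a, T> { s: &'a T }

impl<'a, T> X<'a, T> {
    fn foo(&mut self) {} // the &mut borrow gets a fresh lifetime, unrelated to 'a
}

fn main() {
    let s = String::new();
    let mut x = X { s: &s };
    x.foo(); // this mutable borrow ends right after the call...
    x.foo(); // ...so a second one is fine
}

That's the outcome I'm imagining the compiler could also reach for the &'a mut self version, by shrinking the lifetime parameter of x at each call.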
Why doesn't it do this? Note that I'm not saying it should; I'm just genuinely confused about why, in the formal model of type inference, this isn't what happens.
N.B. I don't think the issue is that the selection of the type of x for the 'foo' call somehow happens before the compiler looks at the signature of 'foo'. Indeed, you can produce a functionally identical example using a free function rather than a method, like this:
fn foo<'a, T>(x: &'a mut X<'a, T>) {}
...
foo(&mut x);
foo(&mut x);
...
When you look at it like that, the logic I'm suggesting seems even more reasonable: ordinarily, due to non-lexical lifetimes, we would see two mutable borrows in a row like that and just assume that the compiler knew to make sure that the first borrow ended before the second began.
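Indeed, that's exactly what happens when the borrowed type doesn't itself carry the lifetime parameter; another toy example of my own:

fn bar<'a, T>(_x: &'a mut Vec<T>) {}

fn main() {
    let mut v = vec![1, 2, 3];
    bar(&mut v); // 'a is inferred per call, so this borrow ends here...
    bar(&mut v); // ...and a second mutable borrow is accepted
}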
TLDR: why do my constraints appear to PROMOTE the lifetime of the mutable borrow rather than DEMOTE the lifetime parameter in the type of x?