The lifetime of the implicitly created mutable borrow is confusing

Consider this example:

struct Test<'a> {
    v: &'a i32,
}

trait MyTestTrait {
    type Output;
    fn borrow_mut(self) -> Self::Output;
}

impl<'a> MyTestTrait for &'a mut Test<'a> {
    type Output = i32;
    fn borrow_mut(self) -> i32 {
       0
    }
}
fn main() {
    let i = 0;
    let mut m = Test { v: &i };
    m.borrow_mut(); // #1
    let i = m.v;    // #2
}

I get an error at #2, for which the compiler says:

79 |     let i = m.v;
   |             ^^^
   |             |
   |             use of borrowed `m`
   |             borrow later used here

First, the implicit mutable borrow created at #1 should at most live until the end of the method call; that is, it should be over after #1. Why does the compiler say the borrow is later used at #2? I cannot understand this point.

Second, Rust says lifetime annotations do not change how long a reference lives. In this case, judging from the diagnostic, the implicitly created borrow's lifetime seems to depend on the lifetime annotation. In other words, the diagnostic would only make sense if the reference's lifetime had been lengthened.

What's the exact reason this case is rejected by the compiler?

You are forcing lifetimes to be equal that shouldn't be forced to be equal. Try this:

impl<'a, 'b> MyTestTrait for &'a mut Test<'b> {
    type Output = i32;
    fn borrow_mut(self) -> i32 {
       0
    }
}

To be specific, you are saying that the following two lifetimes must be equal:

  1. The duration in which m borrows from i.
  2. The duration in which the mutable reference passed to borrow_mut borrows from m.

This is an anti-pattern in Rust, and is almost always wrong:

  • The 'a in Test<'a> says that Test holds a reference to some external value with the lifetime 'a, and so cannot be allowed to exist beyond the end of 'a.
  • The 'a in &'a mut …, on the other hand, says that the target must exist until the end of 'a, so that the reference remains valid.

The combination of these two conditions, along with the exclusivity guarantees of &mut, means that creating an &'a mut Test<'a> will lock up the target until it is dropped (and sometimes make it completely unusable).


In my mind, the lifetime annotation means all associated borrows should live at least as long as the smaller of the annotated lifetimes. In this case, the borrow of i (&i) has a longer duration than the borrow of m (&mut m). That is, if 'a is taken to be the lifetime of the latter, the borrow of i also satisfies the condition for that value of 'a. So why must the lifetime of the borrow of m be considered as long as that of i? Moreover, Rust does say that a lifetime annotation does not change how long a reference lives. This is my confusion here.

I cannot understand the second part

says that the target must exist until the end of 'a, so that the reference remains valid.

Consider this one

struct Data<'a> {
    ri: &'a i32,
    rf: &'a f32,
}
fn main() {
    let i = 0;
    let ri = &i;
    {
        let f = 1.0;
        let rf = &f;
        let d = Data { ri, rf };
    }
}

Doesn't 'a in this case just mean both rf and ri should live at least as long as the lifetime 'a? Since ri's lifetime is longer than that of rf, 'a can roughly be taken as the lifetime of rf. Their lifetimes needn't be equal, but they all need to satisfy the common 'a, I think. Analogously, why can't the 'a in the first case be taken as the lifetime of the mutable borrow of m?

(Emphasis added.)

Once an inferred lifetime is calculated (via static analysis), or dictated (e.g. by a lifetime parameter), it doesn't change. They're not dynamic. However, they can seem to be that way for a variety of reasons:

  • Lifetimes can be flow sensitive (not relevant for the rest of this post)
  • Variance can allow reborrows or subtype coercions so that you have a different lifetime when, say, passing to a function or taking a subborrow.
    • And reborrows are automatic in some cases

But they don't actually change, even though we usually call this subtyping "shrinking the lifetime". When it comes to type declarations and argument signatures, lifetimes take on single "values" -- regions of validity. They don't mean "this lifetime or any shorter one". In particular, if you have multiple lifetimes that all use the same name (&'a mut Test<'a>, both references in Data<'a>), the lifetimes refer to the same regions. You have asked for lifetime equality by giving them the same name.

Another key thing to know is that lifetimes behind a &mut are invariant -- they cannot grow or shrink -- otherwise you could create memory unsafety. That is, in

&'x mut Test<'y>

You can reborrow a shorter &'z mut Test<'y>, but you can't reborrow or otherwise get a different lifetime in the Test<'_>. Any interaction through the &mut will be with a Test<'y>.

So here:

fn main(){
   let i = 0;
   let mut m = Test { v: &i };  // Call this 'i, it has to be valid from here...
   m.borrow_mut(); // #1           |
   let i = m.v;    // #2           |__ to here, where you use it again
}

m is a Test<'i> and 'i has to cover #1 and #2. And when you call borrow_mut, it takes a &'a mut Test<'a>, and so

  • The lifetimes must be the same
  • It must be exactly 'i because of the invariance property
  • Thus you're taking a &'i mut Test<'i>

This means your exclusive borrow lasts through #1 and #2 as well. So m is still exclusively borrowed at #2 and you can't use it -- hence the error. This is why &'a mut Anything<'a> is an antipattern -- you can never use the Anything<'a> again. You can only use the &mut or things derived from it.


Lifetimes are also forward-looking: if you have a pre-existing lifetime 'x, and in some later code you create another borrow 'x, the second borrow doesn't have to extend backwards in "time". It just needs to be valid from the point of borrowing forward for the rest of 'x. So this example is pretty uninteresting on the surface:

fn main() {
    let i = 0;
    let ri = &i;                 // 'i
    {                            // |
        let f = 1.0;             // |
        let rf = &f;             // |  'f
        let d = Data { ri, rf }; // |__|___ both can end here (last use)
    }
}

The two borrows have to be the same, but they can both just be 'i here. It is not the case that 'i has to last for the entire scope of the ri variable -- i.e. past the inner block. Rust used to work like that years ago, but now we have something called non-lexical lifetimes (NLL). Now borrows only have to last through the last use of the borrow, so 'i can end just after creating d.

However, we can make the example more interesting by forcing 'i to last beyond the inner block:

fn main() {
    let i = 0;
    let ri = &i;                 // 'i
    {                            // |
        let f = 1.0;             // |
        let rf = &f;             // |  'f
        let d = Data { ri, rf }; // |  |___ 'f ends here (f is dropped)
    }                            // |
    println!("{ri}");            // |__ 'i has to last until here
}

Why does this still compile? d can have type Data<'f>, due to subtyping. The lifetime on &i is covariant, so it can be "shrunk" (when you put a copy of it into d, the copy's lifetime is coerced to the shorter 'f).

This would work even with mutable borrows; you would get an implicit reborrow when creating d instead of a copy.


Thanks. The last case in your answer is exactly the part that confused me, since it differs from the first example. You seem to convey a key point:

Another key thing to know is that lifetimes behind a &mut are invariant -- they cannot grow or shrink

This is the special rule that explains the difference between &mut Test<'a> and the last case, right? In the last case, the lifetime of &i can be shrunk, while the lifetime behind &mut m is invariant and cannot be shrunk. Hence, the lifetime parameter 'a in &'a mut Test<'a> cannot be inferred to some 't as if there were a temporary mutable reference, created at the invocation of borrow_mut, whose lifetime 't was shorter than that of both m and i. Instead, because the rule says that the lifetime behind &mut T is invariant, the lifetime parameter 'a is always inferred to be the lifetime of m, in order to keep the rule satisfied. Is this the key point here?

Well, they're pretty different in general, as one involves nested lifetimes and the other does not, but &'a mut Thing<'a> being a problematic anti-pattern can indeed be considered due to the invariance of the inner lifetime (whereas you can often "get away" with &'a Thing<'a>, as both lifetimes are covariant, so they can both shrink to some common, sufficiently short lifetime; on top of this, you can have multiple shared &Thing<'_> at the same time, while &mut Thing<'_> is exclusive).

Just to be very clear, with &'x mut Thing<'y>, 'y is invariant while 'x is still covariant. [1] But when they have to be the same -- when you say &'a mut Thing<'a> -- because the inner lifetime is invariant, the outer lifetime has to be invariant too. You have dictated that they must be equal, so because the inner lifetime cannot change, the outer lifetime cannot change either.

I would say the key points are

  • &mut is better thought of as being "exclusive" (vs. "mutable") -- no other accesses or other borrows (besides sub/reborrows) can overlap with it
  • The inner lifetime behind &mut _ is invariant -- you can't temporarily shorten it
  • When you tie lifetimes together, they must always be equal, so with &'a mut Thing<'a> in particular both lifetime positions become invariant and always-equal
  • So once you create a &'a mut Thing<'a>, you can never directly use Thing<'a> again -- the only possible access is via the &'a mut _, as it has exclusive access for the entirety of Thing<'a>'s validity

The middle two points combine to make it impossible to "[create] a temporary mutable reference [...] at the invocation of borrow_mut", and the first and last points result in the error when you try to reuse Test<'a> after creating &'a mut Test<'a>.


  1. Or more generally phrased, given a &'x mut T, 'x is covariant but T is invariant. ↩︎


Thanks. I see your interpretation. That is to say, the complicated example above is actually similar to the following case:

struct Test<'a> {
    v: &'a i32,
}

impl<'a> Test<'a> {
    fn show(self: &'a mut Self) {}
}

fn main() {
    let i = 0;
    let mut t = Test { v: &i };  // ---------- the start of lifetime 't
    t.show();
    t;  // error
}   // ---------------------------------------- end of lifetime 't

Since the invocation of show requires a borrow of type &'a mut Test<'a>, and the inner lifetime parameter in Test<'a> is invariant according to the rule you stated, the inner lifetime parameter can only be the lifetime of t (that is, 't), which is deduced from t. The lifetime of the mutable reference must then obey the deduced 'a; that is, it must be no shorter than 't. If the outer 'a were deduced to a lifetime shorter than 't, the inner 't would need to be covariant to agree on that value.

Hence, in order to make the implicitly created mutable reference to Test<'a> obey the deduced 'a (that is, 't), the reference must have a lifetime as long as 't; so throughout the extent where 't is active, t is not usable.

