What if I get Lifetimes Wrong?


#1

What if I get my lifetimes wrong? Assuming I can make the compiler happy, what is the consequence of a mistake? Do I leak memory? Do I free memory too slowly?

Thanks,

-kb


#2

If you get lifetimes ‘wrong’ then your code will not compile.

Do you have an example of what may be considered wrong?


#3

If your code builds, and correctly implements your application, then getting lifetimes wrong just means unnecessary clone() calls slowing your code and increasing its memory footprint.

As a neophyte myself, I have found that porting my code from Python to Rust makes for faster performance, a smaller footprint, and better use of multiple cores, and I’ve yet to actually need to use named lifetimes.

That means I probably am cloning objects more than I should, but premature optimization is the root of all evil.


#4

It’s important to understand that lifetimes are descriptive not prescriptive. That is, you cannot extend how long something lives with a lifetime. So leaking memory in this way, or freeing it too late, is impossible.


#5

So if I get lifetimes wrong (but the compiler is satisfied) the cost is unnecessary data cloning?

-kb


#6

Unless you write a library. In that case it may well happen that code using your code fails to compile…


#7

Oh! Maybe I have it…

As I am pawing around in the weeds regarding lifetimes I keep feeling like this is something the compiler could, theoretically, figure out for me. Are you saying that, yes, it could, but only if it can see the whole program? If I am programming a fragment (a library) I need to say something about the code it can’t see?

Right…?

-kb


#8

Sometimes there’s a design decision that the compiler can’t choose for you.

struct FooBar<'a>(&'a str);

impl<'a> FooBar<'a> {

    /// The returned string is valid for the entire lifetime associated with `FooBar`.
    fn foo(&self) -> &'a str { self.0 }
    
    /// The returned string is valid only while this &self is borrowed.
    fn bar(&self) -> &str { self.0 }
    /// Same as `bar`, with the elided lifetime written out explicitly.
    fn explicit_bar<'b>(&'b self) -> &'b str { self.0 }
    
    /// Setter just to demonstrate borrows below
    fn set(&mut self, s: &'a str) { self.0 = s; }
}

fn main(){
    let mut foobar = FooBar("hello");
    let foo = foobar.foo();
    println!("{}", foo);
    
    foobar.set("world");
    let bar = foobar.bar();
    println!("{}", bar);
    
    // this will fail because `bar` is still borrowing it.
    // error[E0502]: cannot borrow `foobar` as mutable because it is also borrowed as immutable
    foobar.set("goodbye");
}

There are valid reasons to want behavior like foo() or bar(). Either way, the compiler will make sure that the lifetimes you wrote are actually possible to satisfy.


#9

Example for what @llogiq said (getting the lifetime slightly wrong in a library):

What’s interesting here is that changing the lifetimes is an incompatible change, so it requires a major version bump.


#10

important to understand that lifetimes are descriptive not prescriptive

Wow, that is a total revelation to me. That should be pointed out in the book! Thanks!


#11

I think I am understanding details of lifetimes (more or less), but I am trying to pin down the big-picture. (I do better working out smaller problems when they are motivated by the bigger picture.)

From what I can tell: There is no theoretical reason for why the compiler couldn’t figure out the lifetimes for us, as long as it can see (and analyze) all of the code in a program.

The problems seem to be at least one, maybe two:

  • The compiler can’t see all of the code when building a library, so lifetimes are a promise about how the library will be used in the future, by code the compiler has never seen.

  • As a practical matter, even if there is no library confusion and the compiler can see all the code, maybe the current compiler doesn’t want to do that global analysis; it might be a nasty problem inferring all the elided lifetimes.

A “benefit” of lifetimes is they help programmers reason about lifetime issues, and communicate back and forth with the compiler about this reasoning. (Yes, most every newbie will find this assertion a stretch, like being told how wonderful it is to eat terrible tasting vegetables.)


Proposed axiom: If the compiler is happy with the lifetimes specified for a completed program, those lifetimes are–necessarily–both correct and optimal.

The code might be buggy or stupid (the lifetimes might help explain any stupidities, see comment about vegetables above), but the lifetimes themselves are not the source of the stupidity, they are neither the source of bugs nor performance problems. They are an odd beast, closer to being documentation than source code. (As steveklabnik wrote “lifetimes are descriptive not prescriptive”).

Or, put another way, my subject “What if I get Lifetimes Wrong?”, kind of isn’t possible. The lifetimes are not where “wrong” is kept. (Library issues notwithstanding. Rust makes program-wide guarantees; lifetimes are needed to work on program fragments, useful even when not writing a library.)

Is this correct??

Thanks,

-kb


#12

Well, the thing is this. Let’s say that you have this function:

fn foo(x: &i32, y: &i32) -> &i32 {
    y
}

Rust could determine that this works:

fn foo<'a, 'b>(x: &'a i32, y: &'b i32) -> &'b i32 {

and compile it. But what if that’s not what you wanted? What if that’s a bug? What if it should be

fn foo<'a, 'b>(x: &'a i32, y: &'b i32) -> &'a i32 {

?

It’s impossible for the compiler to determine what your intent was.

And yeah, maybe making the wrong choice here still wouldn’t compile, but that would lead to errors that aren’t actually errors: they’d be other things that are right that the compiler now thinks are wrong.

As such, function signatures are the primary way that you tell Rust what you want things to be. Everything else kinda flows from there. This is also why we don’t do inference on them either; it’s the same problem, but with types.


#13

I think I figured it out. Please tell me if I am wrong.

The “regular” (prescriptive) parts of Rust source code are what specify all the behavior for a Rust program: bugs or not, clever or stupid. That’s it.

Lifetime notations make no difference in program behavior, efficiency, memory use, etc. None.

Lifetime notations are completely superfluous–except for the fact that they are completely crucial.

There is a contradiction here! Rust knows more about data lifetimes than we do, yet it wants us to tell it about data lifetimes. What the heck is going on here?

The answer: there is a rat’s nest of complexity involved in debugging data usage in big programs (say, multithreaded C code), but it didn’t magically vanish because Rust was invented. It is all still there, but it was moved out to compile time.

Oh my God! What a scary thought.

Is that why the Rust compiler has a reputation for being ornery and difficult? Actually no, the Rust compiler is a magnificent paragon of simplicity and ease of use. At least when compared with some alternative Rust designs they might have tried to build.

One of the secrets in the design of Rust is specifying data lifetimes. While they don’t actually do anything, they help organize our programming efforts. With lifetimes, not only is it possible to write libraries and know they will work with code that is yet to be written, they also make it possible to work on one source file at a time. They help us write a program that isn’t yet complete.

When programming in Rust we will make mistakes, but by specifying lifetimes the Rust compiler can partition our mistakes. Are the lifetimes we specify in one place compatible with the lifetimes specified somewhere else? A big global reconciliation happens through the lifetimes we specify. And a local reconciliation is also organized via data lifetimes: is our mutable-this and referenced-that all consistent within this file?

Without explicit lifetimes (elided or not), the compiler would have to do far more work, and confront us with far more detail regarding each error.

With lifetimes, programmers can (mostly) reason about their code one file at a time. Lifetimes are a crucial part of the API contract between files. There is great organizational value in that.

Tentative advice, from one newbie to other newbies: Lifetimes are confusing, but don’t sweat that your lifetimes might do the wrong thing. Because they don’t actually do anything. They are just there to help coordinate the stuff that does do things. They do need to be correct, but once they are correct you will know it. Very vaguely, they are like C include files, which also need to be correct. But, unlike C includes, if you have built your whole program, and if the compiler is happy, then your lifetimes are also correct! (No way is that true with C.) They can be confusing, but they are a great help.

Comments? How much did I screw up?

Thanks,

-kb


#14

Someone must know whether I am wrong, right, or somewhere in between…

-kb


#15

This pretty much sums up my feelings.

Similar things are true for trait bounds, to a lesser extent; very few functions directly need them except those that e.g. call a method (in which case they are necessary for unambiguous name resolution). (I know this is a big oversimplification, but it fits most of my mental model, leaving aside things like Copy.)

While it is a bit of an annoyance sometimes (and can make refactoring a chore) that trait bounds get propagated everywhere, their greater purpose lies in the ability that you can reason locally about any piece of your code, without having to worry about how it will be used.

(But in comparison to traits, this locality of reasoning is the only purpose of lifetimes, to my understanding)


#16

The one other thing about lifetime annotations is that they are needed to make unsafe code isolated. For example, Vec’s methods need to specify that the references it gets from its internal raw pointer cannot outlive the vector.


#17

I have seen cases where lifetimes can get code to compile that would otherwise need an extra scoping block, and I can imagine similar things with unsafe code. Clearly there is code that will not compile without lifetime notations.

But I am guessing there are not cases where two different versions of lifetimes would give two different functional behaviors, I am still clutching to the hypothesis that lifetimes are descriptive and not prescriptive…true?

-kb, the Kent who would love definitive confirmation either way.


#18

Thanks for the traits bounds mention!

Yes, this seems key to understanding why they are and why they must be.

-kb, the Kent who is still tentative that this really be true.


#19

Lifetimes never cause functional changes. The worst-case scenario for unsafe code is that wrong lifetime annotations permit “safe” code that shouldn’t be able to compile. Only unsafe code can do that, because unlike safe code (which always gets a reference with its own already-baked-in lifetime), unsafe code can create a reference from thin air. So if it returns the reference to safe code, it needs to communicate how long it’ll last.


#20

My impression is yes. Lifetimes, like other trait bounds, are a form of documentation, but a form checked by the compiler. A fully inlined program, that is, all in main with no abstractions, needs very few type hints and no lifetime hints. When we make an abstraction the compiler requires us to document the types/lifetimes we expect. It then guarantees us that that documentation is correct, and that it is only used as documented.

Sometimes we document a more accepting abstraction than we intend; in that case the compiler tells us that we have a lifetime issue in our function. Other times we have made a more restrictive rule than we needed. If our code compiles, then we have followed our documentation, at both ends.

std had a non-optimal lifetime for binary_search_by; changing it allowed more code to compile.