What, *exactly*, are lifetimes, in the mind of Rustc?

Lifetimes in Rust are often compared to pointers in C and C++: both features tend to trip up new users of each language, and both have a reputation for being difficult to wrap one's head around. When I learned C++, I was lucky enough not to have significant trouble with pointers, but the same is not true of Rust's lifetimes, for which I am having significant trouble building a precise and accurate mental model.

This difficulty has prompted reflection as to why C and C++'s pointers were relatively easy for me, and the conclusion that I have come to is that I had a precise explanation of how they work early on (e.g. a description of what computer memory is, which gives a good understanding of what a memory address is; pointers are just memory addresses, slightly glorified to fit C++'s type system). With Rust, however, I have not seen any such sources. Most sources that I can find talk about Rust's lifetimes in human-centric terms, describing variables as being active "during" other variables, when what I really need in order to understand them properly is to know what they are in the computer, or in Rustc's code and algorithms. During compilation, there is no concept of "during," only abstract syntax trees, LLVM IR, source code, and such.

As such, I ask: during compilation, what are lifetimes really, at the level of ASTs, etc.? If I wanted to go about implementing a similar system in a hypothetical compiler of my own, what would that look like?

Lifetimes are not the equivalent of pointers. Pointers are types and you can create pointer-typed values. Rust has many kinds of pointer-like types, the most obvious of them are references (&T, &mut T), raw pointers (*const T, *mut T) and smart pointers (Box<T>, Rc<T>, Arc<T>).

Lifetimes on the other hand are not types unto themselves (you can't create a lifetime-typed value). A lifetime is a property, or a component, of a type, and it is usually the most visible when dealing with references. However, all values and types have a lifetime.

Lifetimes don't actively do anything at runtime. They are merely annotations that the compiler uses for enforcing constraints, for example, which values you are allowed to use at what times, and how pointers to different values can be assigned and converted between each other. In this regard, they have a similar, albeit independent, responsibility as that of types proper.

For example, let's take the following code:

fn foo(x: &'static i32) {}

fn main() {
    let x = 42;
    foo(&x);
}
If you try to compile this, the compiler will generate an error because the lifetime of x is not 'static (it's a local variable), while the declaration of the function foo() requires that the value pointed to by its argument be 'static. The compiler can do this because lifetimes are attached (either explicitly or implicitly) to each type and value in the program.

Since you were asking about the level of the various intermediate representations in the compiler: a lifetime there is an annotation on the AST or the MIR nodes that describes at which points a value is valid. Formerly, a lifetime used to describe a lexical scope (which approximately corresponds to a block at the syntactic level), but nowadays the feature called NLL (Non-Lexical Lifetimes) allows a lifetime to specify a subspan of such a block.
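A small example (assuming current stable Rust) of what NLL buys you. Under the old lexical rules, the shared borrow below would have lasted to the end of the block, rejecting the push; with NLL, its region ends at its last use:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let first = &v[0];   // shared borrow of `v`: its region starts here...
    println!("{first}"); // ...and ends at this last use of `first`

    v.push(4); // OK under NLL: the region of `first` already ended,
               // even though `first` is still lexically in scope
    assert_eq!(v.len(), 4);
}
```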


I think part of the reason for this is that unlike pointers, lifetimes don't exist at all at runtime. You can write your own rust compiler without implementing a borrow-checker (and indeed this has been done).

You should think of lifetimes as part of the type system. It's like in C++, if you have two unrelated types, Foo and Bar, you can't just store a pointer to one in a variable which is typed as a pointer to the other. But at runtime both pointers will just be stored as numbers, and there's nothing stopping you from overriding the compiler with a reinterpret_cast.

As part of the type system, lifetimes are a contract between you and the compiler: the compiler only looks at function signatures when it's type checking (it doesn't look at the code inside the functions you are calling) so lifetimes are necessary to relate function arguments and return values.

For example, in C++ you have the strchr function:

const char * strchr ( const char * str, int character );

To find out how to use it safely you need to read the documentation, where you'll find that it doesn't allocate a new string: it returns a pointer into the string you passed in.

In Rust, the function signature might look like this:

fn strchr<'a>(s: &'a str, character: char) -> &'a str;

The lifetime 'a tells the compiler that the return value is borrowed from the argument s. The compiler uses this information when checking your code which calls strchr to ensure that you don't use the result after dropping or moving the thing you passed in as the s argument.
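For illustration, here is one way such a signature might be fleshed out in safe Rust. The empty-string fallback for "not found" is my own assumption (C's strchr returns NULL there; a real Rust API would more likely return Option<&str>):

```rust
// Sketch of a strchr-like function: returns the suffix of `s`
// starting at the first occurrence of `character`.
fn strchr<'a>(s: &'a str, character: char) -> &'a str {
    match s.find(character) {
        Some(i) => &s[i..],
        None => "", // stand-in for C's NULL return
    }
}

fn main() {
    let owned = String::from("hello world");
    let tail = strchr(&owned, 'w');
    assert_eq!(tail, "world");
    // Because the argument and the return value share the lifetime 'a,
    // the compiler rejects any use of `tail` after `owned` is dropped
    // or moved.
}
```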

If you wanted to make a function which did the same thing as strchr, but returns a pointer to a brand new string (let's call it strchr_alloc) the signature might be exactly the same in C++:

const char * strchr_alloc ( const char * str, int character );

But in Rust, the signature would have to change, to something like:

fn strchr_alloc(s: &str, character: char) -> String;

You can see now from the signature that the return type can outlive the argument that was passed in without any problems. You can also see that it's up to the caller to free that memory when it's done with it (in C++ you have to read the docs, or make sure to only use smart pointer types).
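A sketch of that owning variant (the search behavior mirrors the hypothetical strchr above, and the empty-string fallback is again my assumption):

```rust
// Owning variant: same search, but returns a fresh String, so the
// result is independent of the input's lifetime.
fn strchr_alloc(s: &str, character: char) -> String {
    match s.find(character) {
        Some(i) => s[i..].to_string(),
        None => String::new(),
    }
}

fn main() {
    let result;
    {
        let owned = String::from("hello world");
        result = strchr_alloc(&owned, 'w');
        // `owned` is dropped at the end of this block...
    }
    // ...but `result` is still usable: nothing in the signature ties
    // it to the argument's lifetime.
    assert_eq!(result, "world");
}
```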


I'm aware that they're not the equivalent of pointers in the sense of being memory addresses, but there is a sense in which they are. It is common wisdom (even if this wisdom is somewhat misguided) that pointers are the hardest part of learning C and C++. It's also common wisdom that the borrow checker, lifetimes, etc. are the hardest part of learning Rust.

This part of your response is moving in the direction of what I'm confused about. An AST is different from a flowchart or finite state machine of where the program can be, yet lifetimes, which somehow operate on an AST (the MIR, as far as I understand, is just a partially compiled AST), seem to reason about program states and program flow, which isn't part of the information that is in the AST (it's easy for a human to infer one from the other, but that doesn't give me a formal understanding of how Rustc does it). How is this seemingly non-trivial problem solved? Given a fully-desugared AST, what is the algorithm that can determine if its lifetime use is valid?

MIR is a control-flow graph (CFG) of the program, not an AST. Lifetimes are determined by the points in this CFG where variables are live. You can see all the details of how lifetimes work in RFC 2094, but I don't think there is any document that provides a simpler summary.


I presume you mean that the difficulty of understanding them has been compared, not that they are in any way conceptually the same.

As a C++ programmer you are well aware of what "life times" are in that language. For example consider:

int* someFunc(int a, int b) {
    int x;
    x = a * b;
    return &x;
}

As you know, the variable x in this function is local to the function. It comes into existence when the function is called (on the stack as it happens) and ceases to exist when the function returns. That is the lifetime of x.

As such you would never write such code in C/C++, because you know that whatever actual memory location x lived in will get recycled for something else when the function exits. That means the caller now has a pointer to a non-existent x. Bad things will happen eventually as the program proceeds and something tries to use that pointer.

Lifetimes in Rust are much the same. The difference is that in Rust you have to take care of it, otherwise your code will not compile, whereas in C/C++ you will find yourself having to take care of it when your program mysteriously crashes or produces randomly wrong results.
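A sketch of what that looks like in Rust: the literal translation is rejected at compile time, and the usual fix is to return the value itself rather than a reference to a local:

```rust
// The direct translation borrows a function-local value, so rustc
// rejects it at compile time:
//
//     fn some_func(a: i32, b: i32) -> &i32 {
//         let x = a * b;
//         &x   // error: cannot return reference to local variable
//     }
//
// The idiomatic fix is to return the value by value:
fn some_func(a: i32, b: i32) -> i32 {
    a * b
}

fn main() {
    assert_eq!(some_func(6, 7), 42);
}
```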

Of course things get more complex quickly, when you have pointers to allocated data inside structures and so on. Luckily Rust will nag you about all that before you have to worry about it after your code is deployed. By way of example, today I learned of Adobe releasing patches for 36 security vulnerabilities, most of which are down to lifetime issues. In C++, no doubt: Adobe issues patches for 36 vulnerabilities in DNG, Reader, Acrobat | ZDNet

Now, when it comes to the actual syntax and semantics of lifetimes in Rust, this is about the best explanation with a real-life code example I have ever seen: "Crust of Rust: Lifetime Annotations": Crust of Rust: Lifetime Annotations - YouTube. Well worth watching.


I think the OP has their terms confused, and means reference when they say lifetime:

  • 'a is a lifetime
  • &T is a reference
  • &'a T is a reference that is bound to the lifetime 'a

My terms are not confused. When I say lifetimes, I mean 'a. What makes you think that my terms are confused?

Seems like I misunderstood what you were asking.

References are often compared to pointers, but lifetimes are not; C++ has no notion of explicit lifetime annotations. References, meanwhile, are more like fancy pointers that know how long they can live for.


I attempted to clarify that point here:


Your terms seem confused to me because 'a is a lifetime annotation, not the actual lifetime of a data item. Rust data has the same lifetimes as C/C++ data, even if C/C++ syntax and semantics don't talk of such things.

This point is legitimate confusion; thank you for the clarification. It does not resolve the underlying misunderstanding, however. I think the RFC 2094 document that @jschievink linked will help, although due to its length it will take some time to get through before I know for sure.

First, borrow-checking happens separately for each function in the program.

Whenever a value is borrowed (i.e. a reference to it is created), that introduces a lifetime to the function, and the resulting reference is annotated with that lifetime. The lifetime covers a certain region of code, and this region is going to be the smallest region that contains all uses of values annotated with that lifetime. If a value annotated with a lifetime 'a becomes borrowed, creating a new lifetime 'b, then the region of 'b must be a subset of the 'a-region.

Lifetimes can also be introduced with a generic parameter on a function or surrounding impl block. Those lifetimes are simply considered to contain the entire function, and unless specified with lifetime bounds, not considered to be subsets of each other.

To borrow check, do the following:

  1. When a value is borrowed, that value must be valid inside the entire region of the lifetime.
  2. Do ordinary type checking using the fact that &'a u32 and &'b u32 are different types.

Some important points:

  1. The lifetime is not the lifetime of the reference that was created. The reference is allowed to be destroyed before the end of the region. The lifetime is merely an upper bound on where it can live.
  2. The lifetime is also not the lifetime of the value that got borrowed. The value may (and usually does) live longer than the end of the region. The lifetime is a lower bound on where it can live.

This ensures that references stay valid because the reference never leaves the region, and what the reference points to stays valid in the entire region.

Finally a value can be borrowed in two ways: Immutable and mutable. Mutable regions may not overlap with any other regions originating from the same value.
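A sketch of those rules in action: two shared regions may overlap, but a mutable region must not overlap any other region of the same value. This compiles only because each region ends at the borrow's last use:

```rust
fn main() {
    let mut n = 10;

    let a = &n; // shared region for `a` starts
    let b = &n; // a second, overlapping shared region: allowed
    assert_eq!(*a + *b, 20);
    // the regions of `a` and `b` end at their last uses above

    let m = &mut n; // exclusive region: allowed only because no other
                    // region of `n` overlaps it
    *m += 1;
    // `m`'s region ends here, so reading `n` directly is fine again
    assert_eq!(n, 11);
}
```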


Studying quantum mechanics is also hard, yet it is unrelated to pointers in systems languages.

Control flow graphs, SSA, abstract semantic graphs, and other kinds of program representations aren't magic. They have the same meaning, but lower-level representations tend to contain more explicit information about the particular kind of problem the compiler has to solve (e.g. control flow analysis on an SSA graph). This information is also present in the program in higher-level representations, it's just not explicit.

After all, the only input the compiler has is the source code. And, implicitly, the rules of the language hard-coded into the compiler. It then proceeds to enrich this almost unstructured representation through various levels of abstraction: ASTs, CFGs, three-address code, etc. Each pass of the compiler gets a hopefully appropriate data structure from the previous pass, on which it is easy to perform a given kind of analysis.

Or, in other words, in this sense, the AST doesn't contain less information than a fully-fleshed-out control flow graph; it just contains less data, which, however, is unambiguously reconstructible from the AST and the implicit rules of the language.

I assume you mean here that they are comparable in the sense that the programmer has somewhat similar obligations in how they have to reason about pointers and lifetimes, inasmuch as in both cases, one has to keep track of whether the pointer (at runtime) or lifetime (at compile time) is still valid.

The difference in difficulty, I think, is that in the one case one just has to write code where the pointer won't become invalid. But in the other, one has to write the code in such a way that not only is the lifetime of the reference valid, but the compiler can prove to itself at compile time that it's valid. And that's inherently a harder problem for a developer to reason about, because the compiler's notion of "proof" is much stricter and narrower than the informal reasoning a developer can apply.


To me the source of trouble was misinterpreting Rust references as equivalent of a C/C++ pointer.

  • Rust's references are more like compile-time locks. They are not for storing data by reference. They are for temporarily pinning data to a scope and ensuring it won't leave that scope.

  • In C pointers are for distinction between by-value and by-reference. In Rust references are for distinction between borrowed vs owned. Passing by reference is an orthogonal issue in Rust, e.g. Box is also a pointer and passes things "by reference" in the C sense.

    If you think of Rust references as "not copying" instead of "not owning", you have an incomplete mental model built around less relevant aspect of them.
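One way to see the "compile-time lock" framing, as a sketch: while an exclusive borrow is live, the owner is locked out, and the lock is released at the borrow's last use:

```rust
fn main() {
    let mut s = String::from("hi");

    let lock = &mut s; // `s` is now exclusively "locked" by `lock`
    // println!("{s}"); // would not compile: `s` can't even be read
                        // while the exclusive borrow is live
    lock.push('!');     // the "lock" is released after this last use

    assert_eq!(s, "hi!"); // owner usable again
}
```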

Now, lifetimes:

  • They don't do anything. They're assertions, not instructions. So if Rust says something doesn't live long enough, the code has to actually make it live long enough, and the compiler can't make that happen for you (there's no GC, there are no implicit heap allocations).

  • Lifetime annotations exist to allow tracing backwards from any reference all the way back to the owned value this reference ultimately came from, even if the reference has traveled through several layers of function calls, variables, indexing and temporary structs. My mental model is that every Rust reference is always on a leash, and that leash goes through lifetime annotations back to the owner holding it.


I don't think that is strictly correct. The lifetime 'a on both the argument s and the return value tells the compiler that both must have the same lifetime.

Yes, you can make the inference you did in this simple situation, but in more complex situations, it would not be possible. The OP is trying, sensibly, to build a mental model of how the compiler deals with lifetimes, what it uses them for. I believe that what I said above is strictly correct. And a key part of my own mental model of Rust.

For example, that strchr could also return a string literal -- &'static str reduces to &'a str by variance.

Yeah, this is consistent with my description of the rules. When borrow-checking a function that contains calls to strchr, both the argument and return value must be annotated with the same lifetime, thus the borrowed-from value is always valid while the return value exists.

In particular the return value is not a borrow from the argument — the argument can go out of scope before the return value does.

If strchr returned a shortened &'static str, then the rules are, in a sense, more strict than needed, but that's ok.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.