I'm not able to understand lifetime specifiers :(

A lot of your examples are talking about how lifetimes work within a function body, and how that interacts with lifetime parameters on structs. How lifetimes and borrow checking work in function bodies can be complicated to explore in depth, as it's almost entirely based on inference. Your code implies certain constraints between lifetimes (like 'a must outlive 'b, for a couple of lifetimes which are frequently inferred and not even actually nameable), and the compiler also checks the use of every borrowed value to see if it conflicts with the inferred active lifetimes.

From reading your question and responses, I think there's some confusion between these inferred lifetimes, lexical scopes, and how they tie in with lifetime annotations on data structures.

Let me modify the code in your OP and point some things out. The first thing I'm going to do is to switch from using Copy types like i32 to String, where the ownership semantics are clearer. (I won't deal with copies in this post, just moves.)

fn main() {
    // these variables are *declared* in scope1
    let v1 = String::from("10");
    let v2 = String::from("20");
    let v3 = {
        // this one is declared in scope2
        let v4 = String::from("30");

        // This moves `v1`'s value into this scope
        let v5 = v1;

        // v1 is no longer usable, it got moved out of
        // v2, v4, and v5 are usable here
        println!("{v2} {v4} {v5}");

        // This moves from `v4`; the value will then be in `v3`
        v4
    }; // v5 (containing the value originally in v1) drops here

    // v2 and v3 still usable
    println!("{v2} {v3}");
}

Just because a variable was declared in a given scope doesn't mean that it will be usable for the entire scope, or that the value in the variable can't be moved into a different scope.

When you move a variable, like

let v5 = v1;

then any borrows of that variable can no longer be valid. This can happen anywhere in your code, including in the middle of a block. If v1 had been borrowed by some r1: &'r1 String, the lifetime 'r1 could not be active when the move happens -- if something implied it needed to be, you would get a borrow check error. So in a valid program, 'r1 must end before that move -- even if that means the lifetime ends in the middle of a block. The general term for this analysis is non-lexical lifetimes (NLL).

For example:

fn main() {
    let v1 = String::from("10");
    let r1 = v1.as_str();  // &'r1 str ------------+
    {   //                                         |
        // 'r1 must still be valid here            |
        println!("{r1}");  //                      |
        //                                         |
        // 'r1 can't be active when `v1` moves +---+
        let _v2 = v1;

        // If you try to use it afterwards, you'll
        // get a borrow error
        // println!("{r1}");
    }
}

'r1 started in the outer block, was valid for the first line of the inner block, but isn't valid for the entire inner block and isn't valid in the outer block after the inner block either.

You could remove the inner block and the analysis would be exactly the same in this case.

The main interaction between lexical scopes (like these nested blocks) and lifetimes is that when a variable goes out of scope at the end of a block, it counts as a use just like moving the variable does. So the scope a variable is declared in puts a cap on how long a borrow of that variable might be -- at the end of the scope, you'll either be moving it to return it to a different scope, or you will have already moved it, or it will implicitly go out of scope and drop. But this is an upper limit for the lifetime. The actual lifetimes are all inferred by

  • How the borrows are used (e.g. printing r1)
  • Inferred or explicit constraints between lifetimes (like 'a: 'b)

Where do these other constraints come from? Some are based on how the borrows are created:

let a = &value;  // 'a
let b = &*a;     // 'b

Here b is a "reborrow" of *a through a: &'a _, and the reborrow comes with the constraint that it can't outlive the original borrow, so 'a: 'b is implied.
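With shared references the constraint rarely causes a visible error, so here's a small sketch of my own (not from the original post) using a mutable borrow, where the 'a: 'b constraint actually bites:

```rust
fn demo() -> String {
    let mut value = String::from("hi");
    let a = &mut value;       // mutable borrow, lifetime 'a
    let b = &*a;              // 'b: a shared reborrow *through* `a`, so 'a: 'b
    let copy = b.to_string(); // last use of `b` -- 'b can end here
    a.push('!');              // ok: 'b has ended, so `a` is usable again
    // Swapping the last two lines is a borrow check error: `b` would
    // have to stay alive across a use of `a`, which it reborrows from.
    let _ = copy;
    value
}

fn main() {
    assert_eq!(demo(), "hi!");
}
```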

But constraints can also result from which functions you call and which structs you use.

So consider this:

struct Same<'a>(&'a str, &'a str);

fn main() {
    let v1 = String::from("10");
    let v2 = String::from("20");
    let same = Same(&v1, &v2);              // Same<'same>
    let Same(r1, r2) = same;                // (&'same str, &'same str)
    let _v1 = v1;                           // 'same can't be valid anymore

    // Borrow check error for similar reasons as the last example
    println!("{r2}");
}

When we create same:

    let same = Same(&v1, &v2);              // Same<'same>
    //              ^    ^

Our definition says that these two lifetimes have to be the same lifetime. That means that if one of them becomes invalid, they both become invalid. The move of v1 means r1 can't be valid any more, and due to the constraint Same<'a> imposed, r2 can't be valid either. Hence the borrow check error.

But if we allow them to be different:

struct Diff<'a, 'b>(&'a str, &'b str);

fn main() {
    let v1 = String::from("10");
    let v2 = String::from("20");
    let diff = Diff(&v1, &v2);              // Diff<'r1, 'r2>
    let Diff(r1, r2) = diff;                // (&'r1 str, &'r2 str)
    let _v1 = v1;                           // 'r1 can't be valid anymore

    // But `'r2` can still be valid -- there is no constraint that the lifetimes
    // are the same anymore
    println!("{r2}");
}

This version compiles because we've removed the problematic constraint.

If you make this change:

-struct Diff<'a,     'b>(&'a str, &'b str);
+struct Diff<'a: 'b, 'b>(&'a str, &'b str);
+//            ^^^^

You will again get an error, because the lifetime of 'r1 has now become an upper limit for the lifetime of 'r2.

That being said, we have to admit that this is a relatively artificial construction. A single lifetime is usually sufficient for a struct that has only shared references like Same or Diff, because

  • it is covariant in the lifetime -- the lifetime can be "shrunk" automatically
  • references are automatically reborrowed in many places, and those can shrink too

For instance, here's the Same example modified to work. (Like we said, it can be complicated to explore in depth.)
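Since the linked playground isn't reproduced here, the following is my own minimal sketch of one way the Same version can be made to compile: covariance lets the single 'same lifetime shrink, as long as every use of both references ends before the move:

```rust
struct Same<'a>(&'a str, &'a str);

fn demo() -> String {
    let v1 = String::from("10");
    let v2 = String::from("20");
    let same = Same(&v1, &v2);      // Same<'same>, covariant in 'same
    let Same(r1, r2) = same;
    let out = format!("{r1} {r2}"); // last use of either borrow
    let _v1 = v1;                   // ok: 'same has already ended
    out
}

fn main() {
    assert_eq!(demo(), "10 20");
}
```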


Hopefully that lends some insight on how lexical scopes aren't the end-all and be-all of lifetime inference, and how different lifetime declarations on structs can lead to different constraints being inferred by the borrow checker.

Just like having one or multiple lifetimes can specify different requirements for a function signature, having one or many lifetime parameters on a struct can specify different requirements for its use. The borrow checker will infer constraints based on those requirements.

If the compiler can't infer lifetimes that satisfy all the use sites and all the lifetime constraints, it issues a borrow check error. The constraints created by how you choose to declare your struct are instructions to the compiler on how it should determine which programs compile or not.

7 Likes

I'm still not quite with you there. Lifetimes are not just in my head. They are a physical reality when the program runs.

As my program runs, during some time period some area of memory is considered to hold some meaningful data item. At other times that same area of memory may well be used to hold some other, different, meaningful data, or nothing meaningful at all. Clearly then, using that memory as if it were my meaningful data only makes sense for some period of time: the data item's lifetime. And of course any references to that data item only have meaning during that time.

Hmm... One could say the same about types and type checking. If I have a perfectly formed, working program, pretty much all those type ascriptions are redundant. If my code contains `x = y + z` where x, y and z are all the same type that supports addition, and my code always uses them correctly, then why would I need to specify types all over the place? Why bother with having the compiler do any type checking? They don't change the way the code works.

I'm sure you would not consider type specifications as mere comments to your code. They may not change how your program works; it would work the same even if the compiler forgot to do any type checking. What they do is put limitations on what you can write: they stop you making stupid mistakes like passing a string to a function that expects an integer.

Lifetimes are an extension of types. Now you get to say not just what a thing is but when and for how long it is valid as that thing. Lifetime ascriptions can't change your data's lifetime, but they do stop you writing code that uses things outside their lifetimes.

This is far from being mere comments in the code. Features like these in high-level languages are often not about enabling some magical super power for the programmer but rather limiting what the programmer can write, in an effort to a) increase the odds that the code actually works, and b) increase the odds that the code is understandable.

3 Likes

I think of lifetimes in terms of type erasure -- the effects of lifetimes are still there in the binary, but they aren't actually there in the sense that one could detect their presence with certainty. If someone were asked to disassemble a piece of code from a (safe) Rust program, they could just as well come to the conclusion that it was compiled from a well-written C program.

(For the sake of the argument I'm completely ignoring all other fingerprints like symbol names, language constructs that don't specifically involve lifetimes, etc).

1 Like

Nope. Types directly affect your program. If you write a + b, then the assembler output produced depends on the types you used.

Seriously? Then what assembler code would be produced for this function:

fn foo(x, y) {
  x + y
}

You need to know the types of x and y to meaningfully answer that question.

You don't need to do that. Rust is perfectly happy with this:

     let x = y + z;

But you have to specify some types, because not all of them can be inferred. If the bit pattern in y is 0x3f800000 and in z is 0x3f000000, then the result can be 0x7e800000 or 0x3fc00000 depending on whether you are looking at them as i32 or f32.
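Those bit patterns can be checked directly from safe code. A small sketch of my own, using f32::from_bits/to_bits, showing the two different "additions" on the same bits:

```rust
fn main() {
    let y: u32 = 0x3f80_0000; // the bit pattern of 1.0_f32
    let z: u32 = 0x3f00_0000; // the bit pattern of 0.5_f32

    // Same bits, two different "additions" depending on the type:
    let as_ints = y.wrapping_add(z);
    let as_floats = (f32::from_bits(y) + f32::from_bits(z)).to_bits();

    assert_eq!(as_ints, 0x7e80_0000);   // integer addition
    assert_eq!(as_floats, 0x3fc0_0000); // 1.0 + 0.5 == 1.5
}
```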

Except they do. And quite significantly in some cases.

Have you seen my other message here? Yes, I do consider type specifications in traits as “mere comments”… and extremely annoying ones, most of the time. Although useful in a few cases.

You can not remove all types but you can remove these extra markups very often and with very good effect for code readability. C++ and Zig are doing that just fine.

They do affect how code works, sorry. I have shown that above. And these places when they don't affect how code works... yes, I'm often annoyed Rust asks me to do that redundant work. Especially when it makes me repeat the same thing again and again for no good reason.

Yes. And I hate that. I'm perfectly happy with types which directly affect generated code. You, somehow, ignore these completely.

If they are not comments in the code, then show me an example, please. Here's the one for types:

pub fn foo(x: i32, y: i32) -> i32 {
    x + y
}

pub fn bar(x: f32, y: f32) -> f32 {
    x + y
}

These two functions are otherwise completely identical; the types are the only difference. Yet foo is translated into 0x8d 0x04 0x37 0xc3 while bar becomes 0xf3 0x0f 0x58 0xc1 0xc3.

Show me something like this with lifetimes and we may go from there.

And that's the critical difference. Types directly affect generated code. Yes, some operations may produce the same binary output. E.g. signed and unsigned types often produce the same bit sequence for x + y yet would still produce different ones for x < y.

If your program only adds and multiplies some 32-bit sequences in memory, you would never distinguish between i32 and u32, but if division or comparison is involved then you can.
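That distinction is easy to observe. Here is a sketch of my own illustrating it: one bit pattern that compares differently as u32 and as i32:

```rust
fn main() {
    let bits: u32 = 0xffff_ffff; // one bit pattern...
    let unsigned = bits;         // ...read as u32: 4_294_967_295
    let signed = bits as i32;    // ...read as i32: -1

    // Addition would be the same instruction either way, but the
    // compiler must emit a signed or an unsigned compare here:
    assert!(unsigned > 1);
    assert!(signed < 1);
}
```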

This means types are not “mere comments” (although restrictions placed on types in traits are “mere comments” and very annoying ones in many cases).

Lifetimes, on the other hand, never change anything in the output. The most you can achieve is to make the compiler accept or reject your code, but its output is fully determined by the code without the lifetime sigils.

At least that's the situation in Rust today. There have been some talks about allowing one to treat 'static differently from 'a, but AFAIK it hasn't led to anything just yet.
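As an illustration of that claim (my own sketch, not from the thread): these two functions differ only in their lifetime annotations and behave identically at runtime; the annotations only change which call sites the borrow checker will accept.

```rust
// Distinct lifetimes: the result only borrows from `x`.
pub fn first_diff<'a, 'b>(x: &'a str, _y: &'b str) -> &'a str {
    x
}

// One shared lifetime: the result is tied to both inputs.
pub fn first_same<'a>(x: &'a str, _y: &'a str) -> &'a str {
    x
}

fn main() {
    // Same behavior, same generated code for the bodies.
    assert_eq!(first_diff("a", "b"), first_same("a", "b"));
}
```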

I thought I would mention that lifetimes can affect the generated code. At least if higher ranked lifetimes are used. See for example this Rust Playground that makes use of std::any::TypeId.

I do?

Hypothetically speaking, if my program contains that foo() function then the compiler could infer the types based on how my program calls it. It could generate a foo() for the cases where I call it with integers, it could generate a different foo() for the cases where I call it with floats. And all manner of other foo()s for different types, perhaps using my own definition of + for them. Perhaps even catering for x and y of different types. In which case type ascriptions for x and y here become redundant.

Seems to me that lifetime ascriptions have a huge effect on the generated code. A simple example:

    let some_ref: &i32;
    {
        let x = 42;
        some_ref = &x;
    }
    println!("{}", some_ref);

In order to make that code work, the compiler would have to include some means of ensuring that x stays around long enough for some_ref to still be valid. The solution to that is garbage collection. As in JavaScript, for example:

    let some_ref;
    {
        let x = {a: 3, b: 4};
        some_ref = x;
    }
    console.log(some_ref);

Which Rust specifically does not want. Hence the need for lifetimes and lifetime ascriptions.

Of course the other solution is to ignore the problem and allow dangling references and such, and let the programmer debug it later, as in C and friends.

Nice correction. Note that it doesn't work with mrustc, which declares these two types equal. And the Rust language specification is not precise enough to say what should happen.

Indeed, the current rustc implementation sometimes leaks such differences into the runtime, but it's still not clear whether that's a bug or a feature.

Generates similar code to what you get in C or C++ if you use a Rust compiler which doesn't verify lifetimes (like mrustc). Your point?

Nothing of the sort. The compiler need only omit the borrow checker and everything would be “fine”. For some definition of “fine”, of course.

Yup. That's precisely what mrustc is doing. And that's a valid way of compiling Rust programs.

Yes, if your compiler ignores lifetime specifiers then your whole code becomes unsafe (in Rust terms) and it's now your responsibility to ensure that it's correct, but it's still possible to compile everything and, more importantly, correct Rust code is supposed to work identically in both modes (curious cases with HRTBs are currently underspecified, and it's unclear if it's valid to depend on TypeId to distinguish Foo and Bar in @Lej77's case).

Maybe eventually it will be declared that mrustc is a non-conforming compiler and an actual “valid” Rust compiler can not ignore lifetimes in HRTBs. Even if that were to happen, it would still be the same as with public/protected/private specifiers in C++: these, like Rust's lifetimes, shouldn't affect the generated code, but there are nasty corner cases where they do.

In C++ it's not even hypothetical. But even if you write such code and it is accepted, at runtime you would get two or more implementations of such functions. That means that types are not optional. They affect the generated code.

Lifetimes, on the other hand, are like private/protected/public specifiers in C++: they may make your program compilable or non-compilable, but [except in some rare and very unusual corner cases] they can be ignored both by the reader and by the compiler.

They don't affect program semantics; rather, the implication goes in the other direction: they have to describe those semantics well enough for the compiler to believe that your program is worth accepting.

They don't become redundant, they become implicit. They are still there, reflected in the generated code, even if they are no longer in the source code.

Lifetimes, on the other hand, are ignored by mrustc completely… and yet it can still compile Rust code.

You need to have different types for

type Foo = for<'a, 'b> fn(&'a u32, &'b u32) -> &'a u32;
type Bar = for<'a, 'b> fn(&'a u32, &'b u32) -> &'b u32;

Or you could type erase the former to dyn Any and then downcast it to the latter and cause a dangling reference in safe code.

(This example without a return value is more subtle as you can soundly call one in place of the other, so they should arguably be the same type.)
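For the concrete mechanism, here is a small sketch of my own (based on the behavior described in this thread) observing the difference with TypeId on current rustc:

```rust
use std::any::TypeId;

type Foo = for<'a, 'b> fn(&'a u32, &'b u32) -> &'a u32;
type Bar = for<'a, 'b> fn(&'a u32, &'b u32) -> &'b u32;

fn main() {
    // On current rustc these are distinct types with distinct TypeIds,
    // even though they differ only in how the higher-ranked lifetime
    // parameters are wired up. (mrustc reportedly considers them equal.)
    assert_ne!(TypeId::of::<Foo>(), TypeId::of::<Bar>());
}
```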

1 Like

The big question is whether you can use that to, somehow, cause unsoundness.

Note that even if your code relied on TypeId for these two types being different, it would still need to actually use that difference.

And because you can only discover that difference at runtime, this would mean that both sides of that if must accept both types.

To ensure that lifetimes actually affect the generated code, you need to invent something that affects something statically.

It's unclear whether this is achievable or not. And without that, this example is closer to comparing function addresses (which may return true or false depending on how your crates are processed and what compiler options are used).

That's const fn typeid, which would currently be unsound because the compiler doesn't agree with itself on which higher-ranked types are equal yet.

(I would view this as types affecting generated code and not lifetimes per se. The types in question are higher-ranked over lifetimes, but there are no actual inferred lifetimes involved; it's the constraints between the embedded lifetime parameters that determine the type, not any concrete lifetimes.)

Anyway, this conversation has strayed pretty far away from helping the OP. Sorry about that.

My point is that my example code is a simple case of:

  1. Create an object. x in this case.
  2. Create a reference to that object, some_ref.
  3. Dispose of the object. By simply exiting the inner scope in this case.
  4. Using the reference to the now non-existent object.

In order to prevent something bad happening here, the compiler has to keep the x in existence outside the scope, until the reference disappears. That requires adding allocations and a garbage collector. Or the compiler just says "no" and refuses to compile such code, which brings us to the world of lifetimes in Rust. Or the compiler could just compile it and let the programmer debug it later.

The upshot is that if we want robust code, lifetime rules and checking change the generated code enormously, as no allocations and no garbage collector are required.

I can't tell what you are getting at there. Not including the borrow checker would leave us in the same situation as C, which is totally not fine.

I'm not sure what mrustc has got to do with it. As far as I can tell mrustc does not check lifetimes; it will happily compile the use of dangling pointers. As it says, it "tends to assume that the code it's compiling is valid", and as such it is not a complete Rust implementation.

Yes, yes, the types become implicit and still exist in the generated code. Which is why I said the type ascriptions become redundant.

Again, as I understand it, mrustc also compiles non-Rust code. Stuff that actual Rust would reject.

Sure, but why is that relevant? You may consider it “extended” Rust.

  1. It compiles correct Rust programs correctly.
  2. And it also compiles some programs which are not valid Rust programs.

The important part is #1: if you can correctly compile valid Rust programs while ignoring lifetime specifiers and lifetimes, then said lifetime specifiers and lifetimes are an optional extra for the language; they do not affect the generated code.

Some type specifications are, indeed, redundant. Not all. But types are not redundant. They affect the generated code. Always. You can not generate code which adds two variables if you don't know which version of “addition” is supposed to be used.

Lifetimes are redundant. The compiler has its own similar-yet-different notion, but that one is not even exposed to the compiler user.

By that reasoning we have no complete Rust implementations whatsoever, because the regular Rust compiler also has a tool which can make the compiler “assume that the code it's compiling is valid”: pointers.

It's just a living existential proof of my core assertion: lifetimes and lifetime specifiers do not affect the compiler, they do not affect the generated code, and they can be ignored when discussing how a program works.

They exist solely to reason about code. Like comments, except that the compiler knows how to read and verify them too, not just humans.

And adding a borrow checker to C with Sparse-like annotations would make C as safe as Rust WRT dangling pointers without any changes to the C compiler. And no, that's not theoretical handwaving; it was actually done.

I'm not saying that lifetimes are not important or that we should abandon them. I'm just saying they are a separate, optional part of the program. They do not affect the generated code at all (at least for now, see @quinedot's clarifications); they exist solely to make sure the code can be verified by the compiler.

The upshot is that for safety we need proof of correctness if we want to use low-level language with RAII-style allocations and deallocations.

But verification of that proof of correctness is separate from program compilation. Sometimes you have separate tools (Ada has SPARK; seL4 used C, some ad-hoc scripts, and Isabelle), sometimes they are tightly integrated into the language core (Rust).

But they are still separate; they don't directly affect the generated code (although there are discussions about allowing them to do so, and even then it would only be in rare corner cases).

No, I consider mrustc to be a "lesser" Rust. It's got stuff missing. If it is not enforcing the rules of the Rust language definition how can it be an extension?

I know what you mean and I agree. But my point was a little bit different. If Rust did not have that lifetime checking, and if we wanted it to compile to code that did not randomly crash or produce garbage, then we would need to implement something different to handle dangling references and such. Like using allocations and a garbage collector. Or some kind of run-time checks. That would be very different code.

Quite so. I said as much in a previous post.

I don't see how Rust defining an escape hatch to unsafe code and even assembler makes an implementation of that incomplete.

Again, true. Except if we want robust compiled code something other than lifetimes would have to be used which would generate different code. As above.

That sounds great.

However let's not confuse a language definition with an implementation. C is a language defined by an ISO standard. C is not gcc. C is not clang. Or any other compiler plus proof check one comes up with.

I guess so. But if I want proof checked code I need to use the verifier and the compiler. Might as well call the whole thing my "compiler". Presumably they could be welded into a single program anyway. Of course the input language is no longer C, or whatever, because I need to add those annotations.

Easy: by the very definition of “language extension”. The set of programs accepted by mrustc is strictly larger than the set of programs accepted by rustc. That's a “language extension” by any sane definition.

Because this breaks your assertion that if a compiler “tends to assume that the code it's compiling is valid” then it's “not a complete Rust implementation”. A normal compiler assumes that code marked as “unsafe” is valid (even if it couldn't verify it); mrustc does the exact same thing WRT the whole code it compiles.

The scope of trust placed on the developer differs, but the consequences are exactly the same: unpredictable results if the rules are violated.

Note that not all programs accepted by mrustc and rejected by rustc exhibit undefined behavior. Some are actually correct; their correctness just doesn't match what you can express with lifetime specifiers.

Now please read what I wrote:

The Rust language contains two [almost entirely] independent parts:

  1. A low-level C/C++-like language with pointers and other such things, which describes to the compiler how data must be moved, what assembler sequences have to be generated, and so on.
  2. Lifetime specifiers, special rules, and that infamous borrow checker, which make a large part of code written in the language described in part #1 actually safe.

And #1 and #2 are, to a very large degree, separate entities. You can take #1 and use it separately (that's mrustc, more or less, and gccrs is currently in the same category) or you may add #2 to some other language (again, not a theoretical assumption: the AdaCore folks did precisely that with Ada).

The important thing is not that you may mash them together into some massive thing and call it a “compiler”. As the well-known RFC 1925 says: it is always possible to agglutinate multiple separate problems into a single complex interdependent solution. The important thing is that you may separate #1 and #2!

That's important because it has many implications.

If you understand that lifetime specifiers are not instructions for the compiler but an optional proof of correctness, then it immediately exposes attempts to tweak the lifetime sigils to convince the compiler to keep something around (as novices try to do) as useless: the compiler itself, that #1 entity that actually produces machine code, doesn't even look at your lifetime specifiers (except in HRTBs, and it's not yet clear what it should do there). Your program has to be correct first, and then you may try to add enough lifetime sigils to let the compiler know about its correctness.

It also explains why a correct program may be rejected by the compiler: the compiler doesn't verify whether your program is correct or not; it checks the proof of correctness attached to it!

And types… types can not be separated out in a similar way. The compiler has to know your variables' types or it wouldn't be able to generate correct code.

Types are optional add-ons in some other languages, like Python (with type annotations) or TypeScript (although there are corner cases similar to Rust's HRTBs). But not in Rust. In Rust, types are very much part of the language proper; you can not push them into a separate module and compile the program without looking at the types of variables. That just wouldn't work in Rust.

Dated, but this thread is too fun.

pub mod mymod {
    // type is &'static str .. e.g. a C-like pointer; will never be dropped
    pub fn foo() -> &'static str { "hello world" }

    // a head pointer+length to an object
    pub fn bar() -> Box<str> {
        let s:String = foo().to_string();
        s.into_boxed_str()
    }

    fn rand() -> bool { true }

    // we return the SHORTER lifetime of 'a and 'b (notice 'a + 'b)
    pub fn pick<'a, 'b, 'ab: 'a + 'b>(a_ptr: &'a str, b_ptr: &'b str)
        -> &'ab str
    where
        'a: 'ab,
        'b: 'ab,
    {
        if rand() { a_ptr } else { b_ptr }
    }

    // sub-pointer whose life matches the caller's (panics for clarity)
    pub fn baz<'a>(str_ptr: &'a str) -> &'a str { &str_ptr[1..] }

    pub struct Data<'a, 'b> {
        pub a: &'a str,
        pub b: &'b str,
    }
} // end module

fn main() {
    use mymod::{foo, bar, baz, pick, Data};
    'a: {
        let foo_ptr: &'static str = foo();
        let bar1: Box<str> = bar();
        let bar1_ptr: &str = bar1.as_ref();
        'b: {
            let bar2: Box<str> = bar();
            let bar2_ptr: &str = bar2.as_ref();
            let picked1_ptr: &str = pick(foo_ptr, bar1_ptr);
            let picked2_ptr: &str = pick(picked1_ptr, baz(bar2_ptr));
            let data = Data { a: picked1_ptr, b: picked2_ptr };

            let mut outer_x = 5;
            let answer = std::thread::scope(|s| {
                s.spawn(move || {
                     // notice we can borrow across thread boundaries, and ASSIGN across thread boundaries
                     outer_x = 6; // totally unrelated, but cool to see
                     pick(data.a, data.b)
                }).join().unwrap()
            });
            println!("answer = {answer}, {outer_x}")
        } // data and bar2 get dropped
    }
}

(( edited ))

So here, I'm randomly picking from strings in different lifetimes, and I'm passing across thread barriers (and I haven't compiled this code so I might have a bug or two), but notice: to the human, no data is used past its drop point, but the compiler can't prove that. More importantly, since this is a public API, the coder doesn't have the ability to write code that a compiler can prove doesn't violate lifetimes. Notice also that I desugared all the lifetimes; see how noisy those bastards get, lots of chained scopes. Auto-populating the template lifetimes is a life-saver; believe you me.

I could, however, be EvilCrashCorp, and write the following.

fn consume<T>(rx: Receiver<T>) { ... }
fn main() {
   use mymod::{bar, Data, pick}
   let bar = bar()
   let bar2 = bar()
   let data = Data { a: pick(&bar, &bar2), b: pick(&bar, &bar2) }
   let (tx, rx) = mpsc::channel();
   tx.send(bar); // no compiler error, bar gone, but so is data, data dropped!!
   println!("bar2={bar2:?}"); // perfectly fine
   thread::spawn(move || { consume(rx) })
   // println!("sent {data:?}"); // if you uncomment, code will NOT COMPILE
} // drop bar2

So through the magic of lifetime specifiers, the compiler can trivially look JUST at function signatures and see if you have a logically consistent flow. If you did NOT have lifetime specifiers, Rust would have no way of knowing that the last line is an error that would crash your code (e.g. all of C/C++ would allow that last line, and you'd have a race condition whether you COREDUMP'd or not). Consider this example for at least 10 minutes if you don't get it. I'm NOT an expert, but there are hundreds of real-world examples with complex tit-for-tat exchanges of pointers. Rust allows for this with a LOT of contract-coding (like the lifetime specifiers). And the compiler can trivially and efficiently verify the code can not corrupt memory via double-free, use-after-free (when memory is returned to the OS and the memory address is unmapped / COREDUMPable) or failure-to-free (except for std::mem::forget -- which just leaks memory as a feature to avoid transitive unsafe markers everywhere).

Happy coding...

You've got a lot of syntax errors in that code sample. Here's a cleaned-up version.

There are so many compilation errors in the code that I don't know it's going to be productive to analyze, but let's start with

// we return the SHORTER lifetime of 'a and 'b (notice 'a + 'b)
pub fn pick<'a, 'b>(a_ptr: &'a str, b_ptr: &'b str) 
   -> &'a + 'b str 
{ 
   if rand() { a_ptr } else { b_ptr } 
}

First of all, 'a + 'b in that position is not valid syntax.

Second of all, 'a + 'b is a union of the two lifetimes, not the smaller of the two lifetimes.[1] But what you really need here is the intersection of the two lifetimes,[2] which would be spelled like so:

pub fn pick<'a: 'c, 'b: 'c, 'c>(a_ptr: &'a str, b_ptr: &'b str) 
   -> &'c str 

If you try with 'c: 'a + 'b instead of 'a: 'c, 'b: 'c, you will see that neither 'a nor 'b can coerce to 'c (meet the 'a + 'b bound).
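Put together, here's a runnable sketch of that intersection signature (my own completion; the selector condition is arbitrary, since the rand() from the original wasn't defined):

```rust
// 'c is a lifetime that both 'a and 'b can shrink to: an "intersection".
pub fn pick<'a: 'c, 'b: 'c, 'c>(a_ptr: &'a str, b_ptr: &'b str) -> &'c str {
    if a_ptr.len() >= b_ptr.len() { a_ptr } else { b_ptr }
}

fn main() {
    let v1 = String::from("longer");
    {
        let v2 = String::from("x");
        // 'c is inferred as a region both borrows cover:
        let p = pick(&v1, &v2);
        assert_eq!(p, "longer");
    }
    // Using `p` out here would be a borrow check error: 'c must end
    // by the time `v2` is dropped.
}
```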


Then down in main there is maybe the same confusion?

  let picked1_ptr: &'a+'static str = pick(foo_ptr, bar1_ptr)

'a isn't nameable (and scopes aren't lifetimes to boot), and the + is again not valid syntax; anyway this can't work because the arguments to pick are references to local variables, so they can't be 'static. (The union of 'static with any other lifetimes is 'static.)


Given how many other errors there were to clean up before it was something the compiler could understand, I'm not confident that it has much relation to whatever you were trying to demonstrate, so I'll stop here.


  1. There may not even be a smaller of the two lifetimes per se. ↩︎

  2. if one lifetime contains the other, the intersection is the smaller of the two ↩︎

1 Like

fine fine fine.. so never submit code from memory again.. lesson learned. :slight_smile: I made an attempt to clean up the first call path. I think it preserved most of my intentions.

Apologies, as I haven't read the thread, but the question and first few responses brought this Crust of Rust explainer video to mind:

For me, working through motivating examples (like this video) helped me understand lifetime annotations a lot better.