Variance not being the whole story?

So I was playing with the variance rules, checking whether I understand anything, which turns out to be a... big NO. Consider this snippet (or the playground version, if you prefer):

#[derive(Debug)]
struct Foo<'f>(#[allow(unused)] &'f str);

impl Foo<'static> {
    fn is_static_shared(&self) {}
    fn is_static_exclusive(&mut self) {}
}

fn foo(_f: &mut Foo<'static>) {
    /*
     * This line fails to compile, as expected:
     * the fact that `Foo<'static>` and `Foo<'_>` have a subtype relation
     * does not imply a relation between `&mut Foo<'static>` and `&mut Foo<'_>`:
     * `&mut T` is invariant over `T`
     */
    // std::mem::swap(&mut Foo(&String::from("not static")), _f);
}


fn bar<'f>(#[allow(unused)] f: &mut Foo<'f>) {
    // group A and group B cannot coexist!
    
    let mut g = Foo("static");
    
    {
        // group A
        // g.is_static_shared();
        // (&mut g).is_static_shared();
        // (&mut g).is_static_exclusive();
    }

    {
        // group B
        std::mem::swap(f, &mut g);
        std::mem::swap::<Foo<'f>>(f, &mut g);
    }
}

fn main() {
    let main = String::from("main");
    let mut main = Foo(&main);
    bar(&mut main);
    foo(&mut Foo("static"));
    dbg!(main);
}

My question is around bar:

  1. If we enable only group B, it actually compiles just fine. From a generic "prevent dangling pointers" PoV, this is completely fine: we're replacing a Foo that may borrow from some other variables with one that borrows nothing from the context of program execution. But from a variance PoV things get tricky: we have &'_ mut Foo<'f> and &'_ mut Foo<'static>, and they are not subtypes of one another, given that &'a mut T is invariant over T, but somehow std::mem::swap thinks there's some T upon which both operand exclusive references agree via subtype coercion. Again, such a T should not exist, no?
  2. So I thought maybe I've made some wrong assumptions. Maybe g is not Foo<'static> in the first place. So I added group A. Group A alone, without group B, compiles just fine. It's only when both group A and group B are enabled that the compiler becomes confused.
  3. Last but not least, when both group A and group B are enabled, the compiler actually complains about code in A letting some variable escape the function body.
    • Which is kinda weird, since judging by the signature, the only possible place it might escape is actually group B...?
    • And then again there's the fact that group B alone, without group A, compiles just fine.
error[E0521]: borrowed data escapes outside of function
  --> src/main.rs:29:9
   |
20 | fn bar<'f>(#[allow(unused)] f: &mut Foo<'f>) {
   |        --                   - `f` is a reference that is only valid in the function body
   |        |
   |        lifetime `'f` defined here
...
29 |         (&mut g).is_static_exclusive();
   |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |         |
   |         `f` escapes the function body here
   |         argument requires that `'f` must outlive `'static`
   |
   = note: requirement occurs because of a mutable reference to `Foo<'static>`
   = note: mutable references are invariant over their type parameter
   = help: see <https://doc.rust-lang.org/nomicon/subtyping.html> for more information about variance

For more information about this error, try `rustc --explain E0521`.
error: could not compile `playground` (bin "playground") due to 1 previous error

I'm assuming the reason here is that Rust does the variance type checking and subtype coercion not just at function call sites, but also at local variable definition sites, specifically the definition of g, i.e. the let mut g = Foo("static"); line before both group A and B.

  • If only group B were present, Rust treats g as Foo<'f>, i.e. a subtype coercion happens here; this should be okay since Foo<'f> is covariant over 'f, and sunshine and rainbows.
  • If only group A were present, Rust treats the g as Foo<'static>, again all sunshine and rainbows.
  • If both group A and group B are present, the fact that &'a mut T is invariant over T rejects the code, just like it rejects the commented-out line in foo.
    • But then the compiler message seems a bit misleading here...?

Does such a statement, i.e. that Rust's variance-based subtype coercion happens at local variable definition sites in addition to function call sites, make any sense? If so, are there some materials around this aspect of Rust? Or maybe I missed something such that the statement is completely off and things just don't work like this?

I think what you are missing, besides the variance, is inference. Here, the lifetime 'f is an inference variable, and so is the elided lifetime in the type of g. When you only have "group B", the lifetime in the type of g is not necessarily 'static, since there are no other "outlives" restrictions on it.
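For illustration, a quick sketch (reusing your Foo, with the inferred lifetime written out explicitly): with only group B present, inference is free to pick Foo<'f> for g, and the swap type-checks.

struct Foo<'f>(#[allow(unused)] &'f str);

fn bar_only_b<'f>(f: &mut Foo<'f>) {
    // Writing the inferred choice explicitly: `Foo("static")` starts out as a
    // `Foo<'static>`, but covariance lets it be coerced to `Foo<'f>` here.
    let mut g: Foo<'f> = Foo("static");
    // Both operands are now `&mut Foo<'f>`, so the invariance of `&mut T` is no obstacle.
    std::mem::swap(f, &mut g);
}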

they are not subtype of one another

Unless the types are the same, it's not possible for types to be subtypes "of one another"; subtyping goes in only one direction. Once a type is decided for a variable, it's fixed; this includes any lifetime parameters. So let's look at what's happening in bar:

fn bar<'a, 'f>(f: &'a mut Foo<'f>) {
    // The type of this is decided based on inference, but _once_ decided,
    // it's fixed. If you call `is_static_shared`, then there is only one possible choice:
    // `Foo<'static>`. After that, you certainly cannot call
    // `std::mem::swap(f, &mut g)` since that would require `'f` to last
    // at least as long as `'static`.
    // If instead you don't call `g.is_static_shared`, then type inference
    // will assign this the type `Foo<'f>` which allows them to be treated
    // as subtypes of one another since they're the same type and thus
    // `std::mem::swap(f, &mut g)` will compile.
    let mut g = Foo("static");
    
    {
        // group A
        // g.is_static_shared();
        // (&mut g).is_static_shared();
        // (&mut g).is_static_exclusive();
    }

    {
        // group B
        std::mem::swap(f, &mut g);
        std::mem::swap::<Foo<'f>>(f, &mut g);
    }
}

I suggest you don't rely on type inference or lifetime elision when trying to learn these kinds of things. You can write simpler code that shows subtyping in action. For example:

struct Foo<'a>(&'a str);
fn foo<'a, 'b, 'c, 'd>(mut x: &'a Foo<'b>, y: &'c Foo<'d>) {
    // Won't compile, since subtyping only applies if `'c: 'a` _and_ `'d: 'b`.
    x = y;
}
fn bar<'a, 'b, 'c: 'a, 'd: 'b>(mut x: &'a Foo<'b>, y: &'c Foo<'d>) {
    x = y;
}

Edit

A more appropriate code example that is like core::mem::swap is the following:

fn bar<'a: 'c, 'b: 'd, 'c: 'a, 'd: 'b>(mut x: &'a Foo<'b>, mut y: &'c Foo<'d>) {
    x = y;
    y = x;
}

Of course that's essentially the same as:

fn bar<'a, 'b>(mut x: &'a Foo<'b>, mut y: &'a Foo<'b>) {
    x = y;
    y = x;
}

since the only way to have both 'a: 'c and 'c: 'a is for 'a = 'c (and likewise 'b = 'd).

2 Likes

The basic principle at play here is – essentially – type inference. You don’t specify the type of g so it’s inferred; inference of lifetimes is a bit special in that the compiler doesn’t mind ambiguities (i.e. it’s happy to determine "some sensible lifetimes can be chosen here", it doesn’t actually care if there are multiple different valid choices for the lifetimes).

The lifetime parameter in the type of g is not clear from its definition, either. This is probably because the argument of the Foo constructor call qualifies for subtyping coercion; so you're calling Foo as a constructor of Foo<'l> with some (unknown / to-be-inferred) lifetime 'l, on a &'l str argument [this argument is the result of subtyping-coercion on the original &'static str]. And probably in addition to this, the result of the constructor call, when being assigned to the variable g, also qualifies for subtyping-coercion, again. (This time though, the covariance of Foo's lifetime parameter matters for this.) You can however write explicitly let mut g: Foo<'static> = … to restrict it completely :wink:
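To make that concrete, a sketch (my own, not from the playground): with the explicit annotation, even group B on its own no longer compiles, because the invariance of &mut T now bites exactly like in foo.

struct Foo<'f>(#[allow(unused)] &'f str);

fn bar<'f>(f: &mut Foo<'f>) {
    // Pinning the type removes inference's freedom to pick `Foo<'f>` instead.
    let mut g: Foo<'static> = Foo("static");
    // Fails to compile: unifying `&mut Foo<'f>` with `&mut Foo<'static>`
    // would require `'f` to be `'static`.
    std::mem::swap(f, &mut g);
}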

The compiler errors around lifetimes can be confusing indeed. Basically, with multiple, overall contradictory constraints on the lifetime parameters in play, the compiler will often have to – pretty much arbitrarily – choose some place that's "at fault". It also (unfortunately) tends not to report on the (sometimes even rather complex, multi-step) deductions that connect lifetimes of different references/types (deducing them to be the same, or to have an outlives-relation).

Basically, your code in "group B" effectively makes the compiler deduce that g must have the type Foo<'f> with the exact same lifetime parameter 'f from the f: &mut Foo<'f> function argument. And group A makes the compiler deduce that g must have some type Foo<'l> such that “'l outlives 'static” (which in this case means as much as saying that “'l is the 'static lifetime”).

Both requirements taken together are contradict… well… almost contradictory. g cannot simultaneously have type Foo<'f> and type Foo<'static> now, can it? Actually, arguably it can, but only if 'f is the same lifetime as 'static. There's nothing preventing this from being the case, but bar is actually more generic, so in the view of the compiler, arguably we're just missing some 'f: 'static bound. Let's appreciate that we do not get an error message that actually suggests adding such a bound (because that would be weird code; if 'f was supposed to be 'static, we would have written 'static to begin with – not 'f).
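To spell out that weird code (a sketch, assuming the Foo and impl from the original post): with the bound added, Foo<'f> and Foo<'static> become the same type, and the two groups no longer pull g in different directions.

struct Foo<'f>(#[allow(unused)] &'f str);

impl Foo<'static> {
    fn is_static_shared(&self) {}
}

// `'f: 'static` forces `'f` to be `'static`, so both groups can coexist.
fn bar<'f: 'static>(f: &mut Foo<'f>) {
    let mut g = Foo("static");
    g.is_static_shared();      // group A
    std::mem::swap(f, &mut g); // group B
}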

3 Likes

Boy, you’d be surprised how tricky these things can get… TL;DR, your statement is technically wrong :sweat_smile: at least nowadays, but it’s by no means obvious…

Now, for all the details, here'd be some relevant context:

Basically, (arguably as a design decision aiding to resolve soundness issues), it’s now actually (and deliberately) the case that there exist types which are clearly distinct but also mutual subtypes of each other.

By the way, this does also have fun consequences on variance: Specifically, a requirement of invariance is now stricter than merely the "combination" of covariance and contravariance. More concretely (to demonstrate what kind of "combination" I’m referring to): If you use such a pair of mutual subtypes, One and Two, as type parameter for some type Ty<T> = (fn() -> T, fn(T) -> ()); then you can still coerce between the resulting Ty<One> and Ty<Two> types, but if you wrap it in a struct, like struct S<T>(Ty<T>), then that struct is inferred invariant, and the corresponding coercion for S is no longer allowed (between S<One> and S<Two>).

5 Likes

God, I fu**ing love your "technical" corrections. Honestly, once I typed that, I thought to myself "someone like steffahn is going to find a loophole, and I can't wait to learn". I still find myself falling into the habit of treating types as mathematical objects like sets or even groups, where in set theory (informal or not) two sets are subsets of one another iff they are the same. Similarly, two groups are subgroups of one another iff they are the same. Thanks again for the education. I'll take the time to read that tomorrow. I'm off to bed now.

3 Likes

So it's because the type of the local variable g isn't decided until inference kicks in, which relies on the surrounding context. And yes, it's nice that at least the compiler didn't emit even-more-misleading suggestions.

I guess such inference rules, such as at which exact step covariance kicked in to help determine the type of g, are more of an implementation detail of the Rust compiler and opaque to generic Rustaceans, or are there some reference materials?

As for the linked issue, I too had imagined subtyping to be more or less a partial ordering relation between types, but it seems it's not exactly a partial ordering... again, is there more context/material on this topic?

The depths of type inference are definitely more on the opaque side of things. At least regarding coercions, the reference does, at least in principle, call out all potential kinds of places where implicit coercions (such as subtyping coercions) could be introduced.

Of course that's way too many potential places, so it's useful to gain some intuition on the kinds of fallback rules that seem to apply (those aren't well documented, as far as I'm aware) to successively eliminate potential coercion sites in order to avoid type ambiguities[1]. Lifetime parameters don't create any ambiguity errors,[2] and thus when the types on each side of a coercion site are already known to match up to lifetime parameters, eliminating the coercion site wouldn't have any use (and thus the compiler doesn't do it).[3] This was the intuition/context I had in mind when calling out that the argument of the Foo constructor (the type of which is clearly defined up to lifetimes by the definition of struct Foo) definitely can be subtyping-coerced.
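As a tiny illustration of that last point (a hypothetical helper, not from the thread): the constructor's argument position is such a coercion site, so a &'static str can be subtyping-coerced down to whatever shorter lifetime inference wants.

struct Foo<'l>(#[allow(unused)] &'l str);

// The `&'l ()` parameter only exists to pin down `'l` for the example.
fn make<'l>(_pick_lifetime: &'l ()) -> Foo<'l> {
    // `"static"` is a `&'static str`; it gets subtyping-coerced to `&'l str`
    // right here, in the argument position of the `Foo` constructor.
    Foo("static")
}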


Another thing I would like to clarify about the code example from the original post is that the compiler is definitely not confused about any variant of this code. The confusing error messages are more of a consequence of the process of turning a borrow checking error into a (hopefully) useful error message.

Borrow checking happens remarkably far removed from the concrete code - by then it has already long been type checked, and even significantly transformed, into the mid-level intermediate representation (MIR), and then there's the borrow checker whose main job is to efficiently decide, essentially as a simple "yes" or "no", whether the lifetimes are all valid. So useful error messages essentially rely on the additional information, besides a simple "no", that a borrow checking error can generate[4]; they then also need to relate it back from the MIR to the corresponding places in the original code, and finally aim to (heuristically) generate error messages that are more useful in the common case - ones that aren't only talking about lifetime parameters, outlives relationships and conflicts all the time, but whenever possible make statements more closely related to practical program behavior.

E. g. "f escapes the function body here" was some heuristic determining that this might be a more useful way of framing what's going on than a more true-to-the-analysis observation like "the 'static lifetime in the deduced (missing) 'f: 'static requirement comes from the function signature of this function call". Arguably, in this example it didn't produce a good error message this way; but the thing that failed was only the "sugar" on top that tries to make error messages understandable for humans, not the analysis leading to there being an error in the first place.


  1. unless it has already become clear that the types there definitely mismatch, in which case a coercion is of course necessary, or you end up with a type error down the line, if no appropriate coercion can be found either ↩︎

  2. because they don't need to be inferred at all; the compiler just needs to collect all the relevant constraints and pass them to the borrow checker, after which the lifetimes are erased and not needed for further compilation; whereas types need to be inferred and are then needed for monomorphization ↩︎

  3. Specifically, a consequence of these principles is that when calling a function whose signature is concrete and non-generic except for generic lifetime parameters, its arguments tend to always qualify for implicit coercions (and for implicit re-borrowing). ↩︎

  4. yet, I wouldn't be surprised if the goal of keeping the successful case as efficient as possible results in some interesting intermediate deduction steps not being tracked / recoverable at all ↩︎

2 Likes

FYI this has an analogue in type theory/constructive mathematics, as you can have terms that are definitionally (and propositionally) different (just like we can have different types in Rust), but at the same time are logically equivalent (just like two types in Rust can each be a subtype of the other).

The issue is that Rust exposes definitional equality of types through TypeId, so it becomes unsound to treat subtyping equivalence as equality, because the types can be observably different. This could be fixed if we found a way to implement TypeId without exposing definitional equality, for example by finding a normal form with respect to subtyping equivalence which can be used as a base for TypeId; however, AFAIK this hasn't been done yet because it's hard to prove its correctness.

3 Likes

I'm a bit confused. Specifically,

thus when the types on each side of a coercion site are already known to match up to lifetime parameters, eliminating the coercion site wouldn't have any use (and thus the compiler doesn't do it)

May I say that the compiler does not necessarily carry out all the lifetime-related subtype coercions; rather, it merely checks whether the subtype coercion is possible, after which it just delegates the related information to the borrow checker to see if the borrows are indeed sound and reflect the intent of the code?

Specifically, consider the code in the original post with only group B enabled. Is it saying that the compiler does not need, nor spend the effort, to double down on when exactly the subtype coercion happens in the statement let mut g = Foo("static");: be it &'static str coerced to &'_ str, or Foo<'static> to Foo<'_>; it merely checks that both are legit, then just delegates the job to the borrow checker, rather than pin-pointing the exact type at each expression?
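For concreteness, a spelled-out sketch of the "argument coercion" reading (my own; the "whole value" reading, let mut g: Foo<'f> = Foo("static");, was already spelled out further up the thread) - both readings compile on their own:

struct Foo<'f>(#[allow(unused)] &'f str);

fn bar_only_b<'f>(f: &mut Foo<'f>) {
    // Coerce the constructor argument itself: `&'static str` -> `&'f str`.
    let s: &'f str = "static";
    // Now `Foo(s)` is directly a `Foo<'f>`, no further coercion needed.
    let mut g = Foo(s);
    std::mem::swap(f, &mut g);
}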

Unless explicitly annotated, there isn't a fixing of the lifetime. Each time the variable is used, its lifetime may be progressively added to, such as with a vector of references.

To fix your code to work with both A and B, add this in between:

let mut g = g;

May I say that the introduction of this line in between group A and group B shadows the old g, and that the new g's "vector of references" is now fresh and empty; in particular there's no 'static in it, thus the later std::mem::swap works just fine?

Yes, that sounds exactly like what I was trying to convey, thanks for putting it into your own words to confirm. Of course, "delegate the job to the borrow checker" doesn't mean the borrow checker comes to any kind of "decision" / "double down", either, because its job is merely to say "yes, it borrow-checks", not to generate any kind of new information (besides some context for the error message in case of a "no" answer).


Fun fact: the borrow-checker output is conveying even less information than this simple "yes-or-no", as it’s also a design-decision of the compiler that the borrow-checker’s “no” isn’t really a hard “no” either, but more of a "no, according to my current capabilities". In particular, the compiler (and language design) is currently set up in a way that there can never be any program behavior depending on borrow-checker output, even just depending on the simple "yes/no"-output. (Something that might e.g. seem reasonable would be language design of the form "this construct has semantics A; but if those won't borrow-check, instead fall back to semantics B".)

This is getting a bit lengthy, and arguably it’s somewhat off-topic so feel free to skip the rest of my text on this “fun fact” sidenote…

Put differently: When borrow-checking fails, this will always end with an actual compiler error; there's no recovering from borrow-check errors. The great value that this design gives is that it ensures future upgrades to the borrow checker are possible without breakage. This is how "non-lexical lifetimes" could change the borrow checker in the past, or how "polonius" can change it in the future, so that more safe Rust programs can be accepted.


This has the consequence that certain language design which would benefit from a "semantics X; but if those won't borrow-check, instead fall back" approach will have to work with more basic, practical approximations to achieve the same kind of effect. The two examples that come to mind would be: determining the capture mode of closures[1], and the rules for "temporary lifetime extension"[2]. Of course, a world where the compiler would be making these decisions based on actual borrow-checker output has different downsides: It could become near-impossible (or at least very hard) for a human to predict the program behavior, because the human would need to accurately understand/predict the borrow-checker. Also, you already noticed how parts of a function can be accepted individually, but not if combined; so a true "semantics X; but if those won't borrow-check, instead fall back" rule would need to figure out how to make use of a tool that checks whole functions at a time in a rule for determining the behavior of small(er) parts of the whole function at a time.

For the actual status quo, this means: the borrow checker (which is known to be a very complex thing) in Rust is fortunately playing a role where you do not ever need to develop a full mental model of it. It fully suffices to only have an intuition on "what kind of programs do always (tend to) pass the checker", which is important for being able to write Rust code that ends up compiling, and have an intuition on the kinds of unsound code (patterns) that it must reject, so that you can – hopefully – deduce from any borrow-checking error you encounter, some conclusion on what’s the underlying/“actual” problem that the code had, like the specific kind of UB or library-UB that it prevented[3]. The effect when this mental model does mismatch could be:

  • you're surprised that some code is accepted even though you never knew the borrow-checker could reason through it ~ encountering an example from which you learn that certain adjustments you make to how you structure your code to please the borrow checker weren't actually needed
  • your program is rejected but you can’t find any reason for why it should be unsound! ~ This could be because the code was unsound, but you missed it; or maybe it was involving UB cases in Rust you hadn’t known before; or it can very well be that you’re running into cases that are simply “limitations” of the current borrow-checker, e.g. code only polonius would accept, or beyond

  1. and indeed it’s possible to write example code of closures where the 'simple' rules determine "capture by mutable reference", but the borrow checker then later says "nope" and it’s an error, even though "capture by immutable reference" would have been legal and without borrow-check error ↩︎

  2. which are by-design, explicitly a set of rules that aim to capture – by syntactic rules – some cases where borrow checking would almost certainly fail down the line if the lifetime wasn’t extended ↩︎

  3. or the kind of UB or library-UB that it would be preventing there, if not for some extra properties of the program’s behavior which only you-the-programmer can prove and reason with, but which are fundamentally inaccessible to borrow-checking; e.g. the borrow-checker will assume that every while true { … } could actually terminate without panic (which is one aspect of why loop { } is valuable), or that any if false { … } actually could have its body executed anyway, because it doesn’t differentiate true/false expression from any other boolean-valued expression ↩︎
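A quick sketch of footnote 3's while true vs. loop point (my own example, not from the post above): the initialization analysis trusts that loop never falls through, but assumes that while true might.

fn with_loop() -> i32 {
    let x;
    loop {
        x = 1;
        break;
    }
    // Accepted: the only way past the loop is the `break`, which happens
    // after `x` has been assigned.
    x
}

// fn with_while() -> i32 {
//     let x;
//     while true {
//         x = 1;
//         break;
//     }
//     // Rejected: `x` is considered possibly-uninitialized here, because the
//     // checker assumes the `while true` condition could be false and skip the body.
//     x
// }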

3 Likes

Thx for the detailed walkthrough.

That the Rust semantics should essentially decouple from (not rely on) what the borrow checker accepts/rejects, plus that the borrow checker accepts only definitely sound code (while maybe rejecting some sound code due to its current capabilities): these two combined sound about right. In some sense, leave the implementation details as implementation details.

As for temporary lifetime extension, I've heard Rust 2024 has introduced some changes, but I haven't really dug into it. I'm wondering if it's a bad idea to acquire lock guards via temporary lifetime extension like this, but then again I haven't read through the exact changes and it's indeed somewhat off-topic... anyway, thx for your supplemental notes including NLL/polonius:

use std::sync::Mutex;

static GLOBAL_ANNOUNCEMENT: Mutex<String> = Mutex::new(String::new());

fn change_global_announcement(s: &str) {
    let locked = &mut GLOBAL_ANNOUNCEMENT.lock().unwrap(); // (abusing?) temporary lifetime extension when acquiring lock guard
    **locked = String::from(s);
}

fn main() {
    (0..10).for_each(|i| {
        change_global_announcement(&format!("{i}"));
    });
    println!("{}", &*GLOBAL_ANNOUNCEMENT.lock().unwrap());
}

I'm wondering if the : operator defines some partial ordering over the set of all the lifetime parameters, since, as indicated, the subtype relation isn't exactly a partial ordering relation...

I'm not sure what you mean. The type (which includes lifetimes) of a variable is fixed. At call sites that variable may be coerced into a different type, but that doesn't change the fact that the variable has a fixed type.

let mut g = g; is a little confusing because now you have two separate g variables that are completely different; thus the reason let mut g = g; allows the code to compile is because std::mem::swap is using a completely different variable whose type is different than the original g.

It is essentially the same reason why the below code is fixed:

fn foo() {
    let x = 10u32;
    bar(x);
    // If you comment out the below line, `fizz` won't compile since it
    // requires a `bool`. We create a completely separate `bool` variable
    // with the same name to fix it.
    let x = false;
    fizz(x);
}
fn bar(x: u32) {}
fn fizz(x: bool) {}
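Applied back to the original bar, the rebinding would look roughly like this (a sketch; the comments are my reading of the inference, so take them with a grain of salt):

struct Foo<'f>(#[allow(unused)] &'f str);

impl Foo<'static> {
    fn is_static_shared(&self) {}
}

fn bar<'f>(f: &mut Foo<'f>) {
    let mut g = Foo("static");
    // group A: this pins the first `g` to `Foo<'static>`...
    g.is_static_shared();

    // ...but the rebinding introduces a brand-new variable; covariance lets the
    // old `Foo<'static>` value initialize a new `g` whose type is `Foo<'f>`.
    let mut g = g;

    // group B: both operands are now `&mut Foo<'f>`, so the swap type-checks.
    std::mem::swap(f, &mut g);
}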

Admittedly my familiarity with type theory, category theory, constructive math, and intuitionistic logic is quite lacking despite assuring myself I will familiarize myself with them.

Even still, I suppose it's similar to how you can have two mathematical objects (e.g., topological spaces) that are isomorphic but are still technically distinct. Typically we use them interchangeably since "the structure" is all we care about. This seems to be related to the identity of indiscernibles (i.e., the mutual subtype relation is not as strict as "true" identity).

1 Like

Not sure if this helps or not, because I feel like I'm at the end of the road where there is not much else I can provide, but I'll make one last attempt. The type of g below is undecided without further context:

let mut g = Foo("static");

One may think g has type Foo<'static> since a &'static str is passed to the constructor, but that is not correct. We simply don't have enough information for type inference to decide. Once you call a function that requires a Foo<'static>, however, then the decision is made: it has to have type Foo<'static>, or perhaps more pedantically it has to have type Foo<'a> where 'a: 'static, since that is the only way g could be coerced into the Foo<'static> parameter of the function.

If you call a function that requires g to have type Foo<'f> (e.g., via the call to std::mem::swap), then the decision is made: it has to have type Foo<'f>. Is this possible? Certainly, since the &'static str we passed to the constructor coerces into a &'f str, because &'static str is a subtype of &'f str. The fact that we "actually" passed a &'static str is forever lost to the compiler though, as far as the type of g is concerned. Not too different from why let x: u32 = NonZeroU32::new(2).unwrap().get(); "loses" the fact that x is non-zero, and thus we can't use x as a NonZeroU32.

Clearly both things cannot be true since there is no 'f: 'static bound. Lifetime parameters really are just like type parameters with some "special sauce", such as subtyping and the fact that one cannot explicitly pass a concrete lifetime (with the exception of 'static). 'f is decided by the calling code, thus you can't use it in any way that is not guaranteed, just like

fn foo<T>(x: T) -> bool { x == 2u32 } is not valid, since I don't know anything (in particular whether T: PartialEq<u32>) about T.

Note that the caveat of “mutual subtypes ⇎ equality” applies to whole types, whereas simple lifetime parameters do in fact follow the principle that they are “equal” if (and only if) there's a mutual subtyping/outlives relationship.

Whether there’s any actual mathematical partial ordering going on depends a bit on what you think a lifetime parameter actually stands for.[1]


  1. I personally am inclined to question if it really stands for anything more specific at all, or if it’s rather just an abstract formalism on its own. Notably, while generic type parameters can stand for arbitrary choices of concrete types, generic lifetime variables don’t really stand for anything concrete one could substitute (besides the option of 'static). But the formal rules of working with lifetime parameters do certainly involve some form of the axioms for partial ordering, i.e. transitivity, and antisymmetry. ↩︎

1 Like

If you think about it some more, polonius-style examples actually make for a perfect example of how it's rather complicated what a lifetime stands for. Take this code:

use std::cell::Cell;

struct Invariant<'l>(Cell<&'l ()>);

fn create_invariant<'l>(source: &'l mut S) -> Invariant<'l> {
    Invariant(Cell::new(&()))
}
struct S;

fn use_it<'a, 'b>(first_try: bool, source: &'static mut S) -> Invariant<'static> {
    let returned_value = create_invariant(&mut *source);
    if first_try {
        returned_value
    } else {
        create_invariant(source)
    }
}

and think a bit about the question of “what is the type of returned_value?” :wink:
This code is accepted by polonius (not by the current borrow-checker though).


Another, related code example (accepted under polonius) would be:

use std::cell::Cell;

struct Invariant<'l>(Cell<&'l ()>);

fn create_invariant<'l>() -> Invariant<'l> {
    Invariant(Cell::new(&()))
}

fn use_it<'a, 'b>(choice: bool) -> Result<Invariant<'a>, Invariant<'b>> {
    let returned_value = create_invariant();
    if choice {
        Ok(returned_value)
    } else {
        Err(returned_value)
    }
}

(same question: what is the lifetime in the type of returned_value?)

This almost makes you want to rename the variables to something like

use std::cell::Cell;

struct CatInBox<'liveness>(Cell<&'liveness ()>);

fn make_cat_in_box<'liveness>() -> CatInBox<'liveness> {
    CatInBox(Cell::new(&()))
}

fn look_at_cat<'liveness>(_cat: &CatInBox<'liveness>) -> bool {
    rand::random()
}

fn experiment<'alive, 'dead>(choice: bool) -> Result<CatInBox<'alive>, CatInBox<'dead>> {
    let cat_in_box = make_cat_in_box();
    let is_cat_alive = look_at_cat(&cat_in_box);
    if is_cat_alive {
        Ok(cat_in_box)
    } else {
        Err(cat_in_box)
    }
}

doesn’t it?

2 Likes