Why does std::any::Any need static lifetime?

I am implementing a trait for tokio::net::TcpStream. The trait requires a function as_any which returns &dyn Any, and Any requires a 'static lifetime, but the TcpStream is not static at all: we read it, handle it, and close it. I am curious why Any needs to be 'static.

T: 'static doesn't mean "never dropped". String is 'static, and so is TcpStream.

'static is "contains no lifetimes" or "valid until program exit if not dropped". If you own a T: 'static value, you can hold onto it and use it forever; nobody else can invalidate it without you giving it away.

Given T: 'a, that doesn't state that T lives until 'a ends nor even that it dies when 'a ends. It is merely a bound that says that the value of type T will be valid while 'a is valid AND you hold onto the value.

To answer the OP's question, though: Any requires 'static because lifetimes are erased, thus the TypeId of &'a Data and &'b Data are the same. You can't put the lifetime scope into the TypeId, because two calls to the same function give it lifetimes with distinct begin and end points (which thus must not be allowed to unify) but run through the same code. And because functions can be recursive, the set of lifetimes potentially instantiated in a program isn't even computable, or even finite.
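A minimal sketch of what the 'static requirement buys you (the describe function is hypothetical, invented just for illustration):

```rust
use std::any::Any;

// Downcasting via `Any` only works because every type behind `&dyn Any`
// is guaranteed to contain no borrowed (non-'static) lifetimes, so a
// TypeId comparison alone is enough to recover the concrete type.
fn describe(value: &dyn Any) -> &'static str {
    if value.downcast_ref::<String>().is_some() {
        "a String"
    } else if value.downcast_ref::<i32>().is_some() {
        "an i32"
    } else {
        "something else"
    }
}

fn main() {
    let s = String::from("hello"); // String: 'static (owns all its data)
    assert_eq!(describe(&s), "a String");
    assert_eq!(describe(&42i32), "an i32");

    // let r: &String = &s;
    // describe(&r); // would not compile: &'a String is not 'static,
    //               // and TypeId could not tell 'a apart from 'b anyway
}
```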


Interesting, I thought 'static meant over the whole program lifetime; I never knew that String is 'static as well.

Per “the book”, ‘static means

One special lifetime we need to discuss is 'static , which denotes that the affected reference can live for the entire duration of the program.

… emphasis is on “can” (different from “is”).

This translates to ‘static being a “subtype” of ‘a because ‘static can be used anywhere ‘a is required (the opposite is of course not true, and thus the subtype relation).

The name is regrettable (many have said this); it has nothing to do with static memory. Furthermore, anyone new to Rust likely already has a strong, and correct, understanding of what static means. The “gotcha” is extending that understanding to the Rust lifetime context; using the same word does more harm than good (but once you learn the difference…).

It’s that much more confusing because static memory lives as long as the application. In contrast, the ‘static lifetime is always merely “long enough”. Its only relation to the scope (if you will) of the application is that the app sets the maximum duration of what ‘static could possibly ever be (self-evident, and thus rarely said explicitly, but it is the only, arguably narrow, basis for the overlapping terminology).


Thanks for the explanation! May I ask another question, not really related to this? I wonder why the Read trait is not implemented for tokio::net::OwnedReadHalf.

… type erasure? I understood that the compiler derives the lifetimes at the function level, once. The annotations are there only to disambiguate between two or more possibilities where some of the possibilities don’t “follow the borrowing rules” (said differently, the compiler assumes the worst case rather than presuming the best case, and thus asks the programmer to be more specific). There is no type to erase in the way the term is used to describe trait objects. No?

None of Tokio's IO resources implement the non-async Read trait. They implement the AsyncRead trait instead.

My favorite way to explain this is that T: 'a means "T must not have any lifetime annotations shorter than 'a". A type like String has no lifetime annotations, so it doesn't have any shorter than 'static.
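To make that concrete, here is a small sketch (hold_forever is a hypothetical function, not from any library):

```rust
// T: 'static accepts any type with no lifetime annotations shorter than
// 'static: owned types and 'static references alike.
fn hold_forever<T: 'static>(value: T) -> T {
    value
}

fn main() {
    let owned = String::from("hi");
    let kept = hold_forever(owned); // String has no lifetime annotations: OK
    assert_eq!(kept, "hi");

    let lit: &'static str = "literal"; // a 'static reference is also fine
    assert_eq!(hold_forever(lit), "literal");

    // let local = String::from("temp");
    // hold_forever(&local); // error: `&'a String` has a lifetime
    //                       // annotation shorter than 'static
}
```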


Yes, I just wonder why it does not have a method, say, into_std_read.

Cool, another Rust trick I didn't know about. Thank you, guys; I will mark the first correct answer as the solution :)

Would it be ok to say that any of the values that T depends on (its dependencies) cannot have a lifetime shorter than ‘a, whether elided or annotated?

Only to be that much more explicit, could I extend what you said to include:

… a missing lifetime annotation, not because the lifetime is elided, but because lifetimes only come into play with borrowed references to values in memory. String is an owned value, not borrowed, so it has no lifetime.

@alice One of the motivations for my perhaps overzealous addendum is that I can’t help but wonder: in order for the compiler to assess that the lifetime rules are being followed, a process that involves comparing lifetimes, does it create a proxy value for owned values, or does it just eliminate/ignore them? Said differently, does the assessment require including owned values?

My first thought is no, there is no need to include owned values. Lifetimes only become an issue when we have multiple refs to the same value, and what I want to conclude is that, furthermore, the relative lifetime values are not impacted by the presence of owned values. Therefore we can ignore owned values when debugging lifetime errors.

You are not wrong.

But that's obvious.

You are mixing lifetimes of types and lifetimes of variables.

Think about it: u8. One byte of memory with an arbitrary value, and as 'static as they come; it's hard to imagine something even more 'static. Not only may variables of type u8 live for the whole program lifetime, many of them live “forever”, since they are baked into the executable (mostly as parts of &'static str values).

But a language where every u8 had to live for the lifetime of the whole program would be truly strange indeed. And very unusual.

If a type has a certain lifetime, then a value of that type, or a variable of that type, may live for that lifetime.

But they may live for a shorter time, too, there's nothing wrong with that.


You can do this using SyncIoBridge. Another option is to read data into a Vec<u8>, then use the fact that references to a vector are Read.
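The second option relies only on the standard library: &[u8] implements std::io::Read, so a slice of the collected bytes can be handed to any blocking-reader API. A minimal sketch (SyncIoBridge itself lives in tokio_util and needs a runtime, so it is omitted here):

```rust
use std::io::Read;

fn main() {
    // Pretend this Vec was filled by an async read elsewhere.
    let data: Vec<u8> = vec![1, 2, 3, 4];

    // A byte slice implements std::io::Read, so it can stand in for
    // any synchronous reader.
    let mut reader: &[u8] = &data;
    let mut buf = Vec::new();
    let n = reader
        .read_to_end(&mut buf)
        .expect("reading from an in-memory slice cannot fail");

    assert_eq!(n, 4);
    assert_eq!(buf, data);
}
```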

I'm not completely sure what this even means. However, it sounds like it wouldn't include cases like PhantomData<&'a u8> that are not 'static since they have a lifetime annotation.

Well, the reason that String has no lifetime annotations is indeed that it is an owned value, but I wouldn't include this in the definition of T: 'a.

Lifetimes are generally always the duration of a borrow rather than a duration of a value.


In a function or generally any scope, does the conclusion of pass/fail lifetime checks depend in any way on the presence of owned values? …so a situation where I have both shared and owned values.

Well, every time you borrow from some variable var, the compiler will attempt to find a lifetime (region of code) 'a such that:

  1. Any use of a value annotated with 'a (or a sub-lifetime of 'a) happens before 'a ends.
  2. Any incompatible uses of var happen after 'a ends.

Here, an incompatible operation is stuff like "modifying the value" or "moving the value" or "the value goes out of scope".

If the compiler can find such an 'a, then the program is accepted; otherwise it is rejected.

So I think that's a yes? The check also depends on var.
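A minimal sketch of that search in action, with non-lexical lifetimes doing the work:

```rust
fn main() {
    let mut var = String::from("hello");

    let borrow = &var;           // the region 'a starts here
    assert_eq!(borrow, "hello"); // last use of the borrow: 'a can end here

    // This incompatible use of `var` (mutation) happens after 'a ends,
    // so the compiler finds a satisfying 'a and accepts the program.
    var.push_str(", world");
    assert_eq!(var, "hello, world");

    // Moving the `push_str` call above the last use of `borrow` would
    // make the constraints unsatisfiable and the program would be rejected.
}
```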

Thank you @alice . I appreciate your working through my convolutions :))

In the body of a function, I can see where there can be an interaction (influence) of owned values on lifetimes. However, I wonder if there is not a missed opportunity for describing a “separation of concerns”.

While it is difficult to fully describe functions separately, my question is: if I were to focus, as the compiler does, on the type signature of a function plus self, can a lifetime error be solved while ignoring owned values?

Separate from lifetime errors, we have ownership + aliasing related errors.

To level set: the lifetime features of Rust are separate from the ownership features (my understanding, and statement, would be: completely independent of/orthogonal to one another; is that wrong?).

They get convoluted and “made circular” because of what you have described (an improper-use error, e.g., mutation of an owned value while the ‘a lifetime of a borrow of that value remains in scope…). I say “circular” because I can “get it to compile” by not mutating the value, but instead… Or, I could make it compile by focusing on the lifetime issue.

The big “so what” for me here is this: I’m beginning to believe it might be possible to eliminate my often circular reasoning about how to solve “new” compilation errors (lifetime- and ownership-related errors). By process of elimination, changes to owned values in the type signatures are not the problem. Separately, ownership errors can be attributed to the scope structure in the body of the codebase.

Finally, this was inspired by this thread because the answers seemed to relate a lifetime issue to String, an owned value… increasing the chance of landing in a circular thought process.


The thing is, there isn't really such a thing as an "owned value". That's a pedagogical simplification that we use for conceptual clarity, but it's not something the compiler considers or needs to consider. What the compiler knows/cares about are the lifetime annotations of types and the (actual or potential) references each type transitively contains. "Owned types" are thus simply the types that don't transitively contain borrows. Consequently, they don't need to be "ignored". Thanks to vacuous truth, lifetime checking doesn't actively need to go like

  • Does this type contain any lifetime annotations?
    • if yes, check them
    • otherwise, don't care

Instead, the compiler can just check every lifetime annotation, and this includes the lack of opportunity for lifetime-related errors when there are no lifetimes involved.


You should probably be more precise about what you mean by owned value. Box<T> is generally considered an owning construct. Is Box<T>: 'static then? Not necessarily. T could be a &'short U of some sort, for example. In this sense, you can own something that's not 'static. [1]

Anything with a type or lifetime parameter that is not itself bound to be 'static can be parameterized into something with a non-'static lifetime. So if your definition includes "satisfies a 'static bound", it wouldn't apply in almost any generic context, which is a somewhat severe limitation.

You can create references to owned values (whatever the definition is). You can "own" references, in the sense of say a Box as discussed above. The lifetimes of such "owned" things can get intermixed with the lifetimes of other things (consider a Vec<&'a str> you push into or pop out of).
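For instance, a small sketch of the Box<&'short U> and Vec<&'a str> cases just mentioned:

```rust
fn main() {
    let s = String::from("hello");

    // `boxed` owns its heap allocation, yet the reference inside borrows
    // from `s`, so Box<&String> here is NOT 'static: it must not outlive `s`.
    let boxed: Box<&String> = Box::new(&s);
    assert_eq!(boxed.len(), 5);

    // An "owned" Vec whose lifetime is nonetheless tied to `s`.
    let mut v: Vec<&str> = Vec::new();
    v.push(&s[..2]);
    assert_eq!(v.pop(), Some("he"));
}
```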

Whatever your definition of "owned", I'm going to presume it includes things that get dropped when you exit a function (or other drop scope).

Types with a destructor contain a use (via &mut self) at the end of their drop scope, which is also considered to be able to observe any lifetimes that are part of its type. A generic type parameter is also assumed to have a destructor [2]. This can result in "used here, when foo is dropped" related borrow check errors.

This may include types that look "owned" in some sense, like in a fn f<T>(_: Thing<T>).
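A sketch of that effect (Noisy is a hypothetical type invented for the example):

```rust
// A type whose destructor observes the borrow it holds.
struct Noisy<'a>(&'a String);

impl<'a> Drop for Noisy<'a> {
    fn drop(&mut self) {
        // The borrow 'a must still be valid here, at the end of the
        // drop scope, even if `self.0` was last "used" long before.
        println!("dropping, can still see: {}", self.0);
    }
}

fn main() {
    let s = String::from("hi");
    let n = Noisy(&s);
    assert_eq!(n.0, "hi");
    // drop(s); // error: cannot move out of `s` while it is borrowed;
    //          // the borrow "might be used here, when `n` is dropped"
}
```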

You haven't really defined the split, but I'm not thinking of any definition where I would consider them independent.

I'm not certain, but perhaps the "circular thinking" comes up because you're thinking in terms of causality, whereas the borrow checker is more of a constraint-satisfaction solver (even though the errors are presented as "A because B" as an attempt to make things understandable).

For example, does a reference with an active lifetime preclude use of the value referred to, or does the use of the value cut short the lifetime of the references? Was the error "because" I used the owned value in one spot, or because I used the reference at a later spot? You can think of it either way, but really the problem is: "the constraints are unsatisfiable; these two cannot coexist."

  1. Even if there are no references involved... ↩︎

  2. unless bound by Copy ↩︎


Helpful, especially as I dig into the next post, which argues I need more precision about what I mean by the term ownership.

Got it.

... every elided and explicit lifetime annotation. Got it.

This is a useful term. I was tempted to use the term "null" to describe a lifetime value for whatever I mean by owned types. All things considered, that is now moot, because the value is never found/included in the lifetime analysis anyway.

Modulo my perhaps ambiguous meaning of ownership (per yours and the next post), it looks like I can conclude as I have for function type signatures: adding or subtracting owned values will not, in itself, impact the lifetime analysis (by definition of not even being part of the analysis in the first place).

That is a useful anchor point whilst navigating these interrelated error types.

Perfect extension of the thought as it applies to parameterized types. Perhaps what I was saying only holds for concrete, non-parameterized types? (I think in the end it holds)

My understanding of an important corollary to the Rust lifetime rules, plus a helpful decision of the compiler to exclude UB from its range of options to consider, is that the container must inherit, or be cast to, a type with the lifetime of the values it wraps in order to avoid UB. No choice (I come to how this fits with drop in a bit).

So while the combination of Vec as an owned value with a 'short U is theoretically possible, to avoid UB the Vec can only ever live as long as the values it wraps (blur your eyes over how an empty Vec remains consistent with this idea). So I can instantiate the 'short U before I instantiate the memory for the Box, but not the other way around (I could say the same for a Vec, where the type of what it hosts needs to be known when it is instantiated). Using the word "owned" to describe a Vec that hosts a 'short U seems inconsistent with what is permitted in safe Rust. I wonder what the reasoning is for calling it an owned value...

Box<T> in Rust does marry ownership expectations to memory on the heap; ownership = "given I'm the owner, I'll call drop":

Boxes provide ownership for this allocation, and drop their contents when they go out of scope (source)

... but neither can it ever escape the lifetime constraints of the values it "owns".

So unlike the ability to avoid needing to know the size at compile time by using a reference (like Box), the only way for Rust to avoid UB when something like a Box hosts references is to do me "a solid" by always casting the Box (if you will) to a type with whatever lifetime "does the job".

Between the need to live long enough to call drop, and not living longer than its contents, the lifetimes have to match (where they don't is "invisible" to my safe Rust code).

To your point about the limited capacity to attribute causality (great point), the "used here, when foo is dropped..." error is likely easier to solve when modeled as an aliasing issue: how could I have referenced a value after it was dropped if there was not an alias present that tried to read the dropped value?

All of your points linking my use of "circular" to a hunt for causality is spot on... including the limits therein. Period. That said, I have a bias that attributing causality is useful, that a construct for causality in a collection of constraints is possible. A "generic" requirement for causality is sequence, A happens first, that later causes B. A similar sequence can be constructed by order of precedence: in order for A to work, I have to avoid UB. In order for B to work, A must work, which requires I avoid UB...

To resolve where both A and B must work simultaneously, I was hoping to identify that only one need work depending on the context, where I am in the code base: "the only way for A to fail, is if the function type signature is wrong" (causing a "does not live long enough" error), "the only way for B to fail, is if the scoping in the body of the function is wrong" (causing a "use after drop" error). In that way, in each context, I only have one thing that needs to work. I may not have succeeded. Your point is the dominant useful view of things. However, when dealing with the hyper "same but actually really different" type of concepts such as discussed here, modeling some sort of linear sequence of causality may be sufficiently useful to warrant the extra layer of abstraction :))
