The nomicon states: "One particularly interesting piece of sugar is that each let statement implicitly introduces a scope."
This makes sense (it seems cleaner than talking about "shadowing"), but are there any exceptions where that statement isn't completely accurate? Or is it literally and exactly the case that a let implicitly introduces a new scope?
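For concreteness, the reading I have in mind (my own sketch, not a quote from the Nomicon) is that

```rust
fn main() {
    let x = 0;
    let x = x + 1; // "shadowing"
    println!("{}", x);
}
```

is understood as if it were

```rust
fn main() {
    {
        let x = 0;
        {
            let x = x + 1; // the outer `x` is still alive, just hidden
            println!("{}", x);
        }
    }
}
```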
BTW, I found the podcast from Blandy and Orendorff informative for new users (maybe old ones too):
I guess this is the right place to ask such a question, but it seems nobody is sure how to answer. I'll try, but take it with a grain of salt, because I don't have an authoritative answer either.
It is true in the sense that you can use the variable from that point on, and that variables are destroyed in reverse order (if they have any kind of destructor). So it is at least a valuable mental model.
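A small sketch of what I mean (I believe the drop order comes out like this):

```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let a = Noisy("first");
    println!("made {}", a.0);
    let a = Noisy("second"); // shadows `a`; the first value is NOT dropped here
    let b = Noisy("third");
    println!("made {} and {}", a.0, b.0);
    // At the end of main, values are dropped in reverse declaration order:
    // "third", then "second", then "first" -- consistent with each `let`
    // having opened its own nested scope.
}
```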
Is there a specific situation that prompted the question, or are you just trying to understand in general what happens? If it's the latter, I think you'll develop your own mental model over time, and it won't be any worse than the way anyone else looks at how it behaves.
Another question is how it is implemented in the compiler. For that, I have no idea.
On the other hand, once NLL lands, it'll get more complicated, because a borrow can end sooner than the end of the scope. In that sense, the statement will no longer be true, or at least it won't feel true.
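For example, something like this is rejected under the current lexical rules, but my understanding is that NLL is meant to accept it (a toy example of my own, not from any official NLL document):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];     // immutable borrow of `v`
    println!("{}", first); // last use of the borrow
    // With lexical lifetimes the borrow lasts to the end of the block,
    // so this push is an error; with NLL the borrow ends after its last
    // use above, and the mutation is accepted.
    v.push(4);
}
```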
I'm happy with my mental model of let as a scope. However, I'd been told this model was incorrect, so I was trying to see what the flaw was.
I'm assuming the compiler doesn't push/pop symbol tables.
Thanks for the NLL pointer; that's a good direction for things to go in. I see what you're saying about how that could change the feeling. However, I look at it (perhaps wrongly) as bringing the "as if" rule from optimization to type checking. E.g., optimization passes can replace code C1 with C2 as long as C2 behaves "as if" it were C1 for programs that obey the language contract. You can extend this to type checking, where you accept a program P1 (even if it doesn't strictly type-check) if it must(*) behave the same as some program P0 that does type-check. In the case of NLL, by definition a variable v isn't read after its last use, so you can treat the last use as a kill.
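Concretely, the kind of P1/P0 pair I have in mind is something like this (my own toy example, not taken from the NLL RFC):

```rust
fn main() {
    p1();
    p0();
}

// P1: under the purely lexical rules this function is rejected, because
// the borrow `r` lexically lives until the end of the function.
fn p1() {
    let mut x = 0;
    let r = &x;
    println!("{}", r); // last use of `r`
    x += 1;            // error under lexical lifetimes, fine once NLL lands
    println!("{}", x);
}

// P0: the "as if" equivalent that already type-checks, because the borrow
// is explicitly scoped to end before the mutation.
fn p0() {
    let mut x = 0;
    {
        let r = &x;
        println!("{}", r);
    }
    x += 1;
    println!("{}", x);
}
```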
In any case, that's neither here nor there (and perhaps not correct either :). I'm glad Rust is heading in the direction of staying simple while becoming more usable. Extending the type system using facts derivable from dominance/post-dominance calculations seems very reasonable.