```rust
use std::thread;

fn main() {
    let v = vec![1, 2, 3];

    let handle = thread::spawn(|| {
        // error[E0373]: closure may outlive the current function,
        // but it borrows `v`, which is owned by the current function
        println!("Here's a vector: {v:?}");
    });

    handle.join().unwrap();
}
```
The main thread won't exit before the spawned thread is joined anyway. Even if the OS schedules that thread for later, the main thread is still blocked on the call to join.
That's a use case for thread::scope, which does the same thing your program is doing but has type signatures that allow you to capture non-'static data.
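For example, the borrowing version of the program compiles and runs under std::thread::scope (stable since Rust 1.63); a minimal sketch:

```rust
use std::thread;

fn main() {
    let v = vec![1, 2, 3];

    // scope guarantees that every thread spawned on `s` is joined before
    // scope returns, so the closure is allowed to borrow `v`.
    thread::scope(|s| {
        s.spawn(|| {
            println!("Here's a vector: {v:?}");
        });
    });

    // `v` was only borrowed, so it's still usable here.
    println!("Back in main: {v:?}");
}
```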
However, there’s a problem: Rust can’t tell how long the spawned thread will run, so it doesn’t know whether the reference to v will always be valid.
In other words, the compiler can't look at the call to join and reason about when the thread will be finished. It's not intelligent. It only uses the rules encoded in the type system.
It doesn't have to look inside the definition of join. It just needs to know that there is a call to join, which blocks until the child thread is done, and that the borrowed reference's scope comes to an end immediately after the call to join returns.
Maybe I am missing something here, but given that the compiler is intelligent in so many other ways, I thought these kinds of cases may not require as much intelligence.
This implies the compiler knows about join, i.e. either join or the "join" pattern is special and encoded in the compiler. This can make sense as long as you think of just this single case, but if you start adding special cases like these it won't scale. Instead, the approach is to make the language/compiler support a small number of generic features on top of which you can build all your libraries.
In the case of std::thread::spawn, the closure (and everything it captures) is required to be 'static, and there's no generic feature that lets the stdlib specify "well, unless you call this particular method on the value this function returns, in which case the data is only borrowed until that call"; that's very specific!
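For reference, std::thread::spawn is declared roughly like this in the standard library; the 'static bounds are what rule out borrowing from the caller's stack:

```rust
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    /* ... */
}
```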
std::thread::scope covers this gap by using a closure to ensure that join will be called. It then uses some generic lifetime bounds to ensure that whatever you borrow in the threads will be valid at least until it internally calls join.
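Concretely, std::thread::scope and Scope::spawn are declared roughly like this; the extra 'env and 'scope lifetimes are those generic lifetime bounds doing the work:

```rust
pub fn scope<'env, F, T>(f: F) -> T
where
    F: for<'scope> FnOnce(&'scope Scope<'scope, 'env>) -> T,
{
    /* spawned threads are implicitly joined before scope returns */
}

impl<'scope, 'env> Scope<'scope, 'env> {
    pub fn spawn<F, T>(&'scope self, f: F) -> ScopedJoinHandle<'scope, T>
    where
        F: FnOnce() -> T + Send + 'scope,
        T: Send + 'scope,
    { /* ... */ }
}
```

Because 'env must outlive 'scope, anything borrowed from outside the closure (like v) is guaranteed to stay valid until the implicit join at the end of scope.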
Apologies if this is an amateur question, but couldn't it be done like the Drop trait? Please correct me if I'm wrong, but unlike custom traits, isn't Drop a compiler-aware trait? Of course, there could be cases where such a trait for thread-join wasn't implemented correctly, which in turn could lead to bugs.
If you mean always calling join in the Drop implementation of the JoinHandle (the value returned by std::thread::spawn), then this is how it used to work before Rust 1.0, but the approach was found to be unsound because it's possible to leak the handle, i.e. cause the Drop implementation to never run. The whole story was named the "leakpocalypse".
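A minimal sketch of why that is unsound: safe code can always prevent a destructor from running, for example with std::mem::forget. (The Guard type below is hypothetical, standing in for the old pre-1.0 guard whose Drop joined the thread.)

```rust
struct Guard; // stand-in for a guard whose Drop would join a thread

impl Drop for Guard {
    fn drop(&mut self) {
        println!("cleanup (e.g. join) runs here");
    }
}

fn main() {
    let g = Guard;
    // Entirely safe code skips the destructor; Rc cycles and Box::leak
    // can do the same. If the validity of a borrow held by another thread
    // depended on this Drop running, we'd now have a dangling reference.
    std::mem::forget(g);
}
```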
But you haven't handled them, have you? That code, as written, contains bugs and can end up in a situation where it accesses memory that's already gone… and preventing precisely that is the whole point of Rust's existence.
It's not intelligent at all. It just follows the rules of the type system, and the rules of certain special traits.
If I understand correctly, you're suggesting that a new special trait for thread lifetimes could somehow be designed to solve this problem. I don't know how, but perhaps. However, the Rust developers decided to solve it using the thread scoping mechanism that @jofas posted above.
Because the join() method involves inter-thread communication, and the fastest way to communicate between threads is with atomic variables. So using Arc to wrap the data you want to share is usually better than relying on borrows plus the join() method.
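For instance, here is a minimal sketch of the Arc approach applied to the original program: each thread owns its own handle to the Vec, so no 'static borrow problem arises.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let v = Arc::new(vec![1, 2, 3]);

    let v_for_thread = Arc::clone(&v); // bumps an atomic reference count
    let handle = thread::spawn(move || {
        // The thread owns this Arc, so the Vec lives at least as long as
        // the thread does; no borrow of main's stack is involved.
        println!("Here's a vector: {v_for_thread:?}");
    });

    handle.join().unwrap();
    println!("Still usable in main: {v:?}");
}
```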