Can a detached thread panic after main is over?

The code below implements an infinite generator that produces odd numbers in pipeline fashion. In the thread labelled filter, I explicitly call panic!() if there is an error while receiving a value from upstream, because an infinite generator should never run out of values (an overflow in produce is not a problem; it simply restarts the generator). Normally, if no more values are needed, the caller drops the receiver that inf returns, which causes the resources associated with the pipeline to also be dropped. But what happens if main finishes while r is still around? Is it possible that the produce thread is shut down first, and then filter calls r1.recv() and panics because the corresponding Sender has already been dropped?

use std::{
    sync::mpsc::{sync_channel, Receiver},
    thread::spawn,
};

fn inf() -> Receiver<u8> {
    let (s1, r1) = sync_channel(1);
    let (s2, r2) = sync_channel(1);

    // produce
    spawn(move || {
        for i in 0.. {
            if s1.send(i).is_err() {
                return;
            }
        }
    });

    // filter
    spawn(move || loop {
        let i = match r1.recv() {
            Ok(msg) => msg,
            Err(_) => panic!(),
        };
        if i % 2 != 0 {
            if s2.send(i).is_err() {
                return;
            }
        }
    });

    return r2;
}

fn main() {
    let r = inf();

    let results: Vec<_> = r.iter().take((u8::MAX as usize + 1) / 2).collect();
    assert_eq!((1..=u8::MAX).step_by(2).collect::<Vec<_>>(), results);

    let results: Vec<_> = r.iter().take(2).collect();
    assert_eq!(vec![1, 3], results);

    let results: Vec<_> = r.iter().take(3).collect();
    assert_eq!(vec![5, 7, 9], results);
}

r is dropped at the end of main, as usual. The only way to have the problem you describe, AFAIK, would be an explicit leakage, for example, via mem::forget.


No, there's no cleanup process. If main finishes while other threads are still running, those threads are simply aborted without any cleanup at all: no destructors are run, and so on. Hence, there's also no possibility of the producer thread being "shut down first", because no "shutting down" happens; the remaining detached threads just stop executing when the process exits.

If you don't want this, i.e. if you prefer your destructors to run (which isn't all that necessary in this case, because no resources are involved that the OS wouldn't release anyway), then you could have inf() also return some handle that allows joining both threads (or perhaps joining just the producer thread would be enough 🤔). Since you expect the threads to shut down when the receiver is dropped, the main function could then manually drop the receiver first and then wait for the threads to shut down. Or you could combine the Receiver and the thread handles into a custom struct that does this dropping and joining in its destructor.
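A minimal sketch of that last suggestion (the `Pipeline` name and its layout are just made up for illustration): the destructor first drops the `Receiver`, which makes the threads' `send` calls fail and return, and then joins both threads. The filter thread here exits on a recv error instead of panicking, so the joins succeed after a clean shutdown:

```rust
use std::{
    sync::mpsc::{sync_channel, Receiver},
    thread::{spawn, JoinHandle},
};

/// Combines the pipeline's output end with its thread handles.
struct Pipeline {
    // Option so Drop can take the receiver out and drop it early.
    receiver: Option<Receiver<u8>>,
    handles: Vec<JoinHandle<()>>,
}

impl Drop for Pipeline {
    fn drop(&mut self) {
        // Dropping the receiver makes the threads' sends fail…
        drop(self.receiver.take());
        // …so joining them afterwards doesn't block forever.
        for handle in self.handles.drain(..) {
            let _ = handle.join();
        }
    }
}

fn inf() -> Pipeline {
    let (s1, r1) = sync_channel(1);
    let (s2, r2) = sync_channel(1);

    let produce = spawn(move || {
        for i in 0.. {
            if s1.send(i).is_err() {
                return;
            }
        }
    });

    let filter = spawn(move || {
        // Exit on a recv error instead of panicking, so that a
        // deliberate shutdown doesn't unwind.
        while let Ok(i) = r1.recv() {
            if i % 2 != 0 && s2.send(i).is_err() {
                return;
            }
        }
    });

    Pipeline {
        receiver: Some(r2),
        handles: vec![produce, filter],
    }
}

fn main() {
    let pipeline = inf();
    let results: Vec<u8> = pipeline
        .receiver
        .as_ref()
        .unwrap()
        .iter()
        .take(3)
        .collect();
    assert_eq!(results, vec![1, 3, 5]);
    // `pipeline` is dropped here: receiver first, then both joins.
}
```

With this, main no longer exits while detached threads are still running; the destructor waits for both of them.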

Note that in debug mode currently, the 0.. iterator will panic on overflow, so in this case the filter thread will fail to receive and panic, too.
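To make the "overflow simply restarts the generator" behavior hold in debug builds too, one option (a sketch, not from the original code) is to cycle an inclusive range instead of relying on `0..` overflowing:

```rust
use std::sync::mpsc::sync_channel;
use std::thread::spawn;

fn main() {
    let (s, r) = sync_channel::<u8>(1);

    spawn(move || {
        // `(0..=u8::MAX).cycle()` restarts from 0 after 255 in both
        // debug and release builds, unlike `0u8..`, whose increment
        // panics on overflow in debug mode.
        for i in (0..=u8::MAX).cycle() {
            if s.send(i).is_err() {
                return;
            }
        }
    });

    // Read past one full pass to show the generator restarts.
    let wrapped: Vec<u8> = r.iter().skip(256).take(3).collect();
    assert_eq!(wrapped, vec![0, 1, 2]);
}
```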


It might be theoretically possible, after main has returned and while things are being cleaned up, for the producer thread to be aborted while the filter thread is still running.

But I don't think it matters, because the worst that can happen is that the filter thread panics and starts to unwind before it gets aborted by the main thread exiting, which was about to happen anyway. This isn't possible, though, since the Sender doesn't get dropped when the thread is shut down.

What do you mean by "aborted" and how do you think this could result in the filter thread reaching that panic?

When the process terminates, it's not necessarily the case that all threads stop running at the same time. Is it? I don't believe std makes that guarantee, and I'm not familiar enough with how it works on either Windows or Linux to say for sure if this failure mode is possible on one of those platforms. But it stands to reason that when you have 3 threads running on 3 CPUs, and main returns causing the process to terminate, the other 2 threads might be killed off in either order. I wouldn't like to assume that process shutdown is instantaneous.

The usual way to terminate threads is to send them a message asking them nicely to terminate themselves, and then join.
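A minimal sketch of that pattern, using a dedicated shutdown channel (all names here are made up for illustration): the worker polls the channel and treats either a message or a disconnect as the request to stop, and the controlling thread sends the message and then joins:

```rust
use std::sync::mpsc::{channel, TryRecvError};
use std::thread::{sleep, spawn};
use std::time::Duration;

fn main() {
    // Sending on (or dropping) `stop_tx` is the "please terminate"
    // message.
    let (stop_tx, stop_rx) = channel::<()>();

    let worker = spawn(move || {
        let mut ticks = 0u32;
        loop {
            match stop_rx.try_recv() {
                // A message or a disconnect both mean "shut down".
                Ok(()) | Err(TryRecvError::Disconnected) => return ticks,
                // No shutdown request yet: do one unit of work.
                Err(TryRecvError::Empty) => {
                    ticks += 1;
                    sleep(Duration::from_millis(1));
                }
            }
        }
    });

    sleep(Duration::from_millis(10));
    stop_tx.send(()).unwrap(); // ask the thread nicely…
    let ticks = worker.join().unwrap(); // …then wait for it
    println!("worker did {ticks} units of work before stopping");
}
```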


I get the argument that the threads don't all stop running at the same time. This doesn't have to mean that (the "shutting down threads" part of) "process shutdown" isn't instantaneous, in the sense that one thread might already be shut down while another is still running. The reason is: AFAIK, there is no procedure for "shutting down" threads; shutting down a thread doesn't have any effect, and if it doesn't do anything, it can be truly instantaneous. You could declare some point in time after the last thread stops executing to be the instant at which all threads are considered to have shut down. Before that, all the threads would still be considered running, just not actively being executed by the scheduler.

If the producer thread already permanently stopped running while the filter thread still is running, there is no way the filter thread could know that the producer thread stopped running.

Of course, shutting down a whole process isn't instantaneous. AFAIK, e.g. files held open by the process will be closed, its RAM will of course be freed, etc., and, admittedly, I don't know much about the details of this myself either. I would strongly suspect that any of these cleanup actions happen only after all the threads of the process have stopped running.


Talking some more about why the filter thread couldn't panic: the r1.recv() call can only error due to the corresponding Sender being dropped if that Sender is actually dropped. If the Sender lives in the producer thread and that thread just stops executing, no destructors are executed, and thus the r1.recv() might block but can't fail.
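This difference is easy to observe with a sketch (using `recv_timeout` just to avoid blocking forever): dropping a `Sender` runs its destructor and disconnects the channel, while `mem::forget`-ing it skips the destructor, so the receiver never sees a disconnect and just keeps waiting:

```rust
use std::mem;
use std::sync::mpsc::{channel, RecvTimeoutError};
use std::time::Duration;

fn main() {
    // Dropping the Sender runs its destructor: recv fails immediately
    // with a disconnect error.
    let (s, r) = channel::<u8>();
    drop(s);
    assert!(r.recv().is_err());

    // Forgetting the Sender skips the destructor: the channel never
    // disconnects, so the receiver blocks (here: times out) instead.
    let (s, r) = channel::<u8>();
    mem::forget(s);
    assert_eq!(
        r.recv_timeout(Duration::from_millis(10)),
        Err(RecvTimeoutError::Timeout)
    );
}
```

A thread that stops executing without running destructors behaves like the `mem::forget` case: the receiving end blocks rather than erroring.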


🤦 Of course, you're correct.
