I ran into this question on SO and got curious, so I tried to run the following:
use libc::kill;
use std::process;

fn main() {
    unsafe {
        kill(process::id() as i32, libc::SIGSEGV);
    }
}
Unexpectedly, this program runs without error - playground.
However, the same code with other signals (SIGUSR2, for example) exits as expected, and the C code with SIGSEGV also crashes as expected:
#include <signal.h>
#include <unistd.h>

int main(void)
{
    kill(getpid(), 11); /* 11 == SIGSEGV */
    return 0;
}
Is Rust doing some special handling of segmentation faults?
Yes, it would appear the signal is handled by Rust; I can reproduce your example:
fn main() {
    use std::io::{stdin, stdout, Write};
    let mut s = String::new();
    let _ = stdout().flush();
    stdin().read_line(&mut s).expect("Did not enter a correct string");
}
Then kill -SEGV <pid> at first does not affect this program, and a second invocation does result in a SIGSEGV.
I couldn't find an explicit reason for this, although this seems related to the handling of stack overflow. Here are some links discussing the existence of the handler:
P.S. A side effect of this is that the first SIGSEGV gets ignored in your case, which I think is actually bad. I'm not sure why they don't abort in both cases.
The intent is that returning from the SIGSEGV handler (without having "fixed" anything) will cause the same fault again. Then the default kernel handler will take control, setting the right signal exit status, saving a core dump, etc. We would miss that stuff if we just aborted.
But with a manually killed signal, there's nothing to cause it a second time.
If you return from an actual SIGSEGV fault, you'll resume at the same instruction pointer that caused the fault. If the handler didn't change anything to improve the situation, it will fault again. Since Rust's handler resets itself to the default, the re-fault will have the kernel terminate the process.
Try with something like dereferencing a bad pointer, and it will die on that SIGSEGV.