Let's say you are writing an AVX intrinsics wrapper, and you eventually ask yourself a question: does SIGILL cause undefined behavior? After some thought, I think it is reasonable to claim that it is UB, even if it crashes your program right away.
Basically, if SIGILL is UB, then it is reasonable for a compiler to assume SIGILL never happens, and hence that AVX intrinsics are infallible. If AVX intrinsics were fallible, the compiler would not be free to reorder operations across them, because it would have to preserve a state consistent with the code before the point of the crash.
Also if you know a citation for that somewhere, please let me know.
My current uneducated view says no. I would describe it as "out of scope".
The compiler is given a target for which the machine code it produces never SIGILLs. It is free to determine that such instructions are infallible, and hence to reorder (by default, unless instructed otherwise).
If you're inside a #[cfg] scoped to platforms that guarantee SIGILL delivery, then it's platform-defined behavior and you can implement coherent signal handlers.
However, just use the standard library's CPU feature detection functions.
The CPU the program is currently running on supports the function being called. For example it is unsafe to call an AVX2 function on a CPU that doesn’t actually support AVX2.
Calling unsupported instructions is the entire reason that most of the vendor intrinsics are marked unsafe. So yes, it was judged to be UB.
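For reference, the runtime-detection pattern the standard library supports looks roughly like this. This is a minimal sketch of my own; the function names and the scalar stand-in body are illustrative, not from the thread:

```rust
fn sum(v: &[i32]) -> i32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Sound: we have just verified at runtime that this CPU has AVX2,
            // which discharges the safety obligation on the call below.
            return unsafe { sum_avx2(v) };
        }
    }
    sum_scalar(v)
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(v: &[i32]) -> i32 {
    // A real version would use core::arch::x86_64 intrinsics; a scalar
    // body keeps the sketch short while preserving the calling contract.
    sum_scalar(v)
}

fn sum_scalar(v: &[i32]) -> i32 {
    v.iter().sum()
}
```

The `unsafe` here is exactly the obligation from the docs quoted above: the caller, not the compiler, proves the CPU supports the feature.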
I changed my mind, partially. Go with the doc's "you need to ensure", as BurntSushi points out.
But for the case where you compile with -C target-cpu= and then run on an unsupported CPU, I still think you're out of scope rather than UB.
Calling the vendor intrinsics is in fact definitely objectively UB when the target CPU does not support the intrinsic.
Running an executable on a CPU that is not sufficient isn't Rust-level UB in the normal way, because the Rust has already been compiled away. "Out of scope" may be a better way of putting it, but "undefined" is also valid, since Rust does not define the result of running a compiled executable on an insufficient CPU.
Where it gets interesting is asm!. asm! is a bit unusual in the spec/UB model; its semantics are quite literally "run these instructions on the target machine." So there, I argue, the result of executing an undefined instruction may in fact be defined.
...but note that it may still be undefined at the chip level, unless your chip vendor has said otherwise.
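As an illustration of that "run these instructions literally" reading, here is a toy example of my own (x86_64 only):

```rust
use std::arch::asm;

// asm! emits exactly the instructions you write; the benign `add` below is
// well-defined, but you could just as well write `ud2` and reason about the
// resulting trap at the platform level rather than the Rust level.
#[cfg(target_arch = "x86_64")]
fn add_via_asm(a: u64, b: u64) -> u64 {
    let out: u64;
    unsafe {
        asm!("add {0}, {1}", inout(reg) a => out, in(reg) b);
    }
    out
}
```

Whether the trap from a `ud2` is "defined" is then a question for the ISA manual and the OS signal machinery, not the Rust abstract machine.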
Also note that attempting to execute instructions on CPUs that don't support them doesn't necessarily cause SIGILL. The bytes can be decoded as a different instruction. For example, lzcnt on CPUs without ABM executes as bsr: lzcnt is encoded as F3 0F BD /r and bsr as 0F BD /r, so older CPUs treat the F3 as an unused rep prefix and decode lzcnt as rep bsr.
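To see why this silent reinterpretation is nastier than a trap, compare what the two instructions compute for the same input (sketched here with portable bit operations rather than the real instructions):

```rust
// lzcnt counts leading zero bits; bsr returns the bit index of the highest
// set bit. For nonzero 32-bit x they satisfy lzcnt(x) == 31 - bsr(x), so a
// CPU that decodes lzcnt as bsr returns a wrong value instead of trapping.
fn lzcnt32(x: u32) -> u32 {
    x.leading_zeros()
}

fn bsr32(x: u32) -> u32 {
    debug_assert!(x != 0); // real bsr leaves the destination undefined for 0
    31 - x.leading_zeros()
}
```

So `lzcnt32(1)` and `bsr32(1)` disagree maximally: the program keeps running with silently wrong results, which is arguably worse than a clean SIGILL.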
You think lzcnt is bad? Some microcontrollers have formally undefined instructions that may behave differently depending on battery level (basically: if the voltage is high enough, the register move happens fast enough for the ALU to pick it up; if the battery is low, the ALU picks up random garbage).
How can you handle these except by saying it's UB to use them?