The if false case is mildly surprising, but in general, not emitting dead code warnings based on the values of constants is a deliberate choice. The point of introducing constants is that you encode some logic in a way that is easy to change later, if you decide you want different values for your constants. For example, you can have a constant whose value depends on the active #[cfg] flags. #[cfg] expansion happens before any kind of type checking or semantic analysis, so the compiler has no way to know whether FOO is really always 0, or only for this specific compilation target. In the latter case, emitting dead code warnings would definitely be counterproductive, because the code isn't really dead, it's just unused for this specific build target.
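A minimal sketch of the situation (the constant name and the cfg condition are made up for illustration): the branch looks dead on one target but is live on another, and the compiler only ever sees one expansion at a time.

```rust
// Hypothetical example: a constant whose value depends on the build target.
// After #[cfg] expansion the compiler sees only one of these definitions,
// so it cannot tell "always 0" apart from "0 on this target".

#[cfg(target_pointer_width = "64")]
const FOO: usize = 0;

#[cfg(not(target_pointer_width = "64"))]
const FOO: usize = 1;

fn main() {
    if FOO != 0 {
        // "Dead" on 64-bit targets, live on 32-bit ones.
        println!("running on a narrow target");
    }
}
```

Warning on the FOO != 0 branch here would be noise: the code is reachable, just not in this particular build.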
The if false case could probably be linted against, but it's also something that rarely, if ever, happens in real-world code. This isn't C, where constants are defined by preprocessor macro expansion. A bare false is very unlikely to be written in a conditional. In real-world code you would expect a constant, or even a constant expression, and then we're back to the previous point --- linting based on the specific values of constants is generally undesirable.
Going even further, why focus on constants? We could lint based on the runtime values of variables. For example, a lint could fire if, in fn foo(bar: bool), the value actually passed for bar is always false. Some languages implement such lints (e.g. Kotlin), but personally I find them more annoying than useful. Yes, they can sometimes uncover useless API overcomplications, but more often than not the choice of having a parameter is dictated by API and implementation evolution reasons. The fact that the actual value of a parameter is always false at this moment in time doesn't mean that no one will ever use a different value, and a function should implement some reasonable, self-contained, stable piece of functionality, rather than change constantly in response to end user requirements.
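For concreteness, here is a sketch of the pattern such a value-flow lint would flag (the render function and its verbose parameter are invented for illustration): every current call site passes false, but that is an accident of the moment, not a property of the API.

```rust
// Hypothetical example: a parameter that happens to be false at every
// call site today. A Kotlin-style value-flow lint would flag `verbose`,
// even though a future caller may well pass `true`.
fn render(text: &str, verbose: bool) -> String {
    if verbose {
        format!("[render] {text}")
    } else {
        text.to_string()
    }
}

fn main() {
    // Every caller currently passes `false`...
    let a = render("hello", false);
    let b = render("world", false);
    // ...but removing the parameter would break the API the moment
    // someone needs the verbose path.
    println!("{a} {b}");
}
```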
Another issue is that a lint based on the values of constants would be quite flaky. Improvements to constant evaluation and analysis may uncover more cases where some condition is always true or false, meaning that a minor toolchain upgrade may cause many more (or occasionally fewer) warnings to be emitted. This is also mildly annoying, and can cause issues in CI if someone decides to #[deny] this warning.
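To illustrate the flakiness (the constant and the check are invented for illustration): whether a condition counts as "always false" depends on how much constant folding the compiler performs, which is exactly the kind of thing that changes between toolchain versions.

```rust
// Hypothetical example: an older toolchain might not fold this method
// call at compile time, while a newer one might, suddenly deciding the
// branch is dead and emitting a new warning on a minor upgrade.
const BUF_SIZE: usize = 4096;

fn main() {
    if !BUF_SIZE.is_power_of_two() {
        panic!("BUF_SIZE must be a power of two");
    }
    println!("buffer size ok");
}
```

A codebase that denies the warning in CI would start failing its build through no change of its own.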
The "proper" version means putting all the code into an SMT solver and letting it chug for a while to see what it can learn. That's never going to happen as part of a normal rustc compilation, because it's NP-hard, so would absolutely ruin compilation time and has no good way to be consistent across machines.
But the formal methods folks are working on separate tools to run against your Rust code -- and on ways to annotate more expectations in your code -- that will be able to find things more generally, along the lines of "this will always panic" or "good, I proved that this addition will never overflow".