To add a little bit of context, C has separate operators for logical negation (`!`) and bitwise negation (`~`) in significant part because before C99, C didn't have a boolean type; instead, `if` works by comparing the controlling expression against literal `0`. So C needs a separate logical negation that maps any nonzero value to zero (and zero to one). This is also why you'll sometimes see pre-C99 projects do `#define TRUE (!FALSE)` instead of using `1`.
Rust uses strongly typed semantic dispatch with a logically single-bit `bool`, making "logical" negation the same thing as bitwise negation.
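As a quick illustration (plain std Rust, nothing assumed beyond the standard library): `!` is one operator that means logical negation on `bool` and bitwise negation on integers, and for a single-bit value the two notions coincide:

```rust
fn main() {
    // On bool, `!` is logical negation...
    assert_eq!(!true, false);
    // ...and on integers, the very same operator is bitwise negation.
    assert_eq!(!0b0000_1111u8, 0b1111_0000u8);
    // For a one-bit domain the two behaviors are the same operation,
    // which is why Rust gets away with one operator where C needs two.
}
```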
For primitive types, the two are fundamentally equivalent. And LLVM is very good at doing this kind of logical-equivalence optimization, so long as your type isn't doing something silly like violating De Morgan's laws. (Like, say, `f32` does.)
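For concreteness, here is De Morgan's law holding bit-for-bit on a primitive integer type (a minimal sketch; the values are arbitrary):

```rust
fn main() {
    let a: u8 = 0b1100_1010;
    let b: u8 = 0b1010_0110;
    // De Morgan: !(a & b) == !a | !b, and dually !(a | b) == !a & !b.
    assert_eq!(!(a & b), !a | !b);
    assert_eq!(!(a | b), !a & !b);
}
```

This is exactly the kind of equivalence the optimizer can rely on for integers but not for floats.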
Comparison is implemented in terms of three-way comparison: `!(a > b)` is `!(matches!(cmp(&a, &b), Greater))` and `a <= b` is `matches!(cmp(&a, &b), Less | Equal)`, forms that LLVM's optimizer can trivially see are semantically identical.
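Spelled out as runnable code (a sketch; the desugaring shown is the conceptual one, not the literal compiler output):

```rust
use std::cmp::Ordering::{Equal, Greater, Less};

fn main() {
    let (a, b) = (3, 5);
    // `!(a > b)` conceptually checks that the three-way comparison
    // did NOT come out Greater...
    assert_eq!(!(a > b), !matches!(Ord::cmp(&a, &b), Greater));
    // ...while `a <= b` checks for Less | Equal on the same comparison.
    assert_eq!(a <= b, matches!(Ord::cmp(&a, &b), Less | Equal));
    // For a total order the two are the same predicate.
    assert_eq!(!(a > b), a <= b);
}
```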
I agree such a lint would be reasonable, especially since `(!a) > b` doesn't lint against unused parens. I was going to note that this is actually a different operation for `PartialOrd` types like `f32`, but that's a different lint from the one for `Ord` types.
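To make the `PartialOrd` caveat concrete: with a NaN operand, `!(a > b)` and `a <= b` give different answers, so rewriting one into the other is only valid for totally ordered (`Ord`) types:

```rust
fn main() {
    let a = f32::NAN;
    let b = 1.0_f32;
    // Every ordered comparison against NaN is false...
    assert_eq!(a > b, false);
    assert_eq!(a <= b, false);
    // ...so negating one does NOT give the other.
    assert_ne!(!(a > b), a <= b);
}
```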
(Since I'm making a post anyway) Generally I've found the most readable option is to almost always use `<` or `<=`, writing the compared values in expected-increasing order. If the evaluation order of the temporaries matters, hoist them to named variables. I only use `>` or `>=` for comparing a variable value to a constant (or at least invariable) value, but even then I'm replacing `if idx >= len` with `if len <= idx` more and more lately. It's just `if 0 < var` that still feels like a weird "Yoda conditional" to me (unless it also has `&& var < cap`).
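As a sketch of that style (the names `var` and `cap` are just placeholders), writing both bounds in increasing order makes the check read like the interval notation 0 < var < cap:

```rust
fn main() {
    let (var, cap) = (3_usize, 10_usize);
    // Both comparisons point the same way, so the condition reads
    // left-to-right as "var lies strictly between 0 and cap".
    if 0 < var && var < cap {
        println!("in range");
    }
}
```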