Traits in `std::cmp` and mathematical terminology

I think we should explain traits like PartialEq and Ord by analogy (e.g. "PartialEq is kinda like partial equivalence") to help people with a mathematical background grasp the general idea, but we need to make sure people understand that these are analogies (albeit quite accurate ones) and not literal definitions like you have in mathematics.

Besides the obvious syntax, traits don't really have a rigorous definition like mathematical terms and are instead intuitive things humans use to communicate complex/ambiguous ideas.

For example, Sync isn't a precise concept; instead it is an emergent property that arises from the way the ecosystem as a whole uses the trait. We all kinda know that it represents something which allows concurrent access to an object, but if you looked at 10 different APIs with a T: Sync bound you would see 10 slightly different interpretations.

2 Likes

I think it depends on whether you have a mathematician reading the docs. I'm not really a mathematician, but when I read PartialOrd, I automatically assume "oh, that's a partial order" (as I have worked with partial orders in the past). I would think so even if the docs didn't use the term, simply because the trait is named "PartialOrd". It also helps me to understand that I can use PartialOrd to implement a partial order, but it confuses me when >= and <= then suddenly don't act accordingly. (And I was very confused by the discussion of this topic :hot_face:.)

I would say the mathematical terms should be mentioned in the docs, but only when and if they are correct. And sometimes that's difficult to judge, because in the end == is neither equality nor equivalence, but simply an operator in a programming language (Rust).

I would say it's okay to call PartialOrd a partial order, but the docs deserve a warning that only < and > act accordingly (and <= and >= do not resemble the corresponding non-strict partial order, unless Eq is implemented as well).

Not sure what to do with Ord though. Currently the docs say it's a "total order".

Silly question, but would strict weak ordering vs total preorder make much of a difference to the way people write code in practice?

Not sure. The semantics of Eq might have an impact on when and how it's used.

From the previous thread:

I would say f32 and f64 have a (strict) partial order. Numbers compare as expected, −∞ < 0 < ∞, NaN ≮ NaN. But the <= and >= operators can't behave like a non-strict partial order because NaN != NaN.
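
For illustration, this is how that looks for f64 on stable Rust (just a small demo of my own, nothing normative):

fn main() {
    let nan = f64::NAN;

    // `<` behaves like a strict partial order:
    assert!(f64::NEG_INFINITY < 0.0 && 0.0 < f64::INFINITY);
    assert!(!(nan < nan) && !(nan > nan)); // NaN is incomparable to everything

    // ...but `<=` is not reflexive, so it is not a non-strict partial order:
    assert!(nan != nan);
    assert!(!(nan <= nan));
}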

I would say the IEEE floats in mathematical terms have

  • a strict partial order

in combination with

  • an (independent) partial equivalence relation.

And yeah, it is kinda funny that PartialOrd was likely introduced because of IEEE 754. Or were there other reasons?

I'm not sure what SQL's comparison regarding NULL is in mathematical terms. I guess it'd require some exotic three-valued logic.

Edit: Sorry to have pulled this over to the new thread. I wasn't sure what to do. @steffahn gave a nice(er) answer in the other thread.

As for what should practically change in the docs, IMO:

  • Ord documentation should mention "total preorder". IMO, it's okay to keep listing "total order", too, since that may help the intuition of people less familiar with more uncommon types of relations. In any case, the most important part is that the axioms - as stated in the docs - are correct. Which they are, except for...
  • the axioms for PartialOrd need to be fixed: i.e. the fix is either to add an axiom like "a == b and b < c and c == d implies a < d" (Edit: rather, a set of axioms like "a == b and b < c implies a < c" and "a < b and b == c implies a < c"), or to require that <= is transitive. Which is funny, because once you require that <= is transitive, a bunch of other stuff (i.e. everything that doesn't just specify how the different methods are supposed to be defined in terms of each other) technically becomes entirely redundant. In any case, I haven't thought at all yet about what the best presentation might be. I do like the appeal of keeping things really simple along the lines of "<= ought to be transitive, and the rest has to be as-if defined in terms of <="; but these properties also have to stay straightforward to check if (as is commonly the case) you do not define all your stuff in terms of <=. (See the sketch right below for the proposed axioms written out as property checks.)
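
To make those proposed axioms concrete, here is a minimal sketch (the helper names are mine, not anything from an RFC or the std docs) of the congruence axioms and the <=-transitivity requirement written as checkable properties over any PartialOrd type:

// Illustrative property checks only; not part of std.
fn congruence_holds<T: PartialOrd>(a: &T, b: &T, c: &T) -> bool {
    // "a == b and b < c implies a < c", and
    // "a < b and b == c implies a < c".
    (!(a == b && b < c) || a < c) && (!(a < b && b == c) || a < c)
}

fn le_transitive<T: PartialOrd>(a: &T, b: &T, c: &T) -> bool {
    // "a <= b and b <= c implies a <= c".
    !(a <= b && b <= c) || a <= c
}

For the O enum defined further down in this thread, congruence_holds(&A, &B, &C) and le_transitive(&A, &B, &C) both come out false, even though all the currently documented axioms are satisfied.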

But the missing axiom isn't required to make PartialOrd (in regard to <) a partial order, right?

What exactly would be fixed? It still doesn't make <= act according to a non-strict partial order.

Is the opposite also true? That the "bunch of other stuff" assures that <= is transitive?

So if I understand it right (really not sure if I do), we could either demand that <= be transitive, or keep the (then redundant) other demands?

P.S.: I can also check myself later. This was just a quick response. I should rather be going to sleep :wink:

… 5 hours later

Yeah, but the docs already contain extra axioms besides the fact that the < relation of PartialOrd is (technically) a strict partial order. Btw, you cannot make "PartialOrd is a strict partial order" out of this, since e.g. == is a part of PartialOrd, too (via the PartialEq supertrait), and == cannot be defined in terms of < (but everything can be defined in terms of < and == together).
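
To illustrate that last remark, here's a minimal sketch (a helper of my own, not anything from std) of reconstructing partial_cmp from < and == alone, assuming the documented requirement that a == b holds exactly when partial_cmp(a, b) == Some(Equal):

use std::cmp::Ordering;

// Illustrative only: the whole comparison can be recovered from `<` and `==`.
fn partial_cmp_from_lt_and_eq<T: PartialOrd>(a: &T, b: &T) -> Option<Ordering> {
    if a == b {
        Some(Ordering::Equal)
    } else if a < b {
        Some(Ordering::Less)
    } else if b < a {
        Some(Ordering::Greater)
    } else {
        None // incomparable
    }
}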

It would fix things in the sense that it makes sure that <= is transitive. You cannot seriously tell me the authors of PartialOrd did not want <= to be transitive. But to be honest, I'm also not familiar with any algorithm / API that requires just PartialOrd, so I'm not actually entirely sure what these axioms are used for anyway.

Yes, but only as long as you add the congruence axiom I've mentioned, i.e. "a == b and b < c and c == d implies a < d"; the transitivity of <= is then a consequence of this axiom together with the stuff listed in the documentation of PartialOrd. Seems like you didn't read your "homework":


BTW, note that those PartialOrd docs do also list a few truly redundant things like the transitivity of > or ==. (The transitivity of > is even also listed as a corollary, the transitivity of == is part of the conditions of PartialEq.)

:see_no_evil:

To give a concrete example of how <='s transitivity is missing from the currently documented axioms:

#[derive(Copy, Clone)]
pub enum O { A, B, C }
use O::*;

use std::cmp::Ordering::{self, *};

impl PartialOrd for O {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        match (*self, *other) {
            // all values equal themselves, A == B
            (A, A) | (B, B) | (C, C) | (A, B) | (B, A) => Some(Equal),
            // B < C
            (B, C) => Some(Less),
            (C, B) => Some(Greater),
            // A and C not comparable
            (A, C) | (C, A) => None,
        }
    }
}

impl PartialEq for O {
    fn eq(&self, other: &Self) -> bool {
        self.partial_cmp(other) == Some(Equal)
    }
}

impl Eq for O {}

fn main() {
    // violating congruence
    assert_eq!(A == B, true);
    assert_eq!(B < C, true);
    assert_eq!(A < C, false);
    
    // or
    
    // violating `<=`'s transitivity
    assert_eq!(A <= B, true);
    assert_eq!(B <= C, true);
    assert_eq!(A <= C, false);
}

(playground)

this enum fulfills¹ all the axioms listed in the documentation for PartialEq, PartialOrd and even Eq, yet behaves quite surprisingly IMO.

@jbe, looking at this example, turns out that without fixing the documentation of PartialOrd, we don't even get that PartialOrd + Eq is a preorder (or "intuitively a partial order, if we take Eq as equality").


¹ Going through the axioms: The definition obviously defines an equivalence relation that identifies A and B. (== is reflexive, transitive, symmetric.) The axioms listed as "ensured by the default implementation" on PartialOrd's docs are also fulfilled, and "a == b if and only if `partial_cmp(a, b) == Some(Equal)`" is true by definition.

This leaves:

The comparison must satisfy, for all a , b and c :

  • transitivity: a < b and b < c implies a < c. The same must hold for both == and >.
  • duality: a < b if and only if b > a.

This transitivity is true. The only instance of ... < ... is B < C, so there's no way to fulfill the precondition a < b and b < c for some a, b, c in the first place. And duality is also true, since the only instance of ... > ... is C > B, which corresponds to the only instance of ... < ... being B < C.

The map is not the territory. Here's an implementation consistent with the requirements as I read them.

struct Snowflake(i32);

impl PartialEq<Self> for Snowflake {
    // no Snowflake is equal to anything, not even to itself
    fn eq(&self, _: &Self) -> bool {
        false
    }
}

impl PartialOrd<Self> for Snowflake {
    // equal inner values are treated as incomparable; otherwise
    // defer to the ordering of the inner i32
    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
        match self.0.cmp(&other.0) {
            std::cmp::Ordering::Equal => None,
            ne => Some(ne),
        }
    }
}

For Snowflake, <= is a strict partial order (it's the same as <), not a non-strict one.
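
A quick demonstration of that, assuming the Snowflake definition above:

fn main() {
    let (a, b) = (Snowflake(1), Snowflake(2));

    // nothing is equal to anything, not even itself:
    assert!(a != a);

    // `<` and `<=` coincide, so `<=` is irreflexive as well:
    assert!(a < b && a <= b);
    assert!(!(a <= a));
}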


Yeah, that's why. I think I saw the specific bug in passing when looking up the stuff below, but didn't note it down. There was no split until 2013 or 2014 or so.

I thought maybe the fact that PartialOrd<_> (and PartialEq<_>) can also compare different types might have something to do with it. But no. Turns out that there used to be a different trait (Equiv) for cross-type equality, but they got merged together. Before that, none of the comparison traits were cross-type.

Interestingly, the motivator for Equiv didn't need it anymore (replaced by Borrow I guess), but the generic partial comparison traits got some takers in the standard lib. Also interesting is that Eq and Ord were also briefly made generic, but that got dropped in the stabilization PR. The PR says that the traits match the reform RFC but that's not true (since the type parameter was dropped from Eq and Ord), if anyone wants to take it up :wink:.

A related todo is still open and just got pinged by Mara, which is amusing timing.

1 Like

What is an example of a practical problem that would be caused if Eq was automatically implemented whenever PartialEq<Self> was implemented? I don't really see any practical reason to separate these.

Reading up on the requirements: PartialEq<Self> already requires symmetry and transitivity. Eq adds a requirement for reflexivity. But reflexivity of a relation R can be written as:

a = b => R(a, b)

Which, for equality, is a tautology:
a == b => a == b

As a practical example, I see that HashMap::get requires Eq for keys. But would any problem be caused if you used f64 for hash keys? I don't think so. Every NaN would simply be treated as a separate, unequal key, so every time you insert NaN you get another entry, and every time you look up NaN you don't find anything. So what? Seems fine to me, that's consistent with how equality works for NaN.

There is a problem with implementing Hash for NaN in a good way though, because really it should be different every time. But that's an issue separate from Eq.
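
To spell the argument out (purely hypothetical: this wrapper knowingly violates the reflexivity that Eq is documented to require, and the std docs don't promise anything about how HashMap behaves with such keys), a NaN-tolerant key type could look like this:

use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hypothetical key type, only for illustration: `==` forwards to the
// float comparison (so NaN != NaN), knowingly breaking Eq's reflexivity.
#[derive(Clone, Copy, PartialEq)]
struct FloatKey(f64);

impl Eq for FloatKey {}

impl Hash for FloatKey {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.to_bits().hash(state);
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert(FloatKey(f64::NAN), "a");
    map.insert(FloatKey(f64::NAN), "b");

    // With the current std implementation, every NaN insert adds a fresh
    // entry and NaN lookups find nothing; that's observed behaviour,
    // not a documented guarantee.
    println!("{}", map.len());                      // 2
    println!("{:?}", map.get(&FloatKey(f64::NAN))); // None
}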

Thank you very much, it makes things much easier to grasp.

I totally agree that <= not ending up transitive is odd. And I'm not sure whether allowing this was intended or not.

However, I would like to give a counter-argument why it could (maybe) (perhaps) make sense in some cases to demand that < and > describe a partial order while <= and >= aren't transitive.

(I hope I don't make any logic error here.)

Whenever we implement PartialEq without Eq, this means our == operator doesn't mean "equality". It means == is some other weird "partial equivalence relation". I also showed that PartialOrd gives us a strict partial order regarding < and >.

Now let's consider a new floating point type where:

  • There are positive and negative finite numbers following comparison rules (regarding ==, < and <=) as usual (e.g. −2.3 < −1.0 <= −1.0 < 6.2).
  • There is also −∞ and +∞. Contrary to IEEE 754, we want −∞ and +∞ to be equivalent in regard to == (as it can make sense to treat these equally / as equivalent, as for example done on the Riemann Sphere in mathematics or in the context of negative (Kelvin) temperatures).
  • Yet we might want to have a partial order of some kind (using < and >).

I could imagine a complex number type where we want to include −∞ and +∞ as a sort-of limit for some calculations such as 1.0 / 0.0, but treat −∞ == +∞, yet −∞ < 0 < +∞. This doesn't sound more "odd" than what IEEE 754 does.

We might still define a partial order on such a type, e.g. where < may only return true if the imaginary part is zero.

(Side note: I really dislike that IEEE 754 distinguishes between −∞ and +∞, and even between −0 and +0, despite −0 == +0.)

Let's say

  • A represents +∞
  • B represents −∞
  • C represents zero.

Then we get:

  • +∞ == −∞ is true (like on the Riemann Sphere)
  • −∞ < 0 is true
  • +∞ < 0 is false
  • +∞ <= −∞ is true (because +∞ is either smaller than or equivalent to −∞)
  • −∞ <= 0 is true (because −∞ is either smaller than or equivalent to 0)
  • +∞ <= 0 is false (because +∞ is neither smaller than nor equivalent to 0)

This doesn't seem more crazy (to me) than what IEEE 754 does. What do you think?

I would think of x <= y simply as a shorthand for x < y || x == y, nothing more and nothing less. I don't think we should impose any semantics on <= but keep it strictly formal. Following that, PartialOrd describes a partial order (regarding < and >) and PartialEq describes a partial equivalence relation (regarding == and !=).
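
If the documented correspondence between == and partial_cmp holds, that reading is exactly what the default implementations give you. A tiny check of my own, assuming the O enum (and its imports) from the example earlier in the thread:

fn main() {
    for a in [A, B, C] {
        for b in [A, B, C] {
            // default `<=` (partial_cmp in {Less, Equal}) coincides
            // with the shorthand reading "`<` or `==`"
            assert_eq!(a <= b, a < b || a == b);
        }
    }
}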

P.S.: Of course, PartialOrd and PartialEq are somewhat intertwined, because PartialEq is a supertrait of PartialOrd, but I consider this merely an implementation detail (it allows providing optimized implementations of PartialOrd::le, for example).

From a developer perspective: I have arrays of floats, and I need them sorted. I don't care what the proper math says, I have floats, not real numbers. Rust's sound and tight definitions are not helping me when I need to sort floats, and I can't.

The current state simply prevents me from getting my job done. I need to put in extra work to do exactly the "bad" thing that Rust is trying to stop me from doing. I use sort_by(partial_cmp().unwrap_or(Equal)) or wrap floats in a newtype that intentionally incorrectly implements Ord, just so that I can get floats sorted.
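
Spelled out, those two workarounds look roughly like this (SortableF64 is a made-up name, and the comparison it implements is knowingly not a real total order):

use std::cmp::Ordering;

// Newtype that intentionally pretends f64 is totally ordered
// (NaN is simply declared "equal" to everything), so it can be
// handed to APIs that demand Ord.
#[derive(PartialEq, PartialOrd)]
struct SortableF64(f64);

impl Eq for SortableF64 {}

impl Ord for SortableF64 {
    fn cmp(&self, other: &Self) -> Ordering {
        self.partial_cmp(other).unwrap_or(Ordering::Equal)
    }
}

fn main() {
    let mut v = vec![3.0_f64, f64::NAN, 1.0, 2.0];

    // Workaround 1: comparator closure. The comparator is not a real
    // total order, so the position of NaN in the result is unspecified.
    v.sort_by(|a, b| a.partial_cmp(b).unwrap_or(Ordering::Equal));
    println!("{:?}", v);

    // Workaround 2: wrap, then sort with the (knowingly incorrect) Ord.
    let mut w: Vec<SortableF64> = v.into_iter().map(SortableF64).collect();
    w.sort();
}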

Could we have sort() functions that use a trait like Comparable and work on anything that supports <, while accepting and permitting all of the consequences of this comparison being inconsistent?

2 Likes

I have had the same problem. I always end up with something like:

/// Compare two [`f64`] numbers while treating [NaN](f64::NAN) as greatest.
pub fn cmp_f64_nan_greatest(a: f64, b: f64) -> std::cmp::Ordering {
    a.partial_cmp(&b).unwrap_or_else(|| {
        a.is_nan().cmp(&b.is_nan()).then(std::cmp::Ordering::Equal)
    })
}

And then using that in a closure passed to sort_by.
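
For example, assuming the helper above:

fn main() {
    let mut v = vec![2.0_f64, f64::NAN, -1.0, 0.5];
    v.sort_by(|a, b| cmp_f64_nan_greatest(*a, *b));
    println!("{:?}", v); // [-1.0, 0.5, 2.0, NaN]
}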

It's annoying. I agree.

That I don't do. I'm always worried NaNs could actually appear.

How about partial_sort, which is happy with PartialOrd? (Maybe I shouldn't have made a new thread, will be more careful next time, as these topics are kinda related.)

P.S.: I just gave an outline of an example in the other thread, as this is related to the original question: "Why sort() need T to be Ord ?"

1 Like

I agree. I have dealt with floating point professionally a lot, and have never seen any good come out of the weird NaN comparison rules; they're always seen as an unpleasant nuisance.

There are proposals that fix many issues with IEEE-754, such as posits. They have no -0.0, no NaNs at all, better handling of denormals / precision, and only one ∞ (I'm not sure about that last one, I kind of like -∞, +∞ as initializers for minimum / maximum).

1 Like

I don't agree here, because in C I sometimes rely on NaN not being greater or smaller than any other number, for example.

Like when I write (in C):

if (!(a > 0.0 && a < 1.0)) error();

I think similar would apply in Rust. But you need to be careful to not "simplify" that to:

if (a <= 0.0 || a >= 1.0) error();

Both of these would call error if NaN was smaller than all other numbers. Wouldn't that be better than the current behavior, where the first one calls error but the second one doesn't?

If NaN is always smaller than any number, then this doesn't work:

if (a < 0.0) { /* negative */ }
else if (a >= 0.0) { /* non-negative */ }
else { /* something bad happened! */ }

If f64::NAN < 0.0, then a < 0.0 would be true (which is somewhat odd).

I have worked with systems where NaN is "sorted" in some way (I don't remember where, maybe it was SQL?), and it didn't feel good.

Regarding workarounds for floats, there's an obligatory mention of

missing in this thread.

The crate supports two legitimate approaches with wrapper-types: Either define an order for NaN (in this case arbitrarily, as the greatest element), or disallow NaN entirely (which is checked when constructing the wrapper type).
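
For reference, a minimal sketch of the second approach; the type name and API here are made up for illustration, not the crate's actual ones:

use std::cmp::Ordering;

// Hypothetical NaN-free float wrapper: construction rejects NaN,
// which makes exposing a total order sound.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
struct NotNanF64(f64);

impl NotNanF64 {
    fn new(x: f64) -> Option<Self> {
        if x.is_nan() { None } else { Some(Self(x)) }
    }
}

impl Eq for NotNanF64 {}

impl Ord for NotNanF64 {
    fn cmp(&self, other: &Self) -> Ordering {
        // NaN was excluded at construction, so comparison never fails.
        self.partial_cmp(other).expect("NaN excluded by construction")
    }
}

fn main() {
    let mut v: Vec<NotNanF64> = [3.0_f64, 1.0, 2.0]
        .iter()
        .copied()
        .filter_map(NotNanF64::new)
        .collect();
    v.sort(); // fine now: NotNanF64 is Ord
}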

2 Likes