Pre-RFC: another Option combinator

This discussion continues on the Rust Internals Forum

Option combinator methods make working with Options much smoother, but I found a case which isn't covered by them.

It arises when I have two Option<T> values and I want an Option<T> that is Some if either of the inputs is Some, combining the values if both are Some. So, it would be similar to Option::or(), except that when both inputs are Some, it would call a closure to determine the output.

It's easier to describe what I mean with a snippet of code:

fn merge_with<T, F>(a: Option<T>, b: Option<T>, f: F) -> Option<T> 
where F: FnOnce(T, T) -> T
{
    match (a, b) {
        (None,     None)     => None,
        (Some(v),  None)     => Some(v),
        (None,     Some(v))  => Some(v),
        (Some(v1), Some(v2)) => Some(f(v1, v2))
    }
}

It could be used to perform aggregation operations on Option-wrapped values without unwrapping them first.

I can imagine two kinds of operations being used with this:

  • An operation that selects one of its operands:

    let a = Some(2);
    let b = Some(5);
    
    let greater = a.merge_with(b, i32::max);
    
  • Or, an operation that somehow merges them:

    use core::ops::Add;
    
    let a = Some(2);
    let b = Some(5);
    
    let sum = a.merge_with(b, Add::add);
    

I see several use cases for this, but what do you think? Would it be useful? Would it be worthwhile to add this to std? I see that this is a minor feature, but so are many other combinators, and I think it would improve the ergonomics of working with Option-wrapped values.

Also, I couldn't come up with a better name than merge_with, but it feels very generic, so if you have a more descriptive name for this, don't keep it to yourself.


Just some random musing: assuming (T, 𝟘, f) is a monoid, this is one of a couple of alternative ways to lift that monoid into Option<T>, where None is mapped to Some(𝟘). So a.merge_with(b, Add::add) can be written as a.unwrap_or(0) + b.unwrap_or(0), and similarly with max with its own neutral element (which is i32::MIN). I'm not sure if there are many useful uses for this operation if (T, f) is merely a semigroup. I wonder if there is an existing name for this operation/monoid instance, e.g. in Haskell.

EDIT: Oops, of course a.unwrap_or(0) + b.unwrap_or(0) does not have identical semantics to the proposed merge_with because it returns T (it does not send (None, None) to None but to 0).
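To make that difference concrete, here is a small sketch (reusing the merge_with free function from the original post) showing that the two formulations agree whenever at least one input is Some, and diverge only on (None, None):

```rust
// The proposed combinator, as a free function (from the original post).
fn merge_with<T, F>(a: Option<T>, b: Option<T>, f: F) -> Option<T>
where
    F: FnOnce(T, T) -> T,
{
    match (a, b) {
        (None, None) => None,
        (Some(v), None) | (None, Some(v)) => Some(v),
        (Some(v1), Some(v2)) => Some(f(v1, v2)),
    }
}

fn main() {
    // With both inputs present, the two formulations agree:
    let a = Some(2);
    let b = Some(5);
    assert_eq!(merge_with(a, b, |x, y| x + y), Some(7));
    assert_eq!(a.unwrap_or(0) + b.unwrap_or(0), 7);

    // ...but they diverge on (None, None): merge_with stays in Option,
    // while the unwrap_or version collapses to the neutral element 0.
    let n: Option<i32> = None;
    assert_eq!(merge_with(n, n, |x, y| x + y), None);
    assert_eq!(n.unwrap_or(0) + n.unwrap_or(0), 0);
}
```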


The effect of merge_with() can currently be done using iterator operations:

let greater = [a, b].into_iter().flatten().reduce(i32::max);

Of course, it is not as concise as a dedicated function, but it is at least straightforward and can be done without a complex match.
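As a quick sanity check (nothing new here, just exercising the one-liner above), the iterator version matches the proposed match arm for arm across all four input combinations:

```rust
fn main() {
    // (a, b, expected result of reducing with i32::max)
    let cases = [
        (Some(2), Some(5), Some(5)),
        (Some(2), None, Some(2)),
        (None, Some(5), Some(5)),
        (None, None, None),
    ];
    for (a, b, expected) in cases {
        // flatten() drops the Nones; reduce() combines what remains.
        let got = [a, b].into_iter().flatten().reduce(i32::max);
        assert_eq!(got, expected);
    }
}
```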


Haha, two people replying a minute before me, covering aspects I've covered, too :slight_smile:


Reminds me of Haskell’s Semigroup a => Monoid (Maybe a) instance. For those unfamiliar, translated into Rust, Haskell has something roughly like:

trait Semigroup: Sized {
    /// Implementations should fulfill the rule:
    /// `x.append(y).append(z) == x.append(y.append(z))`
    fn append(self, other: Self) -> Self;
}

trait Monoid: Semigroup {
    /// Implementations should fulfill the rules:
    /// `Self::empty().append(x) == x`
    /// `x == x.append(Self::empty())`
    fn empty() -> Self;

    /// Part of the trait so it can be more efficiently implemented, if possible
    fn concat<I>(iter: I) -> Self
    where
        I: Iterator<Item = Self>
    {
        iter.fold(Self::empty(), Self::append)
    }
}

In Rust, Sum and Product are a bit similar, but they only have a method like concat, from which the other two could in principle be recovered.

The Haskell syntax instance Semigroup a => Monoid (Maybe a) is like impl<T: Semigroup> Monoid for Option<T> in Rust.

impl<T: Semigroup> Monoid for Option<T> {
    fn empty() -> Self {
        None
    }
}

impl<T: Semigroup> Semigroup for Option<T> {
    fn append(self, other: Self) -> Self {
        match (self, other) {
            (None, b) => b,
            (a, None) => a,
            (Some(x), Some(y)) => Some(x.append(y)),
        }
    }
}

The idea behind this implementation is that combining many Option<_> values is supposed to treat Nones as neutral elements that are simply ignored.
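As a usage sketch of these hypothetical traits (the Semigroup impl for String here is my own illustration, using concatenation as the associative operation):

```rust
// Hypothetical trait, translated from Haskell as above.
trait Semigroup: Sized {
    fn append(self, other: Self) -> Self;
}

// String concatenation is associative, so String forms a semigroup.
impl Semigroup for String {
    fn append(self, other: Self) -> Self {
        self + &other
    }
}

// The lifted instance: Nones are neutral elements and simply ignored.
impl<T: Semigroup> Semigroup for Option<T> {
    fn append(self, other: Self) -> Self {
        match (self, other) {
            (None, b) => b,
            (a, None) => a,
            (Some(x), Some(y)) => Some(x.append(y)),
        }
    }
}

fn main() {
    let a = Some(String::from("foo"));
    let b = Some(String::from("bar"));
    assert_eq!(a.append(b), Some(String::from("foobar")));

    // None acts as the neutral element:
    assert_eq!(
        None::<String>.append(Some(String::from("baz"))),
        Some(String::from("baz"))
    );
}
```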

Interestingly, in Rust, Sum and Product also have implementations for Option, but they operate quite differently: they treat None as an error, any occurrence of which makes the overall result None, too. I feel like this might indicate a slightly different default interpretation of the meaning of None in Rust, which is more consistently interpreted as some kind of “error”-like case, and not just neutrally as the absence of stuff. Well, at least, this is the case for .sum() and .product() of Item = Option<T> iterators… on the other hand, if you want absence of stuff for these cases, you wouldn’t need None in the first place; 0 or 1, respectively, would do fine. Whereas in Haskell, the Monoid trait is not implemented for numbers at all (since it’s only a single trait, and both addition and multiplication could work; but there are newtypes for each choice, so well…)
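Concretely, using the existing Sum impl for Option on stable Rust (nothing hypothetical here), a single None poisons the whole sum:

```rust
fn main() {
    // A single None makes the overall sum None...
    let with_gap = [Some(1), None, Some(3)];
    let sum: Option<i32> = with_gap.into_iter().sum();
    assert_eq!(sum, None);

    // ...whereas all-Some inputs sum normally.
    let complete = [Some(1), Some(2), Some(3)];
    let sum: Option<i32> = complete.into_iter().sum();
    assert_eq!(sum, Some(6));
}
```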


The task of folding an iterator while ignoring None instead of treating it as an error does, of course, also come up in Rust, and the way to do it is to filter out the Nones first. So … well… something like [a, b].into_iter().filter_map(|x| x).reduce(f) does work, but it is relatively long (and also requires FnMut).

Edit: Uhh, @kpreid uses flatten() above; somehow didn’t think of that, since my thought process started with .filter(…), then going only to .filter_map(…) to make the .reduce(…) work.


FWIW, my thought process for my post was "Options are Iterators, so I can combine the items and reduce() them”. And I've gotten in the habit of using flatten() for a lot of things.

Oh hey, a.into_iter().chain(b).reduce(i32::max) would also work.


On second thought… well, I just remembered that the Monad instance (monads being kind of a big deal in Haskell) of Maybe a uses the “Nothing is like an error” interpretation again, so it’s not like that interpretation is absent either.

By the way… I forgot :sweat_smile:, I also wanted to mention that Pre-RFC discussions usually make more sense in the internals.rust-lang.org forum; though so far, the answers didn’t go into “do we want this in Rust?” at all, but just pointed out context, alternatives, and prior art anyways.


Ok. I've been reading this forum for quite some time now but this is my first post, so I'm still in the process of figuring things out.

Should I create a thread over there?

Yes, feel free to create a thread. You can also mention that it’s been previously posted here (with a link back) so people who are interested can take a look at the comments here.

You can even modify your post if you like, and pick up the [a, b].into_iter().flatten().reduce(…) approach as an existing alternative worth mentioning.

Assuming you don’t want to do summarizations or quotations yourself on anything about the (kind-of/arguably/maybe) prior art in Haskell that I’ve mentioned and/or the mathematical interpretation w.r.t. monoids, then you can still just mention that comments on those topics were made here in this thread, if you like.


I created another thread for this on the Rust Internals Forum.


One thing that makes this operation a bit weird is that it's hard to extend.

The usual monadic lift is None-preserving. That's far more convenient, because it still makes sense for a three-argument function, where "keep the non-None thing" no longer makes any sense.

As an interesting bit of prior art, IEEE originally defined the minimum of two floating point numbers like the merge_with you're proposing here, returning whichever argument is non-NaN. But in the latest standard, they added the usual lift, where the minimum is NaN if either argument is NaN (only non-NaN if both arguments are), as more consistent with everything else.
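For what it's worth, this is observable on stable Rust today: as of this writing, f32::min and f32::max follow the older minNum/maxNum semantics and prefer the non-NaN operand, much like the proposed merge_with keeps the Some operand, while ordinary arithmetic propagates NaN:

```rust
fn main() {
    // min/max ignore a NaN operand, like merge_with keeps the Some one:
    assert_eq!(1.0f32.min(f32::NAN), 1.0);
    assert_eq!(f32::NAN.max(2.0), 2.0);

    // Ordinary arithmetic, by contrast, propagates NaN:
    assert!((1.0f32 + f32::NAN).is_nan());
}
```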

The generalization of "Keep the non-None" would be "apply f pairwise until reduced to the one-argument case" which is, of course, reduce, as others have remarked :slight_smile:


I can't help but agree with this, and it makes the type of the merging procedure overly restricted. For example, in SML I have used the following idiom (where oa is a value of 'a option, similarly ob of 'b option, and f is 'a -> 'b -> 'c):

Option.join (Option.map (fn a => Option.map (fn b => f a b) ob) oa)

(The join over map here explains why you need and_then + map in the linked thread; it's bind + map.)

The difference with the proposal is that if either input is NONE, so is the result (by necessity; how would we produce a 'c without both an 'a and a 'b?).
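In Rust terms, this None-preserving lift over two differently-typed Options can be sketched with and_then + map (equivalent to the SML map + join above); lift2 is my own name for the helper, not anything in std:

```rust
// None-preserving lift of a binary function over Option: the result is
// Some only if BOTH inputs are Some (matching the SML idiom above).
fn lift2<A, B, C>(oa: Option<A>, ob: Option<B>, f: impl FnOnce(A, B) -> C) -> Option<C> {
    oa.and_then(|a| ob.map(|b| f(a, b)))
}

fn main() {
    // Both present: the combining function runs.
    assert_eq!(
        lift2(Some(2usize), Some("x"), |n, s: &str| s.repeat(n)),
        Some("xx".to_string())
    );

    // Either None makes the result None, by necessity: we cannot
    // produce a C without both an A and a B.
    assert_eq!(lift2(None::<usize>, Some("x"), |n, s: &str| s.repeat(n)), None);
}
```

On stable Rust, Option::zip gives the same shape: oa.zip(ob).map(|(a, b)| f(a, b)).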

An anecdotal data point from my own experience: in statistical/machine learning applications, this tends to be annoying in practice when aggregating over values of a variable. Usually, NaN is the marker for missing data (i.e. it would be the equivalent of None), and when computing the min/max/average/whatever, it's usually best to ignore missing data (instead of filling it with arbitrary values such as 0).

Interestingly, there's no consensus on this even within the SciPy ecosystem. NumPy follows the same "monadic" convention, and thus np.max([1, 3, np.nan]) is NaN, while Pandas opts to implicitly filter out NaNs, resulting in pd.Series([1, 3, np.nan]).max() == 3. While the latter is what I want the overwhelming majority of the time, an argument can be made for both. (And Rust may well want to stick to its own current interpretation to prevent bad surprises, but the point is it's not always the best choice.)


That may well be, but I still think it's weird that min worked so differently from everything else. After all, you included "average" in your list, but + doesn't just silently ignore NaNs.

Reducing even just the sum needs to handle it, so I think it's perfectly reasonable for reducing the min/max to need to handle it too.

In the computation of the average, however, it's not only +. Ignoring a value in computing the average means that one also must not increment the counter that will become the denominator. Thus, it's not really about how binary + works – conceptually, the computation of an average (with this notion) is more than just adding. In this specific case, I would value practicality over uniformity, but as I said, I understand that there is an argument to be made for the converse as well.
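A sketch of that point in Rust (mean_ignoring_missing is my own illustrative helper): skipping a missing value has to skip both the sum and the count, so the aggregation is genuinely more than a fold over binary +:

```rust
// Average ignoring missing values: a skipped None must be excluded from
// both the numerator (sum) and the denominator (count).
fn mean_ignoring_missing(values: &[Option<f64>]) -> Option<f64> {
    let present: Vec<f64> = values.iter().copied().flatten().collect();
    if present.is_empty() {
        None
    } else {
        Some(present.iter().sum::<f64>() / present.len() as f64)
    }
}

fn main() {
    // (1.0 + 3.0) / 2 = 2.0; the None is ignored entirely.
    assert_eq!(mean_ignoring_missing(&[Some(1.0), None, Some(3.0)]), Some(2.0));

    // All missing: there is nothing to average.
    assert_eq!(mean_ignoring_missing(&[None, None]), None);

    // Filling missing with 0.0 instead would wrongly give (1 + 0 + 3) / 3.
}
```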

I still think that thinking about this in terms of monoids is what makes the most sense. If you are performing a binary operation where both of the values are lifted (by Option in this case), then the returned value should also be lifted, since the identity element is None.

If the caller wants a different behaviour, then a different identity element should be provided for the operation, and for that we already have methods such as .reduce.
