Possible Bug in binary_search_by


I think I’ve found a bug in binary_search_by for vectors: https://play.rust-lang.org/?gist=3ef112eb0a93e5cec0303c208415bed1&version=stable

Pretty simple code which should return that it found the value on the first inspection for both instances:

    let target = 3;
    println!("RET: {:?}", vec![1, 2, 3, 4].binary_search_by(|a| {
        println!("Inspecting: {}", *a);
        a.cmp(&target)
    }));

    println!("RET: {:?}", vec![1, 2, 3, 4, 5].binary_search_by(|a| {
        println!("Inspecting: {}", *a);
        a.cmp(&target)
    }));

However, it’s easy to see that there are a lot more inspections than are actually needed:

    Inspecting: 3
    Inspecting: 4
    Inspecting: 3
    RET: Ok(2)
    Inspecting: 3
    Inspecting: 4
    Inspecting: 4
    Inspecting: 3
    RET: Ok(2)

What am I missing… or is this actually a bug?


The code in question is in the implementation of SliceExt for arrays. It looks like this:

    fn binary_search_by<'a, F>(&'a self, mut f: F) -> Result<usize, usize>
        where F: FnMut(&'a T) -> Ordering
    {
        let s = self;
        let mut size = s.len();
        if size == 0 {
            return Err(0);
        }
        let mut base = 0usize;
        while size > 1 {
            let half = size / 2;
            let mid = base + half;
            // mid is always in [0, size), that means mid is >= 0 and < size.
            // mid >= 0: by definition
            // mid < size: mid = size / 2 + size / 4 + size / 8 ...
            let cmp = f(unsafe { s.get_unchecked(mid) });
            base = if cmp == Greater { base } else { mid };
            size -= half;
        }
        // base is always in [0, size) because base <= mid.
        let cmp = f(unsafe { s.get_unchecked(base) });
        if cmp == Equal { Ok(base) } else { Err(base + (cmp == Less) as usize) }
    }

What you’re seeing is the while size > 1 loop – it keeps iterating all log(n) times, even if it finds a match right away. I don’t think this is a bug – the correct result is still computed.

If it’s not a bug, then why does the code look like this? It is likely an optimization: by avoiding an extra if cmp == Equal branch inside the loop, the loop has fewer instructions and fewer branches. The branch where you find a match is very rare – for a typical large array, almost all comparisons will not be a match, so it makes sense not to check for one on every iteration. Of course, this assumes that f is very cheap to compute.

That’s my best guess as to why the code looks the way it does. If this is indeed the reason behind it, it would be nice to have a comment explaining this so others don’t try to add an early return later.
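For contrast, here is a textbook early-exit binary search – a hypothetical sketch for illustration, not what std ships – which checks for Equal on every probe and returns immediately on a match, at the cost of an extra branch per iteration:

```rust
use std::cmp::Ordering;

// Hypothetical early-exit variant: returns as soon as f reports Equal,
// paying for an extra branch on every iteration of the loop.
fn binary_search_early_exit<T, F>(s: &[T], mut f: F) -> Result<usize, usize>
where
    F: FnMut(&T) -> Ordering,
{
    let mut lo = 0usize;
    let mut hi = s.len();
    while lo < hi {
        let mid = lo + (hi - lo) / 2;
        match f(&s[mid]) {
            Ordering::Equal => return Ok(mid), // early return on a match
            Ordering::Less => lo = mid + 1,    // target is in the upper half
            Ordering::Greater => hi = mid,     // target is in the lower half
        }
    }
    Err(lo)
}

fn main() {
    let mut calls = 0;
    let r = binary_search_early_exit(&[1, 2, 3, 4], |a| {
        calls += 1;
        a.cmp(&3)
    });
    // With this variant the first probe (mid = 2) already hits the target.
    assert_eq!(r, Ok(2));
    assert_eq!(calls, 1);
}
```

With a cheap comparator the per-iteration branch can make this slower than the branchless loop above, which is presumably exactly the trade-off std made.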


This is the PR implementing the current algorithm, with some benchmark comparisons in the PR description: https://github.com/rust-lang/rust/pull/45333.


Very interesting, and unfortunate. My comparison function actually has to read from disk, and so using this “branchless” algorithm generates unnecessary work. I almost wonder if the binary_search_by function would be better served using the “old” algorithm that checks for Equal?


binary_search_by accepts an FnMut. Could you add some form of memoization to your comparator?

Vague idea:

    let mut cache = HashMap::new();
    slice.binary_search_by(move |elem| {
        *cache.entry(elem).or_insert_with(|| {
            // expensive io goes here, returning an Ordering
            expensive_cmp(elem)
        })
    })


@ExpHP yes, I certainly can do something like that… but some of these values aren’t super small, so storing them in memory is an unneeded drag. I just think it’s prematurely optimizing for nanoseconds when the comparator passed to binary_search_by more often than not takes milliseconds.


eh, pretty much all of my usage for binary_search_by is on floats. (which don’t impl Ord)
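For what it’s worth, the usual trick there is to supply the ordering via partial_cmp, unwrapping the Option (which panics if a NaN sneaks in):

```rust
fn main() {
    let xs = [0.5_f64, 1.25, 2.0, 3.75];
    // f64 only implements PartialOrd, so provide the Ordering manually;
    // unwrap() panics on NaN, which has no defined ordering.
    let found = xs.binary_search_by(|a| a.partial_cmp(&2.0).unwrap());
    assert_eq!(found, Ok(2));
    // A missing value reports the insertion point instead.
    let missing = xs.binary_search_by(|a| a.partial_cmp(&1.5).unwrap());
    assert_eq!(missing, Err(2));
}
```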


binary_search_by is not really designed for comparators that do IO - the comparison is assumed to be infallible.


I’m a bit surprised that std baked the assumption of a cheap cmp into a “general purpose” method like that. It will undoubtedly catch more people by surprise.


Did the previous implementation “bake in” the assumption of an expensive cmp? Any implementation is going to have tradeoffs but I think it’s pretty reasonable to lean in the direction of optimizing for cheap comparators.


The previous implementation baked in what people have learned to expect from binary search implementations in other languages and libraries. I realize that all of these implementation choices are within spec (or rather, unspecified) and therefore technically allowed. So this becomes a “quality of implementation” issue, even if the quality is better/optimal for some cases. If someone wants to optimize their binary search, they can employ these tricks themselves, use a third-party crate, or std could have dedicated functions for this sort of thing (though I realize that causes an API explosion).

My $.02


Sort recently got a version explicitly for “my key computation is expensive”:

The discussion there hits a bunch of things seen here, like having the normal version be the one that works best for cheap things, with a new one for callers that want a different trade-off.

So maybe there’s space for another version of binary_search along the same lines.
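(Presumably this refers to sort_by_cached_key, which computes each key once per element and caches it rather than recomputing it on every comparison. A quick illustration:)

```rust
fn main() {
    let mut words = vec!["hello", "hi", "greetings", "hey"];
    // The key closure runs once per element, not once per comparison,
    // which is the right trade-off when the key is expensive to compute.
    words.sort_by_cached_key(|w| w.len());
    assert_eq!(words, vec!["hi", "hey", "hello", "greetings"]);
}
```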


@scottmcm, oh interesting! Do you think folks would consider having a binary_search_by_cached method added to SliceExt? I’m not sure I’m “the guy” to write a PR as I’m still fairly new to Rust.


I’d be more surprised that people have expensive comparisons to be honest. I’d usually expect cmp to compare two objects in memory and the operation shouldn’t be any more expensive than a linear scan through the data. Doing IO in a comparison feels kinda odd, especially considering you’d be unwrap()-ing any encountered errors.

It may be easier to add a Cached<T> wrapper type and put it in a crate instead of adding another binary_search_* method to SliceExt. That way people can still use it if they want to, without adding any maintenance load to the core developers.
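A rough sketch of that idea – the Cached type and names here are hypothetical, using std::cell::OnceCell so the expensive computation runs at most once per element:

```rust
use std::cell::OnceCell;

// Hypothetical wrapper: pairs a value with a lazily computed key,
// so an expensive key function runs at most once per element.
struct Cached<T, K> {
    value: T,
    key: OnceCell<K>,
}

impl<T, K> Cached<T, K> {
    fn new(value: T) -> Self {
        Cached { value, key: OnceCell::new() }
    }

    // Returns the cached key, computing it on first access only.
    fn key(&self, f: impl FnOnce(&T) -> K) -> &K {
        self.key.get_or_init(|| f(&self.value))
    }
}

fn main() {
    let xs: Vec<Cached<i32, i32>> =
        vec![1, 2, 3, 4, 5].into_iter().map(Cached::new).collect();

    let mut io_calls = 0;
    let mut expensive = |v: &i32| {
        io_calls += 1; // stand-in for the expensive disk read
        *v * 10
    };

    let mut cmp_calls = 0;
    let r = xs.binary_search_by(|c| {
        cmp_calls += 1;
        c.key(&mut expensive).cmp(&30)
    });

    assert_eq!(r, Ok(2));
    // The expensive function ran at most once per inspected element,
    // even though the comparator was invoked more often.
    assert!(io_calls <= cmp_calls);
}
```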