Speeding up or finding alternative to BTreeSet

I've ported a branch and bound search from Python that uses a lot of set-theoretic operations on FxHashSet<BTreeSet<String>> collections and their BTreeSet<String> members. I'm using BTreeSet because it implements Hash and so can be stored in an FxHashSet (FxHashSet itself, not being ordered, doesn't implement Hash and can't be nested).

However, while FxHashSet gave me a good speedup over the std HashSet, it seems like hashing the BTreeSets has a lot of overhead, and the program is lagging the Python version (which uses sets of frozensets) by around 33% for the same number of search steps (compiled in release mode with codegen-units = 1; profiling shows the program spends most of its time creating BTreeSets). Is it possible to use FNV or FxHash with a BTreeSet? Is there anything else I can try?

Could you redesign your structure to cache hashes?

Also, can you say a little more about the statistics of your structures? How many strings are there? What is the typical cardinality of a set?

Perhaps using a sorted Vec<String> instead of BTreeSet<String> would be appropriate?

I've implemented a HashMap that can be hashed. It makes key order irrelevant by XORing the hashes of the entries.
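The idea in miniature (a sketch using a free function over a HashSet<String> rather than the actual HashMap impl):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Sketch: hash each element separately with a deterministic hasher, then
// XOR the per-element hashes together. XOR is commutative and associative,
// so the HashSet's iteration order doesn't affect the combined hash.
fn set_hash(set: &HashSet<String>) -> u64 {
    set.iter()
        .map(|s| {
            let mut h = DefaultHasher::new();
            s.hash(&mut h);
            h.finish()
        })
        .fold(0, |acc, h| acc ^ h)
}

fn main() {
    let a: HashSet<String> = ["Gin", "Lime", "Soda"].iter().map(|s| s.to_string()).collect();
    let b: HashSet<String> = ["Soda", "Gin", "Lime"].iter().map(|s| s.to_string()).collect();
    // Same elements in any order -> same hash.
    assert_eq!(set_hash(&a), set_hash(&b));
}
```

Since a set has no duplicate elements, the usual XOR pitfall (a value cancelling itself out) doesn't arise here, although two distinct elements with colliding hashes would still cancel.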


@user16251 ~120 strings, cardinality varies between 2 and ~120. I haven't looked too closely at the distribution, it looks uniform-ish at first glance.

@alice I don't think I can use a Vec<String> for the members because I need the set-theoretic ops that BTreeSet provides.

@kornel Hmm. Interesting.

In the interest of transparency, here's my current impl. It is…not very well optimised, but then neither is the original.

/// What is the set of n ingredients that will allow you to make the highest number of different cocktails?
/// E.g.:
/// You have 117 different ingredients
/// You have 104 cocktails, each of which use 2-6 ingredients
/// Which 5 ingredients maximize the cocktail-making possibilities? What about 10 ingredients?
/// Here's a branch and bound solution
/// IngredientSet looks like:
/// `{"Lemon", "Gin", "Coffee liqueur", "Vodka", "Dry vermouth", "Olive", "Lillet blanc"}`
/// `{"Gin", "Triple sec", "Cognac", "White rum", "Champagne", "Simple syrup", "Lemon juice"}`
/// `{"Lemon", "Coffee liqueur", "Dry vermouth", "Vodka", "Olive", "Cream", "Gin"}`
use rand::rngs::ThreadRng;
use rand::seq::IteratorRandom;
use rustc_hash::FxHashSet;
use std::collections::BTreeSet;

pub type Ingredient = String;
pub type IngredientSet = BTreeSet<Ingredient>;

pub struct BranchBound {
    pub calls: i32,
    pub max_size: usize,
    pub highest_score: usize,
    pub highest: FxHashSet<IngredientSet>,
    pub random: ThreadRng,
    pub counter: u32,
}

impl BranchBound {
    pub fn new(max_calls: i32, max_size: usize) -> BranchBound {
        BranchBound {
            calls: max_calls,
            max_size,
            highest_score: 0usize,
            highest: FxHashSet::default(),
            random: rand::thread_rng(),
            counter: 0,
        }
    }

    pub fn search(
        &mut self,
        candidates: &mut FxHashSet<IngredientSet>,
        partial: &mut FxHashSet<IngredientSet>,
    ) -> FxHashSet<IngredientSet> {
        self.counter += 1;
        if self.calls <= 0 {
            println!("{:?}", "Early return!");
            return self.highest.clone();
        }
        self.calls -= 1;
        let score = partial.len();
        if score > self.highest_score {
            self.highest = partial.clone();
            self.highest_score = score;
        }

        // what cocktails could be added without blowing our ingredient budget?
        let partial_ingredients = partial
            .iter()
            .flatten()
            .cloned()
            .collect::<IngredientSet>();

        // if adding all the associated ingredients of the candidates
        // takes us over the ingredient budget, then not all the
        // candidates can feasibly be added to our partial
        // solution. So, if there will be excess ingredients we'll
        // reduce the upper bound of how many cocktails we might be
        // able to cover (possible_increment)
        candidates.retain(|cocktail| (cocktail | &partial_ingredients).len() <= self.max_size);

        let mut possible_increment = candidates.len();

        let candidate_ingredients = candidates
            .iter()
            .flatten()
            .cloned()
            .collect::<IngredientSet>();
        let mut excess_ingredients =
            (&candidate_ingredients | &partial_ingredients).len() as i32 - self.max_size as i32;

        // best case is that excess ingredients are concentrated in
        // some cocktails. If we're in this best case, removing
        // the cocktails that add the most new ingredients
        // brings us back under the ingredient budget
        // note that we're just updating the bound; it could actually
        // be that we want to add one of these cocktails that add
        // a lot of ingredients
        if excess_ingredients > 0 {
            let mut ingredient_increases = candidates
                .iter()
                .map(|cocktail| (cocktail - &partial_ingredients).len() as i32)
                .collect::<Vec<i32>>();
            ingredient_increases.sort_by(|a, b| b.cmp(a));
            for increase in ingredient_increases {
                possible_increment -= 1;
                excess_ingredients -= increase;
                if excess_ingredients <= 0 {
                    break;
                }
            }
        }

        let threshold = self.highest_score - score;

        if !candidates.is_empty() && possible_increment > threshold {
            // random choice seems to be the best heuristic according to the original author
            let best = candidates.iter().choose(&mut self.random).unwrap().clone();

            let new_partial_ingredients = &partial_ingredients | &best;
            let covered_candidates = candidates
                .iter()
                .filter(|cocktail| cocktail.is_subset(&new_partial_ingredients))
                .cloned()
                .collect::<FxHashSet<IngredientSet>>();

            self.search(
                &mut (&*candidates - &covered_candidates),
                &mut (&*partial | &covered_candidates),
            );

            // if a cocktail is not part of the optimum set,
            // the optimum set cannot have the cocktail as a subset
            candidates.retain(|cocktail| !best.is_subset(&(cocktail | &partial_ingredients)));
            self.search(candidates, partial);
        }
        self.highest.clone()
    }
}

You could implement them yourself with binary search.
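Something like this, say — a rough sketch on a sorted, deduplicated Vec<String>, with illustrative function names:

```rust
// Sketch: on a sorted, deduplicated Vec<String>, a subset test is a series
// of binary searches, and union can be a merge (done here the simple way
// with sort + dedup). Function names are illustrative.
fn is_subset(small: &[String], big: &[String]) -> bool {
    // `big` must be sorted for binary_search to be meaningful
    small.iter().all(|x| big.binary_search(x).is_ok())
}

fn union(a: &[String], b: &[String]) -> Vec<String> {
    let mut out: Vec<String> = a.iter().chain(b.iter()).cloned().collect();
    out.sort();
    out.dedup();
    out
}

fn main() {
    let gin_lime = vec!["Gin".to_string(), "Lime".to_string()];
    let gimlet = vec!["Gin".to_string(), "Lime".to_string(), "Soda".to_string()];
    assert!(is_subset(&gin_lime, &gimlet));
    assert!(!is_subset(&gimlet, &gin_lime));
    assert_eq!(union(&gin_lime, &gimlet).len(), 3);
}
```

A linear two-pointer merge would beat sort + dedup for the union, but even this naive version avoids per-node allocation and pointer chasing.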


Are there 120 strings in total? I'd represent the sets with bitsets instead. You can just use two u64s in that case rather than a BTreeSet. Most set operations can just be done with integer arithmetic then and there is much less overhead. Of course this is much less general.


(Or one u128.)


I may be misunderstanding you. The outer set (FxHashSet) contains groups of ingredients (BTreeSets), each ingredient a String. These inner sets are decomposed and used to form new BTreeSets as the search runs (recursively) – this is what's expensive – and I need the inner Strings again when the search finishes. How are you proposing to go from a container of Strings -> bitsets -> new bitsets -> result (container of Strings)? A HashMap for lookups between String and bitset? Could you sketch it out?

I guess the bitset depends on whether there are 120 unique strings in total among all the sets, or whether the sets have mostly unique strings, but only around 120 in each.

I'm not sure I fully understand the problem but I'd maintain two values to represent the universe of ingredients: index_of_ingredient: HashMap<String, usize> and ingredient: Vec<String>. You only ever work with strings in input or output. To actually make a singleton set out of a single ingredient you'd write 1u128 << index_of_ingredient[&ingredient]. Union is bitwise or, intersection is bitwise and, complement is bitwise not, cardinality is popcount, etc. This only works if the total number of ingredients is less than 128. But bitsets should be competitive for larger sizes as well; they just become a little harder to work with.
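A rough sketch of that (the two universe values keep the names above; everything else is illustrative):

```rust
use std::collections::HashMap;

// Sketch of the bitset encoding: map each ingredient to a bit index once,
// then every ingredient set becomes a u128 and all set ops are integer ops.
struct Universe {
    index_of_ingredient: HashMap<String, usize>,
    ingredient: Vec<String>,
}

impl Universe {
    fn new(names: &[&str]) -> Self {
        let ingredient: Vec<String> = names.iter().map(|s| s.to_string()).collect();
        let index_of_ingredient = ingredient
            .iter()
            .enumerate()
            .map(|(i, s)| (s.clone(), i))
            .collect();
        Universe { index_of_ingredient, ingredient }
    }

    // Strings -> bitset, only at the input boundary.
    fn encode<'a>(&self, names: impl IntoIterator<Item = &'a str>) -> u128 {
        names
            .into_iter()
            .fold(0u128, |acc, n| acc | (1u128 << self.index_of_ingredient[n]))
    }

    // bitset -> Strings, only at the output boundary.
    fn decode(&self, set: u128) -> Vec<String> {
        (0..self.ingredient.len())
            .filter(|i| set & (1u128 << i) != 0)
            .map(|i| self.ingredient[i].clone())
            .collect()
    }
}

fn main() {
    let u = Universe::new(&["Gin", "Lemon", "Vodka", "Olive"]);
    let a = u.encode(["Gin", "Lemon"]);
    let b = u.encode(["Lemon", "Vodka"]);
    let both = a | b;                    // union is bitwise OR
    assert_eq!((a & b).count_ones(), 1); // intersection is AND; cardinality is popcount
    assert!(a & both == a);              // subset test: a ⊆ both
    assert_eq!(u.decode(both), ["Gin", "Lemon", "Vodka"]);
}
```

The whole search would run on u128s; Strings only appear when reading input and printing the final answer.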


It's the former: there are 117 unique ingredients, which can be combined to form 104 unique cocktails. For the purposes of this problem, I think all the complexity and computational overhead lies in the way the branch and bound search can re-combine the ingredients into sets.

Gotcha – thanks for the explanation. As I mentioned in my previous reply, I think all the computational complexity lies in the fact that ingredients can be re-combined into sets as the search runs, and it's this re-combination – which currently involves forming a BTreeSet – that is suffering from the relatively slow speed of hashing compared to the Python frozenset hashing algorithm.

But in this case (fixed universe of cardinality ≤ 128), what's wrong with the bitset suggestion? It is trivial, even when properly designed using a newtype.


I don't think it's wrong! But your playground example isn't quite right – again, unless I'm misunderstanding. An IngredientSet currently contains e.g. {"Whiskey", "Coke", "Lime"}, but your version seems to contain a single ingredient, going by the example.

The example includes a whiskey coke :tumbler_glass:


My playground example unions two singleton sets of {Whiskey} and {Coke} to yield a {Whiskey, Coke}.


Sorry, I apparently can't read today.
