BTreeMap::range with keys containing TypeIds

I have a BTreeMap with a key composed of a primary usize index, a TypeId, and a secondary index, ordered like this: (usize, TypeId, usize) -> T. In my program I need to do two kinds of queries on this map: one where I look for all items with a specific primary index, and another where I look for all items with a specific primary index and TypeId.
For the first query, I was thinking about using BTreeMap::range(), but I don't know how to actually specify the range. Intuitively I wanted to do something like range((index, TypeId::MIN, 0)..(index+1, TypeId::MIN, 0)), but there's no TypeId::MIN, and there's no way to turn a TypeId into an integer.
One workaround that I found was to change my keys to (usize, Option<TypeId>, usize) so that (index, None, 0) can be used as a lower bound of sorts, but that's suboptimal as it grows the size of the keys from 32 to 40 bytes.
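
For illustration, that workaround looks roughly like this (just a sketch; the function name is made up):

use std::any::TypeId;
use std::collections::BTreeMap;

// Sketch of the Option workaround: None sorts before every Some(TypeId), so
// (index, None, 0) works as a lower bound for a given primary index.
fn all_with_primary<T>(
    map: &BTreeMap<(usize, Option<TypeId>, usize), T>,
    index: usize,
) -> std::collections::btree_map::Range<'_, (usize, Option<TypeId>, usize), T> {
    map.range((index, None, 0)..(index + 1, None, 0))
}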

Is there a way to do what I want with BTreeMap::range without changing the key type?

It's kind of messy to do, but you can use the "generalizing borrows via dyn Trait" trick to make this work:

use std::ops::Bound;
use std::ops::RangeBounds;
use std::borrow::Borrow;
use std::any::TypeId;
use std::collections::BTreeMap;
use std::cmp::Ordering;

// A key component that can sit below or above every real value, standing in
// for the missing TypeId::MIN / TypeId::MAX endpoints.
#[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd)]
pub enum Scale<T> {
    OffScaleLow,
    Value(T),
    OffScaleHigh,
}

// Both the real key type and the Scale-based bound type are compared through
// this common trait, so a trait object can act as the shared borrowed form.
trait CmpIndex {
    fn cmp_key(&self) -> (Scale<usize>, Scale<TypeId>, Scale<usize>);
}

impl CmpIndex for (usize, TypeId, usize) {
    fn cmp_key(&self) -> (Scale<usize>, Scale<TypeId>, Scale<usize>) {
        (Scale::Value(self.0), Scale::Value(self.1), Scale::Value(self.2))
    }
}

impl CmpIndex for (Scale<usize>, Scale<TypeId>, Scale<usize>) {
    fn cmp_key(&self) -> (Scale<usize>, Scale<TypeId>, Scale<usize>) {
        *self
    }
}

// Comparison of the trait objects delegates to the common cmp_key() form.
impl PartialEq for dyn CmpIndex + '_ {
    fn eq(&self, other: &Self) -> bool {
        self.cmp_key().eq(&other.cmp_key())
    }
}

impl Eq for dyn CmpIndex + '_ {}

impl PartialOrd for dyn CmpIndex + '_ {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        self.cmp_key().partial_cmp(&other.cmp_key())
    }
}

impl Ord for dyn CmpIndex + '_ {
    fn cmp(&self, other: &Self) -> Ordering {
        self.cmp_key().cmp(&other.cmp_key())
    }
}

// The map's key type can be borrowed as the trait object...
impl<'a> Borrow<dyn CmpIndex + 'a> for (usize, TypeId, usize) {
    fn borrow(&self) -> &(dyn CmpIndex + 'a) {
        self
    }
}

// ...and any pair of bounds over CmpIndex types can act as a range of trait
// objects, which is the form BTreeMap::range() accepts.
impl<'a, T: CmpIndex + 'a> RangeBounds<dyn CmpIndex + 'a> for (Bound<T>, Bound<T>) {
    fn start_bound(&self) -> Bound<&(dyn CmpIndex + 'a)> {
        self.0.as_ref().map(|b| b as &dyn CmpIndex)
    }

    fn end_bound(&self) -> Bound<&(dyn CmpIndex + 'a)> {
        self.1.as_ref().map(|b| b as &dyn CmpIndex)
    }
}

// All entries with primary index `i`, regardless of TypeId or secondary index.
pub fn select_primary<T>(
    map: &BTreeMap<(usize, TypeId, usize), T>,
    i: usize,
) -> std::collections::btree_map::Range<'_, (usize, TypeId, usize), T> {
    use Scale::*;

    map.range::<dyn CmpIndex, _>((
        Bound::Included((Value(i), OffScaleLow, OffScaleLow)),
        Bound::Included((Value(i), OffScaleHigh, OffScaleHigh)),
    ))
}

// All entries with primary index `i` and TypeId `ty`, for any secondary index.
pub fn select_primary_and_type<T>(
    map: &BTreeMap<(usize, TypeId, usize), T>,
    i: usize,
    ty: TypeId,
) -> std::collections::btree_map::Range<'_, (usize, TypeId, usize), T> {
    use Scale::*;

    map.range::<dyn CmpIndex, _>((
        Bound::Included((Value(i), Value(ty), OffScaleLow)),
        Bound::Included((Value(i), Value(ty), OffScaleHigh)),
    ))
}
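
Used like this, for example (Foo, Bar, and the map contents here are just made up for illustration):

use std::any::TypeId;
use std::collections::BTreeMap;

struct Foo;
struct Bar;

fn main() {
    let mut map: BTreeMap<(usize, TypeId, usize), &str> = BTreeMap::new();
    map.insert((1, TypeId::of::<Foo>(), 0), "foo #0");
    map.insert((1, TypeId::of::<Foo>(), 1), "foo #1");
    map.insert((1, TypeId::of::<Bar>(), 0), "bar #0");
    map.insert((2, TypeId::of::<Foo>(), 0), "other primary index");

    // Everything under primary index 1.
    for (key, value) in select_primary(&map, 1) {
        println!("{key:?} -> {value}");
    }

    // Only the Foo entries under primary index 1.
    for (key, value) in select_primary_and_type(&map, 1, TypeId::of::<Foo>()) {
        println!("{key:?} -> {value}");
    }
}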

Alternatively, you can turn a TypeId into a u64 with a custom hasher:

use std::any::TypeId;
use std::hash::{Hash, Hasher};

fn type_id_to_int(type_id: TypeId) -> u64 {
    // A "hasher" that just records the first 8 bytes it is fed.
    struct HackHasher {
        len: usize,
        bytes: [u8; 8],
    }
    impl Hasher for HackHasher {
        fn finish(&self) -> u64 {
            u64::from_le_bytes(self.bytes)
        }
        fn write(&mut self, data: &[u8]) {
            // Copy as much of `data` as still fits into the 8-byte buffer.
            let out = &mut self.bytes[self.len.min(8)..];
            let to_write = data.len().min(out.len());
            out[..to_write].copy_from_slice(&data[..to_write]);
            self.len += to_write;
        }
    }
    let mut hasher = HackHasher {
        len: 0,
        bytes: [0; 8],
    };
    type_id.hash(&mut hasher);
    hasher.finish()
}
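
If you then store that u64 in the key in place of the TypeId, plain integer endpoints work as range bounds again. Roughly like this (a sketch, not part of the answer above; type_id_to_int() is the function from above and the other names are made up):

use std::any::TypeId;
use std::collections::BTreeMap;

// All entries with primary index `i`.
fn select_primary_hashed<T>(
    map: &BTreeMap<(usize, u64, usize), T>,
    i: usize,
) -> std::collections::btree_map::Range<'_, (usize, u64, usize), T> {
    map.range((i, 0, 0)..=(i, u64::MAX, usize::MAX))
}

// All entries with primary index `i` and the given TypeId.
fn select_primary_and_type_hashed<T>(
    map: &BTreeMap<(usize, u64, usize), T>,
    i: usize,
    ty: TypeId,
) -> std::collections::btree_map::Range<'_, (usize, u64, usize), T> {
    let ty = type_id_to_int(ty);
    map.range((i, ty, 0)..=(i, ty, usize::MAX))
}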