Optimizing the Hash Function for a Simple Enum

I want to optimize the hash function for this enum:

#[derive(Debug, Clone)]
pub enum ComponentId {
    RustTypeId(std::any::TypeId),
    ExternalId(u64),
}

Here's my first attempt:

use std::hash::{Hash, Hasher};

impl Hash for ComponentId {
    fn hash<H: Hasher>(&self, state: &mut H) {
        match self {
            ComponentId::RustTypeId(id) => {
                id.hash(state);
                // Write a byte to distinguish the variants
                state.write_u8(0);
            }
            ComponentId::ExternalId(id) => {
                state.write_u64(*id);
                // Write a byte to distinguish the variants
                state.write_u8(1);
            }
        }
    }
}

It seems a little faster than just deriving Hash, according to benchmarking it with criterion:

hash rust id            time:   [7.2598 ns 7.3019 ns 7.3501 ns]                          
                        change: [-7.3700% -6.4855% -5.5781%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  4 (4.00%) high mild
  4 (4.00%) high severe

hash external id        time:   [7.2684 ns 7.2878 ns 7.3092 ns]                              
                        change: [-9.0916% -8.1682% -7.3739%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  3 (3.00%) high mild
  3 (3.00%) high severe

With this benchmark code:

use criterion::{black_box, Criterion};
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn criterion_benchmark(c: &mut Criterion) {
    let rust_id = hashbench::ComponentId::RustTypeId(std::any::TypeId::of::<u32>());
    let external_id = hashbench::ComponentId::ExternalId(63);

    c.bench_function("hash rust id", |b| {
        b.iter(|| {
            let mut h = DefaultHasher::default();
            black_box(&rust_id).hash(&mut h);
            h.finish();
        })
    });

    c.bench_function("hash external id", |b| {
        b.iter(|| {
            let mut h = DefaultHasher::default();
            black_box(&external_id).hash(&mut h);
            h.finish();
        })
    });
}

Is there a more efficient way to implement this hash function?

The requirements of Hash state that if two values are Eq they must produce the same hash, but unequal values are allowed to collide and have the same hash. That means this is a valid implementation as well:

impl Hash for ComponentId {
    fn hash<H: Hasher>(&self, state: &mut H) { }
}

Because it’s a no-op, it will be much faster at calculating hashes. It’s a terrible idea, though, because anything using it will need to fall back to Eq instead.

Though this is an extreme example, it points to both a flaw in your testing strategy and a potential avenue for performance gains: some fields can be ignored entirely if they’re unlikely to prevent a collision, but you need to track how the different hashing strategies affect the algorithms you’re trying to use.

In my case the ComponentId is used as a unique key in many hashmaps inside an ECS. Since the use case here is almost exclusively (I'm pretty sure) lookups in HashMaps, I imagine the goal is to always produce unique hashes so we never have to fall back on Eq.

It may be that you can skip putting the discriminant in the hash, though. TypeId is already a hash of sorts, so it’s unlikely to conflict with any of the external ids.
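A version without the discriminant byte might look like the sketch below (the `hash_of` helper is only for illustration, not part of the original code):

```rust
use std::any::TypeId;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ComponentId {
    RustTypeId(TypeId),
    ExternalId(u64),
}

impl Hash for ComponentId {
    fn hash<H: Hasher>(&self, state: &mut H) {
        match self {
            // No discriminant byte: we rely on TypeId values being
            // unlikely to coincide with application-chosen external ids.
            ComponentId::RustTypeId(id) => id.hash(state),
            ComponentId::ExternalId(id) => state.write_u64(*id),
        }
    }
}

// Illustration-only helper: hash a ComponentId with the default hasher.
fn hash_of(id: &ComponentId) -> u64 {
    let mut h = DefaultHasher::default();
    id.hash(&mut h);
    h.finish()
}

fn main() {
    // Equal values must produce equal hashes.
    assert_eq!(
        hash_of(&ComponentId::ExternalId(63)),
        hash_of(&ComponentId::ExternalId(63)),
    );
}
```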

Hmm, the TypeId is just a wrapper around a u64, though (src):

pub struct TypeId {
    t: u64,
}

So its hash function would just boil down to state.write_u64(t), which is essentially the same as the hash implementation for ComponentId::ExternalId, which also just writes a u64.

I guess that just because u64::MAX is so large, the collision chance would be slim. The question, then, is whether the collision chance is low enough that the performance gained by skipping the hash of one byte is worth it. That hash is probably one of the hottest parts of the code, so it does seem worth squeezing any little bit of extra performance out of it.
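For a rough sense of that risk, here is a birthday-bound estimate. It assumes the ids are roughly uniformly distributed over 64 bits, which is an assumption, not a guarantee (TypeIds and hand-picked external ids are not actually random):

```rust
/// Birthday-bound approximation: with n roughly uniform 64-bit ids,
/// the chance of at least one collision is about n * (n - 1) / 2^65.
fn approx_collision_prob(n: f64) -> f64 {
    n * (n - 1.0) / 2f64.powi(65)
}

fn main() {
    // Even a million ids collide with probability on the order of 1e-8.
    println!("{:e}", approx_collision_prob(1_000_000.0));
}
```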

Skipping the byte does speed it up a lot more than I thought it would so that seems worth it:

hash rust id            time:   [5.1949 ns 5.2009 ns 5.2076 ns]                          
                        change: [-25.384% -24.028% -22.858%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 11 outliers among 100 measurements (11.00%)
  1 (1.00%) low severe
  1 (1.00%) low mild
  2 (2.00%) high mild
  7 (7.00%) high severe

hash external id        time:   [4.4540 ns 4.4599 ns 4.4667 ns]                              
                        change: [-38.417% -37.437% -36.456%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 10 outliers among 100 measurements (10.00%)
  1 (1.00%) low mild
  1 (1.00%) high mild
  8 (8.00%) high severe

Next, it's probably not worth it, but I wonder if there is a way to use a union or an unsafe cast or something similar to access the internal u64 without doing a match, to avoid the branching.

I suspect the optimizer is already doing this after you removed the extra byte; it's pretty good at unifying branches that are equivalent.

Use a different Hasher than the default provided by Rust's standard library, unless you need the additional security benefits.
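As one illustration of that idea: when keys already hash by writing a single u64 (as ComponentId does once the discriminant byte is dropped), a pass-through hasher built on std alone is an option; crates like ahash or rustc-hash are the usual off-the-shelf alternatives. The `PassThroughHasher` name below is made up for this sketch:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

/// Pass-through hasher: uses the last u64 written as the hash itself.
/// Only suitable for keys that hash via a single write_u64 call, and
/// not DoS-resistant, which is often acceptable for internal ECS maps.
#[derive(Default)]
struct PassThroughHasher(u64);

impl Hasher for PassThroughHasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, _bytes: &[u8]) {
        unimplemented!("only write_u64 is supported");
    }
    fn write_u64(&mut self, n: u64) {
        self.0 = n;
    }
}

fn main() {
    // u64's Hash impl calls write_u64, so it works as a key here.
    let mut map: HashMap<u64, &str, BuildHasherDefault<PassThroughHasher>> =
        HashMap::default();
    map.insert(42, "position");
    assert_eq!(map.get(&42), Some(&"position"));
}
```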

Oh, sweet. That could explain the greater performance savings, too, maybe.

Yeah, not worth stepping into unsafe just to do what the compiler is doing for me.


Hey, and removing the discriminant brings down the CPU instruction counts for my benchmarks in my PR:


It appears that the ECS is currently using ahash:

It looks like ahash is still designed to be DoS-resistant, so maybe rustc-hash is even faster?

Edit: Ah, from the ahash docs it looks like ahash is a DoS-resistant alternative to rustc-hash.

If you can work with 63 bits instead of 64, this topic might be of interest to you:

In the past, I also had to deal with an enum for which I wanted to remove the space overhead of the tag.

The provided solution lets you avoid the branch for hashing purposes, but costs more time to create and read the value, because it has to transform the space-efficient enum into a proper one and back.

P.S.: It doesn't map 1:1 to what you need, but it should be close enough to guide you to an individual solution.
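A minimal sketch of that general idea, packing the variant tag into the top bit of a u64 so hashing and comparison are a single branch-free u64 operation (the names here are hypothetical, not from the linked topic):

```rust
/// Packed id: the top bit stores the variant tag and the low 63 bits
/// store the payload, so the derived Hash is one write_u64 with no
/// branch. Constructing/inspecting it costs a little extra bit work.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct PackedId(u64);

const TAG_BIT: u64 = 1 << 63;

impl PackedId {
    pub fn rust_type(raw: u64) -> Self {
        debug_assert!(raw & TAG_BIT == 0, "payload must fit in 63 bits");
        PackedId(raw | TAG_BIT)
    }
    pub fn external(raw: u64) -> Self {
        debug_assert!(raw & TAG_BIT == 0, "payload must fit in 63 bits");
        PackedId(raw)
    }
    pub fn is_rust_type(self) -> bool {
        self.0 & TAG_BIT != 0
    }
    pub fn payload(self) -> u64 {
        self.0 & !TAG_BIT
    }
}

fn main() {
    // Same 63-bit payload, different tags: distinct ids.
    assert_ne!(PackedId::external(63), PackedId::rust_type(63));
    assert_eq!(PackedId::rust_type(63).payload(), 63);
}
```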

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.