Hi,
I'm wondering whether it makes more sense to create one big enum with all variants at one level, or an outer enum for the categories of the data with "sub-enums" as the values of its variants.
So basically:
```rust
enum C10 { A, B, C, D, E, F, G, H, I, J }
```
vs.
```rust
enum C5x2 {
    A(C5x2A),
    B(C5x2B),
    C(C5x2C),
    D(C5x2D),
    E(C5x2E),
}

enum C5x2A { A, B }
enum C5x2B { A, B }
enum C5x2C { A, B }
enum C5x2D { A, B }
enum C5x2E { A, B }
```
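To make the trade-off concrete: a match on the combined enum has one flat arm per variant, while the split version matches the category first and then the sub-variant. Roughly (arms elided for brevity):

```rust
// Flat dispatch: one match over all ten variants.
fn handle_flat(v: C10) -> u32 {
    match v {
        C10::A => 0,
        C10::B => 1,
        _ => 9, // remaining arms elided for brevity
    }
}

// Nested dispatch: match the category first, then the sub-variant.
fn handle_nested(v: C5x2) -> u32 {
    match v {
        C5x2::A(C5x2A::A) => 0,
        C5x2::A(C5x2A::B) => 1,
        _ => 9, // remaining arms elided for brevity
    }
}
```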
I expected the second approach to be faster, but in my benchmark there is basically no difference:
```
match/combined_10    time: [1.3216 ns 1.3698 ns 1.4248 ns]
Found 7 outliers among 100 measurements (7.00%)
  5 (5.00%) high mild
  2 (2.00%) high severe

match/splitted_5x2   time: [1.3283 ns 1.3436 ns 1.3608 ns]
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild
```
I'm now wondering whether my benchmark code is even correct (you can see it here).
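In essence, it measures something of this shape (a simplified sketch of the idea, not the exact code linked above; the split version is benchmarked the same way with `C5x2` and a nested match):

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

// Uses the C10 definition from above.
fn classify_combined(v: C10) -> u32 {
    match v {
        C10::A => 0,
        C10::B => 1,
        C10::C => 2,
        C10::D => 3,
        C10::E => 4,
        C10::F => 5,
        C10::G => 6,
        C10::H => 7,
        C10::I => 8,
        C10::J => 9,
    }
}

fn bench(c: &mut Criterion) {
    c.bench_function("match/combined_10", |b| {
        // black_box keeps the compiler from constant-folding the match away.
        b.iter(|| classify_combined(black_box(C10::G)))
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
```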
Does a difference perhaps only show up with a higher number of variants? In my actual code I will have more than 150 variants when combined into one enum. Should I create a benchmark with more variants?
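(If it helps, such a many-variant benchmark enum wouldn't have to be written out by hand; for example, the seq-macro crate can generate it:)

```rust
use seq_macro::seq;

// Generates `enum C150 { V0, V1, ..., V149 }` without listing the
// variants manually (requires the seq-macro crate).
seq!(N in 0..150 {
    #[derive(Clone, Copy)]
    enum C150 {
        #(
            V~N,
        )*
    }
});
```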
Finally, I also have to convert a text representation of these variants into the corresponding variant. Can I be sure that the second approach would be faster here (because the compiler can't optimize string matching as well as enum variant matching), or should I create a benchmark for this as well?
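To clarify what I mean by string matching, here is a sketch for the combined enum (the real variant names and strings are different):

```rust
use std::str::FromStr;

impl FromStr for C10 {
    type Err = ();

    // Maps each text representation to its variant by matching on &str.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "a" => Ok(C10::A),
            "b" => Ok(C10::B),
            // ... one arm per variant, ~150 in the real code
            _ => Err(()),
        }
    }
}
```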