Efficiency of grabbing value from enumeration

If I have this code:

    enum Test {
        One(u32),
        Two(u32),
        Three(u32),
        Four(u32),
        Five(u32)
    }
    
    impl Test {
        pub fn val(&self) -> u32 {
            match *self {
                Test::One(val) => val,
                Test::Two(val) => val,
                Test::Three(val) => val,
                Test::Four(val) => val,
                Test::Five(val) => val
            }
        }
    }

Does anyone know if that results in a whole mess of conditionals? Or is this pattern detected by an optimizer during compilation?

Or should I do something like:

    enum TestType {
        One,
        Two,
        Three,
        Four,
        Five
    }
    
    struct Test {
        test_type : TestType,
        value : u32
    }

This sounds like premature optimization. You should carefully profile code that is not behaving at the speed you need and find the things that are actually problems instead of worrying about random compiler optimization opportunities.

But, to answer your question, the compiler does detect that pattern. You can look at the disassembly of the function to verify that.

I tend to have a hard time determining if I'm doing a premature optimization. I mean, if I need a FIFO (at least in Java) I know not to use an ArrayList, because while you can remove the first item, the underlying implementation does a memory copy to shift the rest of the array down.

So I am using "insider" knowledge there...but couldn't it be argued that I'm doing a premature optimization too?

Also if I need a Map for an unspecified (possibly in the millions) number of elements, I tend to just use a BTree implementation from the start rather than risking the re-hash of a HashMap. Is that also premature optimization? Or would this just be knowing which library to use when?

Or how about organizing the code so that when I allocate a Vec in Rust I know the expected size (if possible), so I can use with_capacity instead of just new? Would that be considered premature?

Where is the line? How is the line defined?

I would call this one premature, because there are trade-offs in both directions. You're trying to avoid a re-hash, which could be slow, but the benefit of O(1) lookups might dwarf that. Or maybe all of the code for either type of map takes so little time compared to your other work that it really doesn't matter. Until you have a real workload to measure, you can't be sure you're optimizing the right thing.

If you can easily figure out the expected size, then sure, reserve capacity to avoid realloc. But if you have to do contortions to get this, then I wouldn't worry about it until reallocs appear in a performance profile.

I don't think there's a clear line. It's more of a judgement call about the effort required to make those optimizations. Don't waste a lot of time thinking about the performance of a particular area until you know it's a problem. Start by writing code that's good enough to get the job done, then measure to see where your performance intuitions were wrong.

Sometimes even the things you "know" will be wrong. An ArrayList is generally bad for a FIFO, sure, but if that FIFO happens to stay very small it might be fine, or even faster than other options.

However, with a hash map vs. a B-tree, or ArrayList vs. ArrayDeque vs. LinkedList, these decisions don't affect the readability of your code, so how do you decide which to use? For me it's about which I think will perform better for the problem I'm trying to solve.

With my question above, I don't think using an enum vs. a struct with an enum would change the readability much, so I could use either. The only reason to choose one over the other would be a guess at which has a better chance of performing well.