How good is the compiler at optimizing away dynamic dispatch that picks exactly the same function each time around a loop?
I have code that injects a function into an inner loop, where that single function will be called repeatedly on items of a huge data set. Exactly what that function does depends on a number of configuration parameters.
The need for new configuration parameters keeps emerging, and the chain (or tree, or web) of logic that combines the parameters into the definition of an appropriate function is becoming ever more complex and involves more steps. Initially I was passing `impl Fn*` down the stack towards the inner loop, but this is leading to a proliferation of generic functions with type parameters whose only purpose is to encode (at compile time) some aspect of the configuration parameter set, all for the sake of preserving static dispatch.
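To make the shape of the problem concrete, here is a minimal sketch (all names are hypothetical, not my actual code) of how each layer ends up generic just to forward the closure:

```rust
// Innermost loop: calls `f` once per item.
fn process<F: Fn(u64) -> u64>(items: &[u64], f: F) -> u64 {
    items.iter().map(|&x| f(x)).sum()
}

// Middle layer whose type parameter exists only to forward `f`;
// every distinct closure type monomorphizes this whole chain.
fn configure_and_run<F: Fn(u64) -> u64>(items: &[u64], f: F) -> u64 {
    process(items, f)
}

fn main() {
    let scale = 10u64;
    let total = configure_and_run(&[1, 2, 3], move |x| x * scale);
    assert_eq!(total, 60);
}
```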
The code would be much simpler if some variety of `dyn Fn*` were passed around instead, but before I start rewriting it all, I want to be at least slightly confident that there won't be a meaningful performance degradation.
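The rewrite I have in mind would look roughly like this (again a hypothetical sketch): every layer takes one concrete, non-generic signature, at the cost of an indirect call per item:

```rust
// Single non-generic signature: no type parameters to thread through,
// but each call to `f` goes through the vtable.
fn process_dyn(items: &[u64], f: &dyn Fn(u64) -> u64) -> u64 {
    items.iter().map(|&x| f(x)).sum()
}

fn main() {
    let scale = 10u64;
    let f: Box<dyn Fn(u64) -> u64> = Box::new(move |x| x * scale);
    let total = process_dyn(&[1, 2, 3], &*f);
    assert_eq!(total, 60);
}
```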
I have two concerns:
1. Some of these functions do trivial amounts of work per iteration, hence my concern that injecting the cost of dynamic dispatch might be significant. On the other hand, in a single run, whatever function runs on the first iteration will also be called on all subsequent iterations, so I hope the compiler can figure this out and not waste time on dispatching over and over again.
2. At the other end of the complexity spectrum, some of these functions are composed of chains or trees of closures calling closures. I fear that turning all of these from statically into dynamically dispatched ones might prevent the compiler from inlining the components that make up the function, or from doing some whole-program optimizations, and thus result in significant performance degradation.
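For the second concern, the composed functions look roughly like this (a simplified stand-in for my real combinators): each layer is boxed, so every hop through the chain is an indirect call that the optimizer presumably cannot see through:

```rust
// Composes two boxed closures; the result calls through two vtables.
// With static dispatch, g(f(x)) could have inlined into a single body.
fn compose(
    f: Box<dyn Fn(u64) -> u64>,
    g: Box<dyn Fn(u64) -> u64>,
) -> Box<dyn Fn(u64) -> u64> {
    Box::new(move |x| g(f(x)))
}

fn main() {
    let add: Box<dyn Fn(u64) -> u64> = Box::new(|x| x + 1);
    let dbl: Box<dyn Fn(u64) -> u64> = Box::new(|x| x * 2);
    let h = compose(add, dbl);
    assert_eq!(h(3), 8); // (3 + 1) * 2
}
```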
What sorts of things should I bear in mind when considering replacing the `impl`s in my code with `dyn`s? Should I consider some other approach (enums, …)?
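By the enum alternative I mean something along these lines (a minimal sketch, with made-up variants standing in for my configuration aspects): encode the choice of behaviour as data and `match` on it per call, so each arm stays a statically known, inlinable function body:

```rust
// Configuration encoded as data rather than as a closure type.
enum Op {
    Scale(u64),
    Offset(u64),
}

impl Op {
    // One predictable branch per call; each arm can be inlined.
    fn apply(&self, x: u64) -> u64 {
        match self {
            Op::Scale(k) => x * k,
            Op::Offset(k) => x + k,
        }
    }
}

fn main() {
    let op = Op::Scale(10);
    let total: u64 = [1u64, 2, 3].iter().map(|&x| op.apply(x)).sum();
    assert_eq!(total, 60);
}
```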