I'm not convinced that your C++ example manages that: it builds a linked list in the inheritance tree, and then each call has to walk through O(N) superclasses before it finds the one that implements the method you're looking for. Assuming you call each of the methods, that brings us back to O(N²).
The other possibility is that the compiler keeps something like a hashmap of all the methods defined on each intermediate class. In that case, it needs to copy and modify the map for each subclass, of which there are O(N). Each map needs capacity for O(N) defined methods, which brings us to O(N²) again.
It's possible to do something similar in Rust with an HList approach, but I don't have time right now to write up an example; maybe later.
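Roughly, the shape I have in mind is the type-indexed HList that crates like frunk implement properly. A minimal sketch (all names here are mine, purely illustrative):

```rust
use std::marker::PhantomData;

// Heterogeneous list: each Cons layer adds one component of its own type.
struct Nil;
struct Cons<H, T>(H, T);

// Type-level indices; they make the two Get impls non-overlapping.
struct Here;
struct There<I>(PhantomData<I>);

// Fetch a component by its type; the Index parameter is inferred.
trait Get<Target, Index> {
    fn get(&self) -> &Target;
}

// Case 1: the head is the target.
impl<H, T> Get<H, Here> for Cons<H, T> {
    fn get(&self) -> &H {
        &self.0
    }
}

// Case 2: the target is somewhere in the tail.
impl<Target, I, H, T> Get<Target, There<I>> for Cons<H, T>
where
    T: Get<Target, I>,
{
    fn get(&self) -> &Target {
        self.1.get()
    }
}

fn main() {
    let list = Cons(1u32, Cons("hello", Cons(3.5f64, Nil)));
    // Only the lookups actually performed get instantiated.
    let n: &u32 = list.get();
    let s: &&str = list.get();
    println!("{n} {s}");
}
```

Note that this only works cleanly when each component type appears once in the list; two components of the same type make the index ambiguous.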
No, that brings us to O(N+M²), where N is the number of potential components that we may want to use and M is the number of components actually used.
That difference is significant, and it's the most important difference between the C++ approach and the Rust approach.
It's not hard to create the “forward path”; @H2CO3 did that here. Problems happen on the “backward path”: how do you ensure that get_foo or get_bar works only if the builder has foo or bar?
One way I can envision is to use the const-based trick from the Reddit thread: implement get_foo and get_bar for all types, but arrange for a panic in the instantiation of get_foo/get_bar if the type doesn't actually have foo or bar.
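Something like this is what I mean (a sketch with my own illustrative names; Builder, foo, bar are not from any real library). The builder tracks, via const generics, whether foo/bar have been set, and get_foo carries a const assertion that is only evaluated when get_foo is actually instantiated for a concrete builder type, producing a post-monomorphization error:

```rust
struct Builder<const HAS_FOO: bool, const HAS_BAR: bool> {
    foo: Option<u32>,
    bar: Option<String>,
}

impl Builder<false, false> {
    fn new() -> Self {
        Builder { foo: None, bar: None }
    }
}

impl<const HAS_FOO: bool, const HAS_BAR: bool> Builder<HAS_FOO, HAS_BAR> {
    // Forward path: each setter flips the corresponding const parameter.
    fn foo(self, foo: u32) -> Builder<true, HAS_BAR> {
        Builder { foo: Some(foo), bar: self.bar }
    }
    fn bar(self, bar: String) -> Builder<HAS_FOO, true> {
        Builder { foo: self.foo, bar: Some(bar) }
    }

    // Backward path: this const is evaluated only when get_foo is
    // instantiated, so misuse fails at instantiation, not at definition.
    const FOO_IS_SET: () = assert!(HAS_FOO, "get_foo called on a builder without foo");

    fn get_foo(&self) -> u32 {
        let () = Self::FOO_IS_SET; // forces the const to be evaluated
        self.foo.unwrap()
    }
}

fn main() {
    let b = Builder::new().foo(42).bar("hi".to_string());
    println!("{}", b.get_foo());
    // Builder::new().get_foo(); // would fail to compile: const panic
}
```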
This should work, I guess, but if that's the only way, then it just reinforces my complaints about problems with generics: the fact that const expressions detect problems during the instantiation phase and not during the definition phase turns them into poor man's templates!
And yeah, poor man's templates may achieve what actual templates can do, only in a somewhat roundabout way… I guess that's a “good enough” solution, but it doesn't show us the advantage of generics at all.
Old compilers do that very inefficiently, while modern compilers employ many tricks to make things more efficient. But that's a moot point: you still have to move all the data O(M²) times.
If we have a bazillion potentially usable services but each consumer needs only a handful of them, then we want O(N+M²) complexity, not O(N²). Something more efficient than O(N+M²) would be even better, but the primary difference is between O(N²) and O(N+M²): templates only need to process the combinations that are actually used, while with generics, if you don't use the poor-man's-templates-via-const trick, you end up with O(N²). Can that be avoided, somehow?