```rust
trait A {
    fn do_a(&self);
}

trait B {
    fn do_b(&self);
}

trait AB: A + B {}
impl<T: A + B> AB for T {}

struct Foo;
impl A for Foo { ... }
impl B for Foo { ... }

struct Bar(Box<dyn AB>);
impl A for Bar { ... }
impl B for Bar { ... }
```
I have code like the first example, but I want to refactor it into the second, because I prefer the code grouped by variants rather than by operations.
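For concreteness, here is one way the elided `impl` blocks for `Bar` could be filled in, simply forwarding each call to the boxed trait object (this is only a sketch of what the `{ ... }` bodies might contain, not the actual code):

```rust
impl A for Bar {
    // Assumed body: forward to the inner Box<dyn AB>, which has A as a supertrait.
    fn do_a(&self) {
        self.0.do_a()
    }
}

impl B for Bar {
    // Assumed body: forward to the inner Box<dyn AB>, which has B as a supertrait.
    fn do_b(&self) {
        self.0.do_b()
    }
}
```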
Every Box represents a new heap allocation, and calling a method through a trait object requires a lookup from a virtual function table. These extra operations aren't usually too expensive in themselves, but they severely limit the optimizations that can be performed, because they hide a lot of the details from the optimizer.
In particular, it is almost never possible to inline a virtual method call, which has the potential to introduce unacceptable overhead in tight loops and other performance-critical code.
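To illustrate the difference, here is a small self-contained sketch (the `Shape`/`Circle` names are made up for illustration, not taken from the code above) contrasting a monomorphized, statically dispatched call that the optimizer can inline with a vtable call through `Box<dyn ...>`:

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle {
    r: f64,
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.r * self.r
    }
}

// Static dispatch: this function is monomorphized for each concrete `T`,
// so the call to `area` is a direct call the optimizer can usually inline.
fn total_area_static<T: Shape>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// Dynamic dispatch: each element sits behind its own heap allocation and
// every `area` call goes through the vtable, which is rarely inlined.
fn total_area_dynamic(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let concrete = vec![Circle { r: 1.0 }, Circle { r: 2.0 }];
    let boxed: Vec<Box<dyn Shape>> =
        vec![Box::new(Circle { r: 1.0 }), Box::new(Circle { r: 2.0 })];
    println!("{} {}", total_area_static(&concrete), total_area_dynamic(&boxed));
}
```

Both functions compute the same result; the difference is only in how the `area` call is dispatched, which is what determines whether the optimizer can see through it.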
To be clear, while @2e71828 is correct in pointing out that virtual dispatch hampers optimisations, you will only really notice this in practice if you are making trait method calls super frequently... as in, tens of thousands of times per second.
C++ and Rust developers tend to give static vs dynamic dispatch a disproportionate amount of air time, but in 99% of the code you write, the overhead from virtual dispatch will be dwarfed by bad algorithms, IO, and unnecessary copying of bulk data.