I have a case where I need to specialize a conversion trait so that byte sequences are handled specially.
For the sake of this example, everything except byte sequences can just be converted to a string. (In reality this is an experiment for PyO3, and the output is Python objects.)
The conversion trait can look something like this:
#[derive(Debug)]
enum AnyOrBytes {
    Any(String),
    Bytes(Vec<u8>),
}

trait MyConversion: Sized {
    fn convert(self) -> AnyOrBytes;
}
This works fine for the base case and can be implemented for whatever types are desired. However, we have a generic implementation for Vec<T> which we want to specialize for Vec<u8> (and the same for &[T] / &[u8]).
We have a solution which compiles by adding a defaulted method convert_sequence to the MyConversion trait, and letting the u8 implementation override it:
trait MyConversion: Sized {
    fn convert(self) -> AnyOrBytes;

    fn convert_sequence<S>(seq: S) -> AnyOrBytes
    where
        S: IntoIterator<Item = Self>,
    {
        AnyOrBytes::Any(format!(
            "{:?}",
            seq.into_iter().map(Self::convert).collect::<Vec<_>>()
        ))
    }
}

// sequence implementations for u8 produce byte vecs
impl MyConversion for u8 {
    fn convert(self) -> AnyOrBytes {
        AnyOrBytes::Any(self.to_string())
    }

    fn convert_sequence<S>(seq: S) -> AnyOrBytes
    where
        S: IntoIterator<Item = Self>,
    {
        AnyOrBytes::Bytes(seq.into_iter().collect())
    }
}
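The generic sequence implementations then just forward to the element type's convert_sequence, so the u8 override kicks in for Vec<u8> and &[u8]. Roughly (the slice version here assumes Copy elements, which is my simplification):

impl<T: MyConversion> MyConversion for Vec<T> {
    fn convert(self) -> AnyOrBytes {
        // Dispatch to the element type's (possibly overridden) sequence conversion.
        T::convert_sequence(self)
    }
}

impl<'a, T> MyConversion for &'a [T]
where
    // Copy bound assumed here for simplicity; the playground version may differ.
    T: MyConversion + Copy,
{
    fn convert(self) -> AnyOrBytes {
        T::convert_sequence(self.iter().copied())
    }
}

With this, vec![1u8, 2, 3].convert() produces AnyOrBytes::Bytes(vec![1, 2, 3]), while vectors of any other converting element type fall back to the Debug-formatting default.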
This works, but it relies on going through an iterator implementation rather than doing something more efficient. Looking at the Godbolt output, it seems the compiler isn't able to optimise away the iteration. Does anyone have ideas for how this could be improved?
Full playground at: