Why does the standard library not define traits for some of the mathematical operations available on f64, f32, i32, ...
I was trying to make a simple generic statistics trait for learning purposes:
use std::ops::{Div, Sub};

pub trait Statistics<Output> {
    fn mean(&self) -> Output;
    fn variance(&self) -> Output;
    fn standard_deviation(&self) -> Output;
}

pub struct Signal<T> {
    samples: Vec<T>,
    sum: T,
    sum_squared: T,
}

impl<T> Statistics<T> for Signal<T>
where
    T: Copy + Sub<Output = T> + Div<i32, Output = T>,
{
    fn mean(&self) -> T {
        self.sum / self.samples.len() as i32
    }

    fn variance(&self) -> T {
        let n = self.samples.len() as i32;
        // powi is an inherent method on the float types, not a trait method,
        // so this does not compile for a generic T.
        (self.sum_squared - self.sum.powi(2) / n) / n
    }

    fn standard_deviation(&self) -> T {
        // Likewise, there is no standard trait that provides sqrt.
        self.variance().sqrt()
    }
}
Having a trait for the square root would have been very helpful here. I know I can define the trait myself and wrap the types that already implement sqrt in it, like they did in this crate.
But I wonder why this is not defined in the standard library; it seems like it would be a useful thing to have. Is there a technical or historical reason why it's not?
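For illustration, this is roughly the kind of wrapper trait I mean; the Sqrt name and its shape are my own invention, not anything from the standard library:

pub trait Sqrt {
    fn sqrt(self) -> Self;
}

impl Sqrt for f32 {
    fn sqrt(self) -> Self {
        // Delegate to the inherent f32::sqrt method.
        f32::sqrt(self)
    }
}

impl Sqrt for f64 {
    fn sqrt(self) -> Self {
        // Delegate to the inherent f64::sqrt method.
        f64::sqrt(self)
    }
}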
I don't think that we really want to make a separate trait for every numeric method in the standard library - there are at least 40 of these methods on the floating-point types alone! Add, Sub, Mul, Div, etc. are a bit special, since those traits are how you override the +, -, *, and / operators.
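As a quick illustration of that special role, implementing std::ops::Add is what makes the + operator available on your own type (the Vec2 type here is just a made-up example):

use std::ops::Add;

#[derive(Clone, Copy, Debug, PartialEq)]
struct Vec2 {
    x: f64,
    y: f64,
}

// Implementing std::ops::Add is what lets `+` be used on Vec2 values.
impl Add for Vec2 {
    type Output = Vec2;

    fn add(self, rhs: Vec2) -> Vec2 {
        Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
    }
}

fn main() {
    let a = Vec2 { x: 1.0, y: 2.0 };
    let b = Vec2 { x: 3.0, y: 4.0 };
    assert_eq!(a + b, Vec2 { x: 4.0, y: 6.0 }); // `a + b` desugars to `a.add(b)`
}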
There's design work that would need to be done to figure out what a robust trait hierarchy for numeric types would look like, and that work would likely happen in a third-party crate before landing in the standard library.
Some methods can be grouped together, like the trigonometric functions. Some would indeed be alone, but that is not uncommon in the standard library.
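For example, a grouped trait for the trigonometric functions could look something like this sketch; the Trig name and its contents are made up:

pub trait Trig {
    fn sin(self) -> Self;
    fn cos(self) -> Self;
    fn tan(self) -> Self;
}

impl Trig for f64 {
    // Each method just forwards to the inherent f64 method.
    fn sin(self) -> Self { f64::sin(self) }
    fn cos(self) -> Self { f64::cos(self) }
    fn tan(self) -> Self { f64::tan(self) }
}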
However, by dividing it up into different traits you would gain a couple of things:
- They can be used as trait bounds, so people can implement the traits for their custom types and have them work with all the generic algorithms that use those bounds.
- By being in the standard library, they would keep everyone from writing their own incompatible wrapper traits.
- You could also have generic output types, so that the traits can be implemented for more types, like complex numbers and matrices for example (see the sketch below).
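Here is a rough sketch of what I mean by a generic output type; the Sqrt trait and the Complex type are made up for illustration:

// A hypothetical trait with an associated output type, so an implementation
// is free to return something other than Self.
pub trait Sqrt {
    type Output;
    fn sqrt(self) -> Self::Output;
}

impl Sqrt for f64 {
    type Output = f64;
    fn sqrt(self) -> f64 {
        f64::sqrt(self)
    }
}

// A made-up complex number type; with a generic output type, the square
// root of any complex value (including negative reals) can be expressed.
#[derive(Debug, Clone, Copy)]
pub struct Complex {
    re: f64,
    im: f64,
}

impl Sqrt for Complex {
    type Output = Complex;
    fn sqrt(self) -> Complex {
        // Principal square root computed via polar form.
        let r = (self.re * self.re + self.im * self.im).sqrt().sqrt();
        let theta = self.im.atan2(self.re) / 2.0;
        Complex { re: r * theta.cos(), im: r * theta.sin() }
    }
}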
I guess the num crate tries to do this to some extent.
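For example, num_traits::Float bundles sqrt, powi, and friends behind one trait, so the impl above can, as far as I can tell, be written roughly like this (I haven't checked it against the latest num-traits API):

use num_traits::Float;

pub trait Statistics<Output> {
    fn mean(&self) -> Output;
    fn variance(&self) -> Output;
    fn standard_deviation(&self) -> Output;
}

pub struct Signal<T> {
    samples: Vec<T>,
    sum: T,
    sum_squared: T,
}

impl<T: Float> Statistics<T> for Signal<T> {
    fn mean(&self) -> T {
        // NumCast (a supertrait of Float) converts the usize length into T.
        let n = T::from(self.samples.len()).unwrap();
        self.sum / n
    }

    fn variance(&self) -> T {
        let n = T::from(self.samples.len()).unwrap();
        (self.sum_squared - self.sum.powi(2) / n) / n
    }

    fn standard_deviation(&self) -> T {
        self.variance().sqrt()
    }
}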