Why does the standard library not define traits for some of the mathematical operations available on `f64`, `f32`, `i32`, …?

I was trying to make a simple generic statistics trait for learning purposes:

```
use std::ops::{Div, Mul, Sub};

pub trait Statistics<Output> {
    fn mean(&self) -> Output;
    fn variance(&self) -> Output;
    fn standard_deviation(&self) -> Output;
}

pub struct Signal<T> {
    samples: Vec<T>,
    sum: T,
    sum_squared: T,
}

impl<T> Statistics<T> for Signal<T>
where
    T: Copy + Sub<Output = T> + Mul<Output = T> + Div<i32, Output = T>,
{
    fn mean(&self) -> T {
        self.sum / self.samples.len() as i32
    }

    fn variance(&self) -> T {
        let n = self.samples.len() as i32;
        (self.sum_squared - self.sum * self.sum / n) / n
    }

    fn standard_deviation(&self) -> T {
        self.variance().sqrt() // error: no `sqrt` method on a generic `T`
    }
}
```

And having a trait for the square root would have been very helpful here. I know I can define the trait myself and wrap the types that already implement `sqrt` in it, like they did in this crate.
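For context, a minimal version of that wrapper approach might look like the sketch below. The `Sqrt` trait name and the free `standard_deviation` function are my own illustration, not taken from any particular crate (crates like `num-traits` expose something similar through their `Float` trait):

```rust
/// A hand-rolled square-root trait (hypothetical name), implemented by
/// delegating to the inherent `sqrt` methods on the float types.
pub trait Sqrt {
    fn sqrt(self) -> Self;
}

impl Sqrt for f32 {
    fn sqrt(self) -> Self {
        f32::sqrt(self)
    }
}

impl Sqrt for f64 {
    fn sqrt(self) -> Self {
        f64::sqrt(self)
    }
}

/// Generic code can now require `T: Sqrt` as a bound.
fn standard_deviation<T: Sqrt>(variance: T) -> T {
    variance.sqrt()
}

fn main() {
    println!("{}", standard_deviation(9.0_f64)); // prints 3
}
```

The downside is that every crate that needs this ends up defining (or depending on) its own copy of the trait, which is exactly why a standard-library version sounds appealing.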

But I wonder why this is not defined in the standard library; it seems like it would be a useful thing to have. Is there a technical or historical reason why it's not?