I'm working with a combination of newtypes and generics to abstract over a couple of different but interoperable numeric types, either owned or referenced, all of which implement the common math ops. Apparently this is recommended practice for APIs (future-proofing), though I'm using it more for strict type assurances while remaining flexible over a sealed set of properly implemented inner types. Roughly:

```
use core::ops::Add;

// `Real` is the sealed trait over the interoperable inner types (elided here).
struct ArgA<A: Real>(pub A);
struct ArgB<B: Real>(pub B);
struct Gives<C: Real>(pub C);

fn do_math<A: Real, B: Real>(a: ArgA<A>, b: ArgB<B>) -> Gives<<A as Add<B>>::Output>
where
    A: Add<B>,
{
    Gives(a.0 + b.0)
}
```
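For what it's worth, with a throwaway `Real` stand-in for `f64`, the pattern type-checks, though the bound on `Gives` means the signature also needs `<A as Add<B>>::Output: Real` (my minimal sketch, not the real sealed trait):

```rust
use core::ops::Add;

// Throwaway stand-in for the sealed trait, just to show the pattern compiles.
trait Real {}
impl Real for f64 {}

struct ArgA<A: Real>(pub A);
struct ArgB<B: Real>(pub B);
struct Gives<C: Real>(pub C);

fn do_math<A: Real, B: Real>(a: ArgA<A>, b: ArgB<B>) -> Gives<<A as Add<B>>::Output>
where
    A: Add<B>,
    <A as Add<B>>::Output: Real, // needed because `Gives` is bounded
{
    Gives(a.0 + b.0)
}
```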

This is quite nice for math ops that have a concrete `Output` type, and it's easy enough to extend other ops to specify their `Output`:

```
trait PowI {
    type Output;
    fn pow_i(&self, f: i32) -> Self::Output;
}

impl PowI for &'_ f64 {
    type Output = f64;
    fn pow_i(&self, f: i32) -> Self::Output {
        self.powi(f)
    }
}
```
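The owned impl is the same shape (repeating the trait here so the snippet stands alone):

```rust
// Same shape for the owned type; `powi` is inherent on `f64`.
trait PowI {
    type Output;
    fn pow_i(&self, f: i32) -> Self::Output;
}

impl PowI for f64 {
    type Output = f64;
    fn pow_i(&self, f: i32) -> Self::Output {
        self.powi(f)
    }
}
```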

But, using this pattern with larger equations becomes quite cumbersome. While the following works (and nicely allows abstracting over floats / nalgebra / ndarray / whatnot), it is quite definitely a fustercluck of trait bounds:

```
use core::ops::Mul;

fn drag_force<Rho, U: PowI, CD, A>(
    rho: Density<Rho>,
    u: Velocity<U>,
    c_d: Coefficient<CD>,
    a: Area<A>,
) -> Force<<<<<f64 as Mul<Rho>>::Output as Mul<<U as PowI>::Output>>::Output as Mul<CD>>::Output as Mul<A>>::Output>
where
    f64: Mul<Rho>,
    <f64 as Mul<Rho>>::Output: Mul<<U as PowI>::Output>,
    <<f64 as Mul<Rho>>::Output as Mul<<U as PowI>::Output>>::Output: Mul<CD>,
    <<<f64 as Mul<Rho>>::Output as Mul<<U as PowI>::Output>>::Output as Mul<CD>>::Output: Mul<A>,
{
    Force(0.5 * rho.0 * u.0.pow_i(2) * c_d.0 * a.0)
}
```

Frankly, I'm almost willing to use this as-is. It's either easy enough for small equations or the compiler gives the fully qualified types and trait requirements when things start getting too ridiculous. But, I feel like there should be a better way.

Multiplication is associative (and commutative), but the compiler desugars the calls in the order given. So while `a * b * c` should be free to evaluate as either `a * (b * c)` or `(a * b) * c`, with both giving (approximately) the same output, there is no freedom to compose the arguments in an arbitrary order. It also doesn't let the compiler optimize their composition, but I'm pretty certain that wouldn't matter much.

But that consideration brings me to a second form: using a typed builder pattern so that every operation is fully separated, requiring only the generic bounds pertinent to a single op. While it's a tiny bit more readable, it's even more ridiculous to type everything out: [playground]
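An illustrative sketch of what I mean by the builder form (my own minimal version, not the playground code): each `times` call carries only the single `Mul` bound relevant to that step.

```rust
use core::ops::Mul;

// Each step needs only its own `Mul` bound; the bounds never stack up
// into the fully qualified chains of the single-function version.
struct Term<T>(pub T);

impl<T> Term<T> {
    fn times<R>(self, rhs: R) -> Term<T::Output>
    where
        T: Mul<R>,
    {
        Term(self.0 * rhs)
    }
}

// 0.5 * rho * u^2 * c_d * a, built one bound at a time.
fn drag_force(rho: f64, u: f64, c_d: f64, a: f64) -> f64 {
    Term(0.5).times(rho).times(u.powi(2)).times(c_d).times(a).0
}
```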

And whenever things become cumbersome and repetitive I think macros, which brings me to a third alternative:

```
macro_rules! drag_force {
    ($rho:ident, $u:ident, $c_d:ident, $a:ident) => {{
        let (Density(rho), Velocity(u), Coefficient(c_d), Area(a)) = ($rho, $u, $c_d, $a);
        Force(0.5 * rho * u.pow_i(2) * c_d * a)
    }};
}
```

It ends up relatively clean, but it loses some of the benefits of function typing, both for selectable floating-point constants and for having a named output type.
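For concreteness, here's the macro at a call site, self-contained with throwaway newtypes; note that nothing in the signature names the output type:

```rust
// Hypothetical newtypes matching the macro's destructuring.
struct Density(f64);
struct Velocity(f64);
struct Coefficient(f64);
struct Area(f64);
struct Force(f64);

trait PowI {
    type Output;
    fn pow_i(&self, f: i32) -> Self::Output;
}
impl PowI for f64 {
    type Output = f64;
    fn pow_i(&self, f: i32) -> Self::Output {
        self.powi(f)
    }
}

macro_rules! drag_force {
    ($rho:ident, $u:ident, $c_d:ident, $a:ident) => {{
        let (Density(rho), Velocity(u), Coefficient(c_d), Area(a)) = ($rho, $u, $c_d, $a);
        Force(0.5 * rho * u.pow_i(2) * c_d * a)
    }};
}

// The call site reads like the function version, but the output type is
// whatever the expression infers to.
fn demo() -> Force {
    let (rho, u, c_d, a) = (Density(1.225), Velocity(10.0), Coefficient(0.47), Area(2.0));
    drag_force!(rho, u, c_d, a)
}
```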

Are there any other options? I think what this all comes down to is whether it would be possible to make a closed set of types (all interoperable, with consistent output types) implement a common trait `Real`, such that you could minimize the bounds:

```
fn some_math<A: Real, B: Real>(a: ArgA<A>, b: ArgB<B>) -> Gives<impl Real> {
    Gives(0.5 * (a.0 + b.0))
}
```
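For the homogeneous case at least, this shape does compile today: make every op in the supertrait round-trip to `Self`, and all the bounds are implied. This is my sketch, not a full `Real`; the `* 0.5` is flipped to `Mul<f64>` because `f64: Mul<Self>` can't be expressed as an implied supertrait bound, and heterogeneous `A`/`B` arguments would still need explicit `Add<B>` machinery. (num-traits' `Float` takes a broadly similar shape.)

```rust
use core::ops::{Add, Mul};

// Sketch only: a closed `Real` where every operation yields `Self` again.
// This rules out the mixed owned/reference combinations with differing
// `Output` types, which is the hard part of the original question.
trait Real:
    Sized
    + Add<Self, Output = Self>
    + Mul<Self, Output = Self>
    + Mul<f64, Output = Self>
{
}

impl Real for f64 {}

struct ArgA<A: Real>(pub A);
struct ArgB<B: Real>(pub B);
struct Gives<C: Real>(pub C);

// All bounds are implied by the supertrait, so the signature stays clean.
fn some_math<T: Real>(a: ArgA<T>, b: ArgB<T>) -> Gives<T> {
    Gives((a.0 + b.0) * 0.5)
}
```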

Any thoughts would be appreciated. I'm sure there's something I'm missing; Rust usually surprises me with its elegance when I find the "right" way to do something.