How to define a trait that includes all operator overloading combinations for references and values?

How do we specify a trait that includes all operator overloading combinations for references and values? Essentially, I have a situation where I'd like to define an algebra that includes operations like Add, Mul, etc. for generic types. Depending on the situation, this may mean adding a reference to a reference, a value to a reference, a reference to a value, or a value to a value, so there are four cases to cover. As an example, we have the code:

// Create some random type that we want to represent as a Real
#[derive(Debug,Clone)]
struct Foo <Real> {
    x : Real,
    y : Real,
}

// Add the algebra for Foo
impl <Real> std::ops::Add <&'_ Foo<Real>> for &'_ Foo <Real>
where
    for <'a> &'a Real : std::ops::Add<&'a Real,Output=Real>
{
    type Output = Foo <Real>;
    fn add(self, other : &'_ Foo <Real>) -> Self::Output {
        Foo {
            x : &self.x + &other.x,
            y : &self.y + &other.y,
        }
    }
}
impl <Real> std::ops::Add <Foo<Real>> for &'_ Foo <Real>
where
    for <'a> &'a Real : std::ops::Add<Real,Output=Real>
{
    type Output = Foo <Real>;
    fn add(self, other : Foo <Real>) -> Self::Output {
        Foo {
            x : &self.x + other.x,
            y : &self.y + other.y,
        }
    }
}
impl <Real> std::ops::Add <&'_ Foo<Real>> for Foo <Real>
where
    for <'a> Real : std::ops::Add<&'a Real,Output=Real>
{
    type Output = Foo <Real>;
    fn add(self, other : &'_ Foo <Real>) -> Self::Output {
        Foo {
            x : self.x + &other.x,
            y : self.y + &other.y,
        }
    }
}
impl <Real> std::ops::Add <Foo<Real>> for Foo <Real>
where
    Real : std::ops::Add<Real,Output=Real>
{
    type Output = Foo <Real>;
    fn add(self, other : Foo <Real>) -> Self::Output {
        Foo {
            x : self.x + other.x,
            y : self.y + other.y,
        }
    }
}

// Compute a function on a slice of Reals
fn foo <Real> (x : &[Real]) -> Real
where
    for <'a> &'a Real :
        std::ops::Add<&'a Real,Output=Real> +
        std::ops::Add<Real,Output=Real> +
        Clone,
    for <'a> Real :
        std::ops::Add<&'a Real,Output=Real> +
        std::ops::Add<Real,Output=Real> +
        Clone,
    Real : Clone
{
    &x[0]+&x[1]+&x[2]
}

// Run foo on two different types
fn main() {
    let x = vec![1.2,2.3,3.4];
    let _x = foo::<f64>(&x);
    println!("{:?}",_x);
    let y : Vec <Foo<f64>>= x.into_iter().map(|z|Foo{x:z,y:z+1.0}).collect();
    let _y = foo::<Foo<f64>>(&y);
    println!("{:?}",_y);
}

Here, the function foo can operate generically on Foo<f64>, f64, and other types. That said, the trait bound is already getting large and we've only implemented Add. Further, foo requires a trait bound on both Real and for <'a> &'a Real, i.e., on the type and on its reference. I'd like to encapsulate this into a single trait if possible. If that's not possible, I'd still settle for one trait for the type and one for its reference.

That said, I'm not entirely sure how to define this trait. I'd like something like:

trait MyFloat :        
    std::ops::Add <REFSELF,Output = NOREFSELF> +
    std::ops::Add <NOREFSELF,Output = NOREFSELF>
where
    Self : std::marker::Sized,
{}                                                  
impl <T> MyFloat for T where T:         
    std::ops::Add <REFSELF,Output = NOREFSELF> +
    std::ops::Add <NOREFSELF,Output = NOREFSELF>          
{} 

However, I don't know how to implement it properly. The problem is that Self may be a reference type, &T, or a value type, T. I always need the output type to be the non-reference version, and I need the different reference/value combinations on the right-hand side. However, I don't know of any operation on types that strips the unwanted &. Or, candidly, whether this is the best approach at all.

As a final comment, I don't believe the num crate solves this issue. Though num provides traits like Float, it appears to me that they rely on the Copy trait rather than grinding through all combinations of reference vs. value. Though, certainly, I may be mistaken.
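To illustrate what I mean: with a Copy bound, a generic function only ever needs the value-to-value impl, because the caller can copy elements out of any reference. This is just an illustrative sketch, not num's actual code:

```rust
use std::ops::Add;

// With Copy, indexing a slice copies the element out, so only the
// T + T impl is ever needed; no reference impls are involved.
fn sum3<T>(x: &[T]) -> T
where
    T: Copy + Add<T, Output = T>,
{
    x[0] + x[1] + x[2]
}

fn main() {
    println!("{}", sum3(&[1.2, 2.3, 3.4]));
}
```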

Thanks for the help!

Currently, blanket impls can be difficult to do correctly: if you have a blanket impl for T, you can't also have one for &T, since &T is also a type and would then match two impls.

Typically we solve this using macros to manually impl the trait for all the types you want.
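As a sketch of that macro approach for a concrete two-field struct (the impl_add_variants! macro here is hypothetical, and the fields are assumed to be Copy so one body serves all four impls):

```rust
use std::ops::Add;

#[derive(Debug, Clone, PartialEq)]
struct Pair {
    x: f64,
    y: f64,
}

// Generate the four value/reference Add impls for a struct whose
// fields are Copy and themselves implement Add.
macro_rules! impl_add_variants {
    ($t:ident, $($field:ident),+) => {
        impl Add<$t> for $t {
            type Output = $t;
            fn add(self, other: $t) -> $t {
                $t { $($field: self.$field + other.$field),+ }
            }
        }
        impl<'a> Add<&'a $t> for $t {
            type Output = $t;
            fn add(self, other: &'a $t) -> $t {
                $t { $($field: self.$field + other.$field),+ }
            }
        }
        impl<'a> Add<$t> for &'a $t {
            type Output = $t;
            fn add(self, other: $t) -> $t {
                $t { $($field: self.$field + other.$field),+ }
            }
        }
        impl<'a, 'b> Add<&'b $t> for &'a $t {
            type Output = $t;
            fn add(self, other: &'b $t) -> $t {
                $t { $($field: self.$field + other.$field),+ }
            }
        }
    };
}

impl_add_variants!(Pair, x, y);

fn main() {
    let a = Pair { x: 1.0, y: 2.0 };
    let b = Pair { x: 3.0, y: 4.0 };
    println!("{:?}", &a + &b); // reference + reference
    println!("{:?}", a + &b);  // value + reference
}
```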

Hi, alice. I don't mind using macros to implement all of the different combinations for Add or the other operations. However, from a user's point of view, it's not clear to me how to provide a single trait that ensures all of these combinations are present within the routine. Basically, I'd like to clean up the trait bound on Real in the function foo instead of listing all of the operations. Is this possible without a macro?

You can do it like this:

use std::ops::Add;

trait Real:
    Add<Self, Output=Self> +
    for<'a> Add<&'a Self, Output=Self>
where
    for<'a> &'a Self: Add<Self, Output=Self>,
    for<'a, 'b> &'a Self: Add<&'b Self, Output=Self>,
    Self: Sized,
{ }

Of course the two traits in the extends clause could also be listed as where clauses.
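For concreteness, here's the same trait with everything moved into the where clause, plus an impl for f64 as a compile check:

```rust
use std::ops::Add;

// Same constraints as above, all expressed as where clauses.
trait Real
where
    Self: Sized,
    Self: Add<Self, Output = Self>,
    Self: for<'a> Add<&'a Self, Output = Self>,
    for<'a> &'a Self: Add<Self, Output = Self>,
    for<'a, 'b> &'a Self: Add<&'b Self, Output = Self>,
{
}

// f64 has all four owned/borrowed Add impls, so this impl type-checks.
impl Real for f64 {}

fn main() {
    println!("{}", 1.0f64 + &2.0 + 3.0);
}
```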

OK, cool. I think it's something like that. However, there's a recursion limit problem. To the above code, I added:

trait MyFloat:
    std::ops::Add<Self, Output=Self> +
    for<'a> std::ops::Add<&'a Self, Output=Self>
where
    for<'a> &'a Self: std::ops::Add<Self, Output=Self>,
    for<'a, 'b> &'a Self: std::ops::Add<&'b Self, Output=Self>,
    Self: Sized,
{}

And modified the function foo to

fn foo <Real> (x : &[Real]) -> Real
where
    Real : MyFloat,
    for <'a> &'a Real : MyFloat
{
    &x[0]+&x[1]+&x[2]
}

Without the bound on for <'a> &'a Real, the compiler complained. In either case, the current compiler error is a trait-resolution overflow:

error[E0275]: overflow evaluating the requirement `&'a Foo<_>: std::ops::Add`
  |
  = help: consider adding a `#![recursion_limit="128"]` attribute to your crate
  = note: required because of the requirements on the impl of `std::ops::Add` for `&'a Foo<Foo<_>>`
  = note: required because of the requirements on the impl of `std::ops::Add` for `&'a Foo<Foo<Foo<_>>>`
  = note: required because of the requirements on the impl of `std::ops::Add` for `&'a Foo<Foo<Foo<Foo<_>>>>`
  = note: required because of the requirements on the impl of `std::ops::Add` for `&'a Foo<Foo<Foo<Foo<Foo<_>>>>>`
  = note: required because of the requirements on the impl of `std::ops::Add` for `&'a Foo<Foo<Foo<Foo<Foo<Foo<_>>>>>>`
  = note: required because of the requirements on the impl of `std::ops::Add` for `&'a Foo<Foo<Foo<Foo<Foo<Foo<Foo<_>>>>>>>`

What's the missing detail?

The constraints I made are intended to be used with only a Real: MyFloat constraint. What error are you getting?

The error is probably that the extra bounds on &Self aren't usable by functions that only require T: MyFloat.

The code:

// External libraries
use std::ops::Add;

// Create an aggregate floating point type
trait MyFloat:
    Add<Self, Output=Self> +
    for<'a> Add<&'a Self, Output=Self>
where
    for<'a> &'a Self: Add<Self, Output=Self>,
    for<'a, 'b> &'a Self: Add<&'b Self, Output=Self>,
    Self: Sized,
{}
impl <T> MyFloat for T where T:
    Add<Self, Output=Self> +
    for<'a> Add<&'a Self, Output=Self>
{}

// Create some random type that we want to represent as a Real
#[derive(Debug,Clone)]
struct Foo <Real> {
    x : Real,
    y : Real,
}

// Add the algebra for Foo
impl <Real> Add <&'_ Foo<Real>> for &'_ Foo <Real>
where
    for <'a> &'a Real : Add<&'a Real,Output=Real>
{
    type Output = Foo <Real>;
    fn add(self, other : &'_ Foo <Real>) -> Self::Output {
        Foo {
            x : &self.x + &other.x,
            y : &self.y + &other.y,
        }
    }
}
impl <Real> Add <Foo<Real>> for &'_ Foo <Real>
where
    for <'a> &'a Real : Add<Real,Output=Real>
{
    type Output = Foo <Real>;
    fn add(self, other : Foo <Real>) -> Self::Output {
        Foo {
            x : &self.x + other.x,
            y : &self.y + other.y,
        }
    }
}
impl <Real> Add <&'_ Foo<Real>> for Foo <Real>
where
    for <'a> Real : Add<&'a Real,Output=Real>
{
    type Output = Foo <Real>;
    fn add(self, other : &'_ Foo <Real>) -> Self::Output {
        Foo {
            x : self.x + &other.x,
            y : self.y + &other.y,
        }
    }
}
impl <Real> Add <Foo<Real>> for Foo <Real>
where
    Real : Add<Real,Output=Real>
{
    type Output = Foo <Real>;
    fn add(self, other : Foo <Real>) -> Self::Output {
        Foo {
            x : self.x + other.x,
            y : self.y + other.y,
        }
    }
}

// Compute a function on a slice of Reals
fn foo <Real> (x : &[Real]) -> Real
where
    Real : MyFloat
{
    &x[0]+&x[1]+&x[2]
}

// Run foo on two different types
fn main() {
    let x = vec![1.2,2.3,3.4];
    let _x = foo::<f64>(&x);
    println!("{:?}",_x);
    let y : Vec <Foo<f64>>= x.into_iter().map(|z|Foo{x:z,y:z+1.0}).collect();
    let _y = foo::<Foo<f64>>(&y);
    println!("{:?}",_y);
}

Produces the error:

error[E0275]: overflow evaluating the requirement `&'a Foo<_>: std::ops::Add`
  |
  = help: consider adding a `#![recursion_limit="128"]` attribute to your crate
  = note: required because of the requirements on the impl of `std::ops::Add` for `&'a Foo<Foo<_>>`
  = note: required because of the requirements on the impl of `std::ops::Add` for `&'a Foo<Foo<Foo<_>>>`
  = note: required because of the requirements on the impl of `std::ops::Add` for `&'a Foo<Foo<Foo<Foo<_>>>>`

...


error[E0277]: cannot add `Real` to `&'a Real`
  --> src/test09.rs:76:1
   |
76 | / fn foo <Real> (x : &[Real]) -> Real
77 | | where
78 | |     Real : MyFloat
79 | | {
80 | |     &x[0]+&x[1]+&x[2]
81 | | }
   | |_^ no implementation for `&'a Real + Real`
   |
   = help: the trait `std::ops::Add<Real>` is not implemented for `&'a Real`
   = help: consider adding a `where &'a Real: std::ops::Add<Real>` bound
note: required by `MyFloat`
  --> src/test09.rs:5:1
   |
5  | / trait MyFloat:
6  | |     Add<Self, Output=Self> +
7  | |     for<'a> Add<&'a Self, Output=Self>
8  | | where
...  |
11 | |     Self: Sized,
12 | | {}
   | |__^

error: aborting due to 2 previous errors

As such, it still looks like there needs to be a bound on &Real in foo. That said, even when it's present, it doesn't fix the recursion issue.

By the way, I can solve the issue with an ugly set of traits:

// Create an aggregate floating point type
trait MyFloat:
    Add<Self, Output=Self> +
    for<'a> Add<&'a Self, Output=Self>
where
    Self : Sized
{}
impl <T> MyFloat for T where T:
    Add<Self, Output=Self> +
    for<'a> Add<&'a Self, Output=Self>
{}
trait MyFloatRef <NonRef>:
    Add<NonRef, Output=NonRef> +
    for<'a> Add<&'a NonRef, Output=NonRef>
where
    Self : Sized
{}
impl <T,NonRef> MyFloatRef <NonRef> for T where T:
    Add<NonRef, Output=NonRef> +
    for<'a> Add<&'a NonRef, Output=NonRef>
{}

Along with the definition for foo with

fn foo <Real> (x : &[Real]) -> Real
where
    Real : MyFloat,
    for <'a> &'a Real: MyFloatRef <Real>
{
    &x[0]+&x[1]+&x[2]
}
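Putting those pieces together, a minimal end-to-end check (f64 only, Add only) compiles and runs:

```rust
use std::ops::Add;

// One trait for the value type...
trait MyFloat:
    Sized +
    Add<Self, Output = Self> +
    for<'a> Add<&'a Self, Output = Self>
{}
impl<T> MyFloat for T where T:
    Sized +
    Add<T, Output = T> +
    for<'a> Add<&'a T, Output = T>
{}

// ...and one for the reference type, with NonRef naming the value type.
trait MyFloatRef<NonRef>:
    Add<NonRef, Output = NonRef> +
    for<'a> Add<&'a NonRef, Output = NonRef>
{}
impl<T, NonRef> MyFloatRef<NonRef> for T where T:
    Add<NonRef, Output = NonRef> +
    for<'a> Add<&'a NonRef, Output = NonRef>
{}

fn foo<Real>(x: &[Real]) -> Real
where
    Real: MyFloat,
    for<'a> &'a Real: MyFloatRef<Real>,
{
    &x[0] + &x[1] + &x[2]
}

fn main() {
    println!("{}", foo(&[1.25f64, 2.25, 3.5]));
}
```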

What I don't like about this solution is separate traits for MyFloat and MyFloatRef. Further, we need a type parameter for MyFloatRef to get the non-reference form of the type. Is there a way to clean up these definitions?

I think that's the best you can do, given the issue I mentioned before. I ran into this problem trying to generalize reference operations in num-traits too. You might like NumRef and RefNum though.
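Those two traits have roughly the following shape: a NumOps<Rhs, Output> bundle over the arithmetic operators, NumRef for the value side, and RefNum<Base> for the reference side. The stand-ins below are simplified, not the crate's exact definitions (the real NumRef also requires Num):

```rust
use std::ops::{Add, Div, Mul, Rem, Sub};

// Bundle of the five arithmetic operators.
trait NumOps<Rhs = Self, Output = Self>:
    Add<Rhs, Output = Output>
    + Sub<Rhs, Output = Output>
    + Mul<Rhs, Output = Output>
    + Div<Rhs, Output = Output>
    + Rem<Rhs, Output = Output>
{
}
impl<T, Rhs, Output> NumOps<Rhs, Output> for T where
    T: Add<Rhs, Output = Output>
        + Sub<Rhs, Output = Output>
        + Mul<Rhs, Output = Output>
        + Div<Rhs, Output = Output>
        + Rem<Rhs, Output = Output>
{
}

// Value side: T op T and T op &T.
trait NumRef: Sized + NumOps + for<'r> NumOps<&'r Self> {}
impl<T> NumRef for T where T: Sized + NumOps + for<'r> NumOps<&'r T> {}

// Reference side: &T op T and &T op &T, with Base naming the value type.
trait RefNum<Base>: NumOps<Base, Base> + for<'r> NumOps<&'r Base, Base> {}
impl<T, Base> RefNum<Base> for T where
    T: NumOps<Base, Base> + for<'r> NumOps<&'r Base, Base>
{
}

// Two bounds now cover all four reference/value combinations.
fn dot<T>(a: &[T], b: &[T]) -> T
where
    T: NumRef,
    for<'a> &'a T: RefNum<T>,
{
    let mut s = &a[0] * &b[0];
    for i in 1..a.len() {
        s = s + &a[i] * &b[i];
    }
    s
}

fn main() {
    println!("{}", dot(&[1.0f64, 2.0], &[3.0, 4.0]));
}
```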

OK, cool. To clarify, then, there's no way to remove the NonRef type parameter from the trait:

trait MyFloat <NonRef>:
    Add<NonRef, Output=NonRef> +
    for<'a> Add<&'a NonRef, Output=NonRef>
where
    Self : Sized
{}
impl <T,NonRef> MyFloat <NonRef> for T where T:
    Add<NonRef, Output=NonRef> +
    for<'a> Add<&'a NonRef, Output=NonRef>
{}

if we want to apply this trait to both the value and reference types? Does RefNum get around this?

I couldn't find a way around that -- RefNum uses a Base type parameter. If you like, you can read a little bit of discussion in num issue 94 and pull request 283.