Type "not constrained" issue -> workaround?


When I implement this for my struct `Tense`, I get error E0207: the type parameter `T` is not constrained by the impl trait, self type, or predicates.

impl<T, U, V, W> algebra::group_like::additive::Add<V> for U
where
	T: algebra::group_like::additive::Add<W, Output = T>,
	U: AsRef<Tense<T>>,
	V: Deref<Target = W>,
{
	type Output = Tense<T>;

	fn add(self, rhs: dyn AsRef<V>) -> Tense<T> {
		Tense::<T> {
			val: self
				.map(|(a, b)| a + b)
				// ...

I saw some issues about this problem (here and here), but they didn't help.

Is there a way to fix this code? Thanks.

Looking at the error description, a type parameter in a trait implementation must be either part of the trait, part of the implementing type, or determined through an associated type in a predicate. `T` is none of those: the only bound mentioning it (`T: Add<W, Output = T>`) is self-referential, and more than one type could have `W` added to it to produce itself. The bound on `U`, and hence the associated type `Output`, is therefore ambiguous, and the compiler has no way to figure out which type references to `Output` refer to.
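To make that rule concrete, here is a minimal reduction (hypothetical `Wrapper`/`Describe` types, not from the original code). A `T` that appears only as an input parameter of a bound like `U: AsRef<[T]>` is unconstrained and triggers E0207, because many `T`s could satisfy that bound for one `U`; putting `T` in the self type fixes it, since the self type then pins `T` down:

```rust
use std::fmt::Debug;

// This shape fails with error[E0207], for the same reason as the
// impl in the question: `T` only appears inside a bound on `U`.
//
//     impl<T, U: AsRef<[T]>> Describe for U { ... }
//
// The fix below makes `T` part of the implementing type, so the
// compiler can always infer it from the self type.
struct Wrapper<T>(Vec<T>);

trait Describe {
    fn describe(&self) -> String;
}

impl<T: Debug> Describe for Wrapper<T> {
    fn describe(&self) -> String {
        format!("{:?}", self.0)
    }
}

fn main() {
    let w = Wrapper(vec![1, 2, 3]);
    println!("{}", w.describe()); // prints "[1, 2, 3]"
}
```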

It's hard to say how to fix it without knowing exactly what you're trying to do.

#[derive(Clone, Debug, PartialEq)]
pub struct Tense<T> {
    pub val: Vec<T>,
}

Tense<f32> represents a vector of f32, Tense<Tense<f32>> represents a matrix of f32, etc. I'm trying to do a.add(b), where a is either Tense or &Tense, and b is any type which can be added to T, or an array of items, each of which can be added to T. So I can do:

a.clone() + b.clone();
a.clone() + &b;
&a + b.clone();
&a + &b;
&a + vec![1, 2, 3];
&a + vec![vec![1, 2, 3]];
&a + 123;

The parameter is "unconstrained" because nothing about the trait impl specifies it, so (among other issues) there's no way to tell which add method to use. Not even giving its return type works, because any impl could potentially return Tense<T>.

I suspect that the fundamental problem is that you're trying to write a "blanket impl" for any U. In your explanation, you write that b should be "any type which can be added to T, or an array of items which can be added each one to T".

You've actually written something a lot more generic than that, which is an impl that provides addition for any type that could ever be written which happens to implement AsRef<Tense<T>>.

I'd try writing the impl for &Tense<T> instead of for U: AsRef<Tense<T>>, and removing the U parameter. Then, if performance or your use case requires it, also provide a by-value impl for Tense<T> (which will look very similar, but can use into_iter to move the contents of the tensor).
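As a sketch of that suggestion, using the standard `std::ops::Add` as a stand-in for `algebra::group_like::additive::Add` (whose exact definition isn't shown in this thread): one impl for `&Tense<T>` that borrows both sides, and one for `Tense<T>` that consumes them. Neither impl needs an unconstrained parameter, because `T` appears in the self type:

```rust
use std::ops::Add;

#[derive(Clone, Debug, PartialEq)]
pub struct Tense<T> {
    pub val: Vec<T>,
}

// By-reference impl: borrows both operands and collects a new Tense.
impl<'a, 'b, T> Add<&'b Tense<T>> for &'a Tense<T>
where
    &'a T: Add<&'b T, Output = T>,
{
    type Output = Tense<T>;

    fn add(self, rhs: &'b Tense<T>) -> Tense<T> {
        Tense {
            val: self
                .val
                .iter()
                .zip(rhs.val.iter())
                .map(|(a, b)| a + b)
                .collect(),
        }
    }
}

// By-value impl: consumes both operands, moving elements with into_iter.
impl<T: Add<Output = T>> Add for Tense<T> {
    type Output = Tense<T>;

    fn add(self, rhs: Tense<T>) -> Tense<T> {
        Tense {
            val: self
                .val
                .into_iter()
                .zip(rhs.val)
                .map(|(a, b)| a + b)
                .collect(),
        }
    }
}

fn main() {
    let a = Tense { val: vec![1, 2, 3] };
    let b = Tense { val: vec![10, 20, 30] };
    assert_eq!((&a + &b).val, vec![11, 22, 33]); // by reference
    assert_eq!((a + b).val, vec![11, 22, 33]); // by value
}
```

Adding the remaining cases from your list (`&Tense + b.clone()`, `Tense + &b`, scalars, plain `Vec`s) would mean more impls of the same shape, one per operand combination, which is the usual pattern in Rust rather than a single blanket impl.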

Let me know if that didn't make any sense, I'm having a fuzzy brain day.


I tried this:

impl<'a, T, U, V> algebra::group_like::additive::Add<U> for &'a Tense<T>
where
    &'a T: algebra::group_like::additive::Add<V, Output = T>,
    U: Deref<Target = V>,


no method named `iter` found for type `U` in the current scope

I can't find any trait providing a method iter. What I don't understand is that `Tense::iter` exists (I think because `Tense` implements `Deref`), but `U::iter` doesn't, even though `U` implements `Deref` too.

There is no problem with iter if I impl Add<[U]>, but that doesn't seem like a good approach, does it?
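The difference can be reproduced outside of `Tense` (a hedged sketch, not from the thread): with `U: Deref<Target = V>` and `V` fully generic, auto-deref has nowhere useful to go, because an unconstrained `V` has no known methods at all, `iter` included. When the `Deref` target is a type that genuinely has an inherent `iter`, such as a slice, method resolution through the bound works fine, which is presumably also why the `Add<[U]>` version compiles:

```rust
use std::ops::Deref;

// With `U: Deref<Target = V>` for a fully generic `V`, `u.iter()` fails:
// the compiler knows nothing about `V`, so there is no `iter` to find.
// Naming a target with a real `iter` method makes it resolve:
fn sum_via_deref<U>(u: U) -> i32
where
    U: Deref<Target = [i32]>,
{
    // Auto-deref steps through the bound to `[i32]`, which has `iter`.
    u.iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];
    // Both `&[i32]` and `Vec<i32>` deref to `[i32]`.
    assert_eq!(sum_via_deref(&v[..]), 6);
    assert_eq!(sum_via_deref(v), 6);
}
```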