What is the difference between implementing a trait for a type's reference:
```rust
impl<'a> BitOr for &'a Json {
    type Output = Json;
    fn bitor(self, rhs: &Json) -> Json {
        // ...
    }
}
```
and implementing it for the type itself?
```rust
impl BitOr for Json {
    type Output = Json;
    fn bitor(&self, rhs: &Json) -> Json {
        // ...
    }
}
```
Thanks,
`BitOr::bitor` is defined to take `self` by value, so you can't write your second version at all; the implementation signature must match the trait. So if you want to enable `BitOr` on a `&Json`, your first way is the only option.
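For reference, the trait in `std::ops` is declared roughly as shown in the comment below. Here is a minimal sketch of the reference impl, using a stand-in `Json` newtype since the real type isn't shown in the thread; the string-concatenation body is purely illustrative:

```rust
use std::ops::BitOr;

// Stand-in for the Json type in the question; purely illustrative.
#[derive(Debug)]
pub struct Json(pub String);

// The trait in std::ops is declared (roughly) as:
//
//     pub trait BitOr<Rhs = Self> {
//         type Output;
//         fn bitor(self, rhs: Rhs) -> Self::Output;
//     }
//
// Note that `bitor` takes `self` by value. In `impl BitOr for &Json`,
// `Self` is `&Json`, so "by value" just means copying the reference.
impl<'a> BitOr for &'a Json {
    type Output = Json;

    fn bitor(self, rhs: &'a Json) -> Json {
        // Illustrative body: "merge" by concatenating the inner strings.
        Json(format!("{}|{}", self.0, rhs.0))
    }
}
```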
If you control the trait too, then you have a design decision: whether to take `self` or `&self`. This depends in part on whether you think it ever makes sense for the method to consume `self` by value; if it does, the trait should be defined that way. If the method never needs ownership of the value, then `&self` is fine.
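As a sketch of that choice, here is a hypothetical trait (the name `Merge` and its semantics are invented for illustration, not from this thread) that takes `&self`, so implementors and callers never give up ownership:

```rust
// Hypothetical trait under our own control (unlike std::ops::BitOr,
// we are free to choose the receiver type).
trait Merge {
    // `&self`: the operation never needs to consume the value.
    fn merge(&self, other: &Self) -> Self;
}

#[derive(Debug)]
struct Json(String);

impl Merge for Json {
    fn merge(&self, other: &Self) -> Self {
        // Illustrative body only.
        Json(format!("{}+{}", self.0, other.0))
    }
}

fn main() {
    let a = Json("a".into());
    let b = Json("b".into());
    let c = a.merge(&b); // `a` and `b` are still usable afterwards
    println!("{:?} {:?} {:?}", a, b, c);
}
```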
Note also that method calls usually auto-ref/deref, so a call on a value can automatically turn into a call on its reference. With operator traits like `BitOr`, though, you don't get this behavior when the trait is used through the `|` operator. That is, using the values directly, `json_a | json_b` won't work if the trait is only implemented for references.
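A quick way to see the difference, continuing the stand-in `Json` sketch from above (variable names are mine, not from the thread):

```rust
use std::ops::BitOr;

#[derive(Debug)]
struct Json(String);

// Only the reference impl exists, the same shape as in the question.
impl<'a> BitOr for &'a Json {
    type Output = Json;
    fn bitor(self, rhs: &'a Json) -> Json {
        Json(format!("{}|{}", self.0, rhs.0))
    }
}

fn main() {
    let a = Json("a".into());
    let b = Json("b".into());

    // Method syntax auto-refs: `a` is borrowed automatically.
    let c = a.bitor(&b);

    // Operator syntax does not auto-ref, so the references must be explicit.
    let d = &a | &b;

    // This would NOT compile, because `Json: BitOr<Json>` is not implemented:
    // let e = a | b;

    println!("{:?} {:?}", c, d);
}
```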
Thanks for the really good description.
I did an inventory of all the operations that are defined as traits:

- The arithmetic and bitwise traits take their operands by value (`self`): `Neg`, `Not`, `Mul`, `Div`, `Rem`, `Add`, `Sub`, `Shr`, `Shl`, `BitAnd`, `BitXor`, `BitOr`.
- The assign variants of the arithmetic and bitwise operations take a mutable reference (`&mut self`).
- All comparison operators (`<`, `<=`, `>`, `>=`, `==`, `!=`) take their operands by shared borrow.
- `Deref` and `DerefMut` use a shared borrow and a mutable borrow respectively.
- `Index` and `IndexMut` likewise use a shared borrow and a mutable borrow respectively.
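To make those calling conventions concrete, here is a small sketch with a made-up `Bits` newtype (not from this thread) implementing one trait from each group; the bodies are just placeholders:

```rust
use std::ops::{Add, AddAssign, Index};

#[derive(Debug, PartialEq, PartialOrd)]
struct Bits(Vec<u8>);

// Arithmetic/bitwise traits: `self` by value.
impl Add for Bits {
    type Output = Bits;
    fn add(self, rhs: Bits) -> Bits {
        Bits(self.0.into_iter().chain(rhs.0).collect())
    }
}

// Assign variants: `&mut self`.
impl AddAssign for Bits {
    fn add_assign(&mut self, rhs: Bits) {
        self.0.extend(rhs.0);
    }
}

// Index: `&self` (IndexMut would take `&mut self`).
impl Index<usize> for Bits {
    type Output = u8;
    fn index(&self, i: usize) -> &u8 {
        &self.0[i]
    }
}

fn main() {
    let mut x = Bits(vec![1]) + Bits(vec![2]); // consumes both operands
    x += Bits(vec![3]);                        // mutably borrows x
    assert_eq!(x[2], 3);                       // shared borrow via Index
    assert!(Bits(vec![1]) < Bits(vec![2]));    // PartialOrd: shared borrows
    println!("{:?}", x);
}
```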