All of the (non-assignment) operator traits in std::ops (e.g. Add, Sub, Shl, and so on) consume self.
What was the reasoning behind that decision? Usually, when you want to change the LHS value, you just use the assignment operator (e.g. +=). I understand the reason to consume the RHS, but wouldn't it have been better if they took &self? (Not that this can be fixed now; I'm just interested in the reasons for this decision.)
For primitive integers, it's not an improvement to take them as a reference.
For string concatenation, many languages suffer quadratic copying by allocating every intermediate result in a long concatenation chain (e.g. "name: " + this.name + ", age: " + this.age + ", job: "...). Rust avoids this by consuming the LHS and reusing its buffer when possible.
Big integers are usually implemented with a Vec<u32> or similar, and they have the same problem as String above.
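To make the buffer-reuse point concrete, here's a small sketch (not from the thread): String's Add impl moves the LHS in, appends in place, and moves it back out, so as long as the existing capacity suffices, no new allocation happens. The pointer comparison below checks exactly that.

```rust
fn main() {
    let mut s = String::with_capacity(64);
    s.push_str("name: ");
    let ptr_before = s.as_ptr();

    // `+` consumes `s`, appends into its existing buffer, and returns it.
    // With enough spare capacity, nothing is reallocated or copied over.
    let s = s + "Alice" + ", age: " + "30";

    assert_eq!(s, "name: Alice, age: 30");
    assert_eq!(s.as_ptr(), ptr_before); // same buffer, reused across both `+`s
}
```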
Thanks. I'm still not sure I agree with this, since long string concats can be implemented by a macro (e.g. format!(), but more efficient), and long big-integer concat chains are pretty rare; it seems implausible to downgrade the experience for the common case because of this. But at least I understand now.
That has nothing to do with the trait taking self, though. The library could impl Add<&str> for &str { type Output = String; ... } to make that work if it wanted to, it just has chosen not to. (Well, there are some bits that would be tricky because of coherence, but the standard library could deal with that via compiler tricks if it really wanted to -- it already has a bunch of those for inherent methods on primitives, for example.)
If you want to do that, you can do [s, "abc", " ", "more"].concat() -- no macros needed, and fully efficient.
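For reference, a runnable version of that suggestion (one detail to note: concat wants a slice of a single element type, so a String has to be borrowed as &str to sit next to string literals):

```rust
fn main() {
    let s = String::from("prefix ");
    // `concat` computes the total length up front, allocates once,
    // and copies each piece in -- no quadratic reallocation.
    let joined = [s.as_str(), "abc", " ", "more"].concat();
    assert_eq!(joined, "prefix abc more");
}
```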
Oh, yeah, String + &str already works, so I assumed you didn't mean that one. If s is a String, you don't need to clone it in s + "abc".
I'm not sure what your history is, but if you're familiar with Java, remember that Rust's String is more like Java's StringBuffer than Java's String -- it's made for efficient repeated appending, so moving into the + is really important (as Hyeonu was saying).
If the trait is defined as taking a reference, then every implementation must work by reference. Taking self, though, lets you have a choice: you can write implementations for any or all of T: Add<T>, T: Add<&T>, &T: Add<T>, and &T: Add<&T>.
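A minimal sketch of all four forms on a made-up newtype (the name Meters is just for illustration):

```rust
use std::ops::Add;

#[derive(Clone, Copy, Debug, PartialEq)]
struct Meters(f64);

// By-value: consumes both operands.
impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters { Meters(self.0 + rhs.0) }
}

// RHS by reference: `a + &b`.
impl Add<&Meters> for Meters {
    type Output = Meters;
    fn add(self, rhs: &Meters) -> Meters { Meters(self.0 + rhs.0) }
}

// LHS by reference: `&a + b`.
impl Add<Meters> for &Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters { Meters(self.0 + rhs.0) }
}

// Both by reference: `&a + &b`.
impl Add<&Meters> for &Meters {
    type Output = Meters;
    fn add(self, rhs: &Meters) -> Meters { Meters(self.0 + rhs.0) }
}

fn main() {
    let (a, b) = (Meters(1.0), Meters(2.0));
    assert_eq!(a + b, Meters(3.0));
    assert_eq!(a + &b, Meters(3.0));
    assert_eq!(&a + b, Meters(3.0));
    assert_eq!(&a + &b, Meters(3.0));
}
```

This is exactly what the standard library does for the primitive integers, which is why both `1 + 2` and `&1 + &2` compile.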
I kind of consider impl Add<_> for String a bit of a misfeature. When I was new to Rust about 6 years ago, it seemed great, but that was mostly because the most recent language I'd been using then was Java, i.e. it was a (superficial) recognizability thing.
I consider it a misfeature not so much in terms of performance, but ergonomics. The reason is that when you need to append a lot of different things, it isn't really an improvement over String::push_str().
And at least the latter nicely vertically aligns because they're just regular ol' method calls. That's not to say + "blah" can't be made to do that, but it's more work than for method calls. And I don't even know what would happen if you cargo fmt a project with such concats in it.
Yes, but that's why we have AddAssign. My argument was that while many times you do want the moving, more times you don't, and when you do it's easy to += or push_str(). To me at least, intuitively, + doesn't involve moving or changing the left expression. But I understand the concern with chaining, where you do want it to.
With a + b consuming both sides, it can reuse either or both data structures. This wouldn't work even with += if the rhs was always passed by reference.
Also a - b is able to reuse b, which wouldn't be possible with -= and pass by reference.
In an ideal world I wish a += b was just syntax sugar for a = a + b, given that the latter already consumes both sides. Unfortunately, this doesn't quite work because of the rules of moving from behind references, due to potential panic unwinding.
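To illustrate why the desugaring fails and the usual workaround, here's a sketch on a hypothetical Buf wrapper: inside add_assign, self is a &mut, so a + b can't just move out of it (if add panicked mid-call, unwinding would observe a moved-from value). std::mem::take sidesteps this by leaving a valid Default value behind before the move.

```rust
use std::mem;
use std::ops::{Add, AddAssign};

#[derive(Default, Debug, PartialEq)]
struct Buf(String);

impl Add for Buf {
    type Output = Buf;
    fn add(mut self, rhs: Buf) -> Buf {
        // Consuming `self` lets us reuse its buffer.
        self.0.push_str(&rhs.0);
        self
    }
}

impl AddAssign for Buf {
    fn add_assign(&mut self, rhs: Buf) {
        // `*self = *self + rhs` would not compile: we can't move out
        // from behind `&mut self`. `mem::take` swaps in a default first,
        // so `self` stays valid even if `add` unwinds.
        *self = mem::take(self) + rhs;
    }
}

fn main() {
    let mut a = Buf("foo".into());
    a += Buf("bar".into());
    assert_eq!(a, Buf("foobar".into()));
}
```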