On the contrary, those are the ones that benefit the most. `BigInt + &BigInt` can reuse the allocation of the by-value lhs. You can use `mut` bindings and `+=` to get buffer reuse, but that requires turning a fundamentally functional/declarative/pure operation into an impure imperative one.
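To make the buffer-reuse point concrete, here's a minimal sketch of a toy big integer whose by-value `+` hands the lhs allocation back as the result. The `BigInt`/`limbs` names and the limb representation are made up for illustration, not any real crate's API:

```rust
use std::ops::Add;

// Toy big integer: little-endian base-2^64 limbs. Purely illustrative.
struct BigInt {
    limbs: Vec<u64>,
}

impl Add<&BigInt> for BigInt {
    type Output = BigInt;

    fn add(mut self, rhs: &BigInt) -> BigInt {
        // Grow the lhs buffer in place if needed; otherwise no allocation at all.
        if self.limbs.len() < rhs.limbs.len() {
            self.limbs.resize(rhs.limbs.len(), 0);
        }
        let mut carry = 0u64;
        for (i, limb) in self.limbs.iter_mut().enumerate() {
            let r = rhs.limbs.get(i).copied().unwrap_or(0);
            let (sum, c1) = limb.overflowing_add(r);
            let (sum, c2) = sum.overflowing_add(carry);
            *limb = sum;
            carry = u64::from(c1) + u64::from(c2);
        }
        if carry != 0 {
            self.limbs.push(carry);
        }
        self // the lhs allocation becomes the result
    }
}
```

With that impl, `a + &b` consumes `a` and returns its (possibly grown) buffer; a reference-only `&a + &b` would have to allocate a fresh result every time.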
Part of the selling point of Rust is that you can get imperative performance without having to write imperative code (e.g. iterator combinators).
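For instance (a standard illustration, not from this thread), these two typically compile to the same loop, with no allocation in either:

```rust
// Declarative: iterator combinators, no intermediate collection allocated.
fn sum_of_squares(xs: &[u64]) -> u64 {
    xs.iter().map(|x| x * x).sum()
}

// Imperative equivalent; the optimizer generally produces the same code.
fn sum_of_squares_loop(xs: &[u64]) -> u64 {
    let mut total = 0u64;
    for &x in xs {
        total += x * x;
    }
    total
}
```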
You can argue that saving the allocations doesn't matter as much here, because the operation being done is more expensive, making the relative cost of the allocation lower, and yes! Extra allocation for convenience is good, actually! But Rust goes to great lengths to ensure that (almost) nothing requires allocation unless you ask for it (e.g. for the convenience it buys).
Saying roughly "Unlike Java, you can overload operators and use them on BigInt
! ...Unless you want to avoid a bunch of temporary allocations, at which point you need to go back to using methods like .add(_).sub(_)
etc." seems quite unfortunate.
Is Rust sometimes a bit too eager to eliminate allocations and substitute them with an excess of stack copies? Likely! But one of the major selling points of Rust is that you have control over allocation. I can all but guarantee that we'd get people doing C++ `std::move`-like shenanigans if `+` required references. Writing "alloc optimal" code is a bit of a brain worm among Rust library authors. I know 'cause I do it.
I absolutely agree that the current behavior is far from great, but not to the point that forbidding by-value operations would be a preferable solution. Rather, some autoref behavior seems like a good solution, though doing it for two input types simultaneously is a bit problematic; see e.g. Swift's operator overload lookup woes.
Yes, the ABI will use pass-by-reference transparently for copied/moved parameters. Do what makes sense ownership-wise; let the compiler optimize it from there. Especially if stuff gets inlined; then it literally doesn't matter.
...but it unfortunately does still end up doing memcpys a lot of the time, especially for `Copy` values. The reasons are complicated, but we're working on it, and most of them aren't fundamental, just missed optimizations.
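As a concrete (purely illustrative) case of both points: a large `Copy` type is passed indirectly at the ABI level even though the signature is by value, and the caller often still materializes a stack copy to pass:

```rust
// A type well past the pass-by-register size threshold; name is illustrative.
#[derive(Clone, Copy)]
struct Block([u8; 4096]);

fn checksum(b: Block) -> u64 {
    // Semantically `b` is a by-value parameter; at the ABI level the caller
    // passes a pointer to (usually) a fresh stack copy of the argument.
    b.0.iter().map(|&byte| byte as u64).sum()
}

fn caller(block: Block) -> u64 {
    // Today this often compiles to a 4096-byte memcpy before the call,
    // even when `block` is never used again afterwards.
    checksum(block)
}
```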
(There is at least one somewhat fundamental limitation resulting from `Copy` preventing destructive moves. Combine that with a reference that escapes the compiler's analysis and you can't turn the last copy into a destructive move, because there might still be live references. A funny potential workaround, not yet recognized by the compiler: `*&mut` to invalidate said references.)
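A sketch of that parenthetical, reusing the `Block` type from the previous snippet; `sink` and `last_use` are made-up names, and the comments describe the hoped-for optimization, not something the compiler does today:

```rust
fn sink(b: Block) { let _ = b; }

fn last_use(mut block: Block, stash: &mut *const Block) {
    // A pointer to `block` escapes the compiler's analysis.
    *stash = &block as *const Block;

    // Plain `sink(block)` here has to stay a memcpy: `Block` is `Copy`, so
    // there is no destructive move, and the escaped pointer means the
    // compiler can't prove nobody reads `block` afterwards.
    //
    // The workaround mentioned above: the `&mut` reborrow asserts that no
    // other live references exist, which would in principle let the copy be
    // lowered to a move. The compiler doesn't take advantage of this yet.
    sink(*&mut block);
}
```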