Implementing find_by_name: cannot move out of borrowed content


Would you advise reading the second edition of the book? I see their section on ownership is a bit different (it has pictures :smile:):


Read either; I haven’t looked at the second edition, but I suspect it is better.



Also, this was my first post and I’ve been so happy with all the expert advice and positive comments I got.

This community :heart_eyes: !


@roSievers already provided the answer, but I just wanted to mention that it helps to think of borrowing as, conceptually, a single-threaded reader/writer lock - you can have many immutable borrows XOR a single mutable borrow.

Rust does support disjoint borrows, which would allow this (and even two mutable borrows). But it needs to “know” that the references point to, well, disjoint values. The case where it knows this “out of the box” is struct fields. In the example here, Rust cannot prove that your find_by_name fn returns different values (or rather, refs to them) for the two string keys, even if you happen to know that invariant is maintained.


That’s a very useful analogy! So two immutable borrows are OK (I tried this and it indeed works), but not immutable borrow(s) combined with a mutable borrow.

Does this mean Rust would be able to optimize for out-of-order execution?


So aliasing, in general, is a compiler’s worst nightmare :slight_smile: Rust’s borrow enforcement helps here because the compiler can trust that a value referenced immutably cannot change - that opens up some optimization opportunities for it (e.g. hoisting field reads into a register, and keeping the value in the register/stack spill slot rather than re-reading from memory). It can also re-order operations more freely with immutable borrows.

IIRC, however, Rust currently doesn’t take full advantage of this (something about LLVM breaking when ‘noalias’ attributes are specified in the bitcode, but I don’t know all the details). It’s possible that with MIR (and/or LLVM getting better) the compiler can do a better job.


Just to show how big of an issue it is, especially when dealing with scientific calculations (which often involve vectors and matrices, for which SIMD optimizations work really well). Assume the following code, written in C or what have you:

void add(int *out, const int *a, const int *b, int n) {
    for (int i = 0; i < n; i++) {
        out[i] = a[i] + b[i];
    }
}

This is, well, a straightforward vector addition. It could be optimized by using SIMD instructions to add multiple values at once… haha, no. Doing so is just asking for a bug report. Say the following program was written by a crazy person who decided to use vector addition to calculate Fibonacci sequence numbers (for some reason):

#include <stdio.h>

void add(int *out, const int *a, const int *b, int n); /* defined above */

int main() {
    int ab[10] = {1, 1};
    add(ab + 2, ab, ab + 1, 8);
    for (int i = 0; i < 10; i++) {
        printf("%d\n", ab[i]);
    }
    return 0;
}

This program uses multiple overlapping memory areas. The existence of such a program means that the compiler cannot simply do the additions in bigger blocks, say 128 bits at a time, because writes would then be delayed, while this particular program depends on every write happening in sequence (with the function not delaying its writes to out). Executed in sequence, the program prints the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 (one number per line).

However, when the compiler works under the assumption of no aliasing, a different output may appear (that said, the behaviour is undefined, so the compiler is free to use whatever optimizations it wants). For example, a compiler adding four values at a time might print: 1, 1, 2, 1, 0, 0, 0, 0, 0, 0.

You can see it noticed the old values, but it didn’t notice the updated ones (as the compiler read them before writing them back), so the Fibonacci sequence didn’t appear as it should. The compiler can generate better code for vector addition when it doesn’t have to worry about people writing silly code that reuses the output as input. This is what Rust’s aliasing-XOR-mutability type system provides.