Borrowing rules with returned references to self


I have a borrow checker question:

#[derive(Default)]
struct Foo {
    f1: u64,
    f2: u64,
}

impl Foo {
    fn getf(&mut self, which: usize) -> &u64 {
        match which {
            1 => &self.f1,
            2 => &self.f2,
            _ => panic!(),
        }
    }
}

fn main() {
    let mut f = Foo::default();
    let f1 = f.getf(1);
    let f2 = f.getf(2);
    println!("{f1} {f2}"); // using f1 here keeps the first borrow alive
}

If I compile this, I get an error saying I cannot borrow f as mutable more than once at a time, pointing at the use of f1 as the first mutable borrow of f. Why is that the case here? Why is the use of f1 considered a use of a mutable borrow of f even though f1 is an immutable reference?


It originated from a &mut self, so f1 is considered to hold an exclusive borrow of the whole struct, despite being a simple &u64. Even if the compiler did support "downgrading" that to a shared borrow of self, that would still prevent the second &mut self call.

To get simultaneous mutable references to distinct parts of the struct, you need to borrow them directly, so the compiler can tell they're distinct through local analysis. You can abstract that into a method that returns both references, fn get_both(&mut self) -> (&u64, &u64).
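For illustration, the suggested get_both could look something like this (shown with mutable references so the disjoint borrows are visible; the shared-reference version is analogous — Foo here is the struct from the question with a Default derive added):

```rust
#[derive(Default)]
struct Foo {
    f1: u64,
    f2: u64,
}

impl Foo {
    // Borrowing both fields in one method body lets the compiler see
    // through local analysis that the two references are disjoint.
    fn get_both(&mut self) -> (&mut u64, &mut u64) {
        (&mut self.f1, &mut self.f2)
    }
}

fn main() {
    let mut f = Foo::default();
    let (f1, f2) = f.get_both();
    // Both references are alive at the same time, and both can mutate.
    *f1 = 1;
    *f2 = 2;
    println!("{f1} {f2}");
}
```

The key point is that the split happens inside one borrow-checked body; once the method returns a tuple, the caller holds two references that are tied to the same &mut self borrow but are known to be distinct.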


&mut self allows mutation, and if the code was:

fn getf(&mut self, which: usize) -> &u64 {
    self.f1 = random();
    // ...
}

then you would end up returning an "immutable" reference that gets mutated. You don't have such code in your function, but the borrow checker checks against interfaces, not implementations.

Thanks (and thanks to kornel!), I think I understand why this isn't allowed. I'm struggling to figure out an alternative to my specific problem though. What I'm trying to do is have an API that looks sort of like:

impl Cache {
    fn get(&self, object_id: u64) -> &Object;
}

and then be able to do something like:

let c = /* construct cache */;
let o1 = c.get(0);
let o2 = c.get(5); // Maybe I learned about 5 by inspecting o1
// ...

This is all single-threaded, but the problem I'm running into is that I want to be able to have multiple outstanding references to objects owned by the cache, but the cache also needs to be mutable so that it can insert elements into the cache in the implementation of get. So, based on that, I believe I need to make the internal data structure have interior mutability and I currently have:

use std::cell::{Ref, RefCell};
use std::collections::HashMap;
use std::ops::Deref;

type ObjectId = u64;

struct Object {
    payload: Vec<u8>,
}

#[derive(Default)]
struct Cache {
    cache: RefCell<HashMap<ObjectId, Object>>,
}

impl Cache {
    fn get_object<'a>(&'a self, oid: ObjectId) -> impl Deref<Target = Object> + 'a {
        // fill the cache on demand; this borrow_mut is what conflicts
        // with any Ref still held by the caller
        self.populate_cache(oid);
        Ref::map(self.cache.borrow(), |m| m.get(&oid).unwrap())
    }

    fn populate_cache(&self, oid: ObjectId) {
        let mut b_cache = self.cache.borrow_mut();
        if b_cache.contains_key(&oid) {
            return;
        }
        b_cache.insert(oid, Object {
            payload: vec![0; 1024],
        });
    }
}

fn main() {
    let cache = Cache::default();
    let p0 = cache.get_object(0);
    let p1 = cache.get_object(1);
    println!("{:#?} {:#?}", p0.payload, p1.payload);
}

This panics though because (I think) p0 really has a borrow out on the internal map, so when I make the next get_object call the RefCell panics because I try to borrow the cache again.

Any tips / pointers on what sort of pattern I should be using here?

What if you insert into the hashmap and it reallocates? Then all your multiple outstanding references would become invalid. This is why the borrow checker protects you. RefCell would protect you at runtime instead, but it still fails with a panic if you try to mutate it while there are outstanding references.
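A minimal sketch of that runtime behavior, using try_borrow_mut so the conflict shows up as an Err instead of a panic:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(vec![1, 2, 3]);

    let shared = cell.borrow(); // outstanding shared borrow, like a Ref from Ref::map
    // A plain borrow_mut() here would panic with "already borrowed";
    // try_borrow_mut() reports the same conflict as an Err instead.
    assert!(cell.try_borrow_mut().is_err());

    drop(shared); // release the shared borrow...
    assert!(cell.try_borrow_mut().is_ok()); // ...and mutation is allowed again
}
```

This is exactly the situation in the Cache example: p0 keeps a shared borrow of the RefCell alive, so the borrow_mut inside the next get_object call fails.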

You may want to look into thread-safe hashmaps such as evmap or chashmap, but watch out for deadlocks: if you ask for an item while you already hold a lock on it elsewhere in the same thread, you deadlock. chashmap in particular appears to lock the entire map on reallocation, so a reallocation triggered while the same thread holds another lock would wait for that lock to be released — which never happens, because the thread that holds it is the one stuck waiting.

Another approach is to reference count every value in the hashmap. This way you can return objects from the cache without keeping a lock on the hashmap around.
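As a sketch of how that reference-counting approach might look for the single-threaded Cache above (the Rc handles and or_insert_with population are one possible arrangement, not the only one):

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

type ObjectId = u64;

struct Object {
    payload: Vec<u8>,
}

#[derive(Default)]
struct Cache {
    cache: RefCell<HashMap<ObjectId, Rc<Object>>>,
}

impl Cache {
    fn get_object(&self, oid: ObjectId) -> Rc<Object> {
        let mut map = self.cache.borrow_mut();
        let obj = map
            .entry(oid)
            .or_insert_with(|| Rc::new(Object { payload: vec![0; 1024] }));
        // Cloning the Rc hands back an owned handle, so the RefCell
        // borrow ends when this method returns.
        Rc::clone(obj)
    }
}

fn main() {
    let cache = Cache::default();
    let p0 = cache.get_object(0);
    let p1 = cache.get_object(1); // no panic: no borrow is outstanding
    println!("{} {}", p0.payload.len(), p1.payload.len());
}
```

Because the caller holds an Rc<Object> rather than a Ref into the map, the map can be mutated (or even reallocate) while p0 and p1 are alive; the objects themselves live as long as any handle does.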

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.