Cannot borrow *self as mutable more than once

I have a simple problem with two simple workarounds, neither of which I like.

fn recv_data(&mut self) {
    let data = self.recv();
    self.on_recv(data);
}

fn recv(&mut self) -> &[u8] { ... }
fn on_recv(&mut self, data: &[u8]) { ... }

I don't control the return type from recv().

Not surprisingly, I'm getting

error[E0499]: cannot borrow `*self` as mutable more than once at a time
   --> src/bin/
158 |         let data = self.recv();
    |                     --------- first mutable borrow occurs here
160 |         self.on_recv(data);
    |         ^^^^         ----- first borrow later used here
    |         |
    |         second mutable borrow occurs here

One workaround is to pass a copy of data to on_recv, something I'd like to avoid. Another is to replace the call to on_recv with the contents of the function. Is there another approach that I might like better?

I assume that recv() would be populating a buffer owned by self. In that case recv() does not need to return the slice, instead it could return:

  1. nothing,
  2. a usize representing the starting index of the appended data,
  3. a Range of the received data.

The on_recv function would also need altering, since there is no way to call a &mut self method and pass it a reference to data owned by self (which, if self contains a buffer, is what would be happening).
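A sketch of option 3, using a hypothetical Conn type that owns the buffer (the type names, fields, and the fake I/O are assumptions for illustration, not the pnet API):

```rust
use std::ops::Range;

struct Conn {
    buffer: Vec<u8>,
    received: usize, // bytes handled so far
}

impl Conn {
    // Stand-in for real I/O: appends into the owned buffer and returns
    // the range of the newly appended bytes instead of a slice.
    fn recv(&mut self) -> Range<usize> {
        let start = self.buffer.len();
        self.buffer.extend_from_slice(b"packet");
        start..self.buffer.len()
    }

    // Takes indices rather than a slice borrowed from self, so calling
    // it doesn't conflict with any borrow of self.buffer.
    fn on_recv(&mut self, data: Range<usize>) {
        let bytes = &self.buffer[data];
        self.received += bytes.len();
    }

    fn recv_data(&mut self) {
        let data = self.recv(); // just a Range<usize>, no borrow held
        self.on_recv(data);     // so this second &mut self call is fine
    }
}
```

Because recv returns owned indices instead of a reference, the first mutable borrow ends before on_recv is called, and E0499 goes away.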

Do you mind sharing your data structure that these functions are being implemented on?


I'm using pnet::datalink for speaking Ethernet. It provides an rx/tx pair, which is where the requirement for mut comes from, and it returns the &[u8] on a receive.

Possibly relevant. Nice API to temporarily detach a field?

If it's practical, you could split the fields of Self between two types, of which one has recv and the other on_recv. Then it becomes as simple as

    fn recv_data(&mut self) {
        let Self { receiver, doer } = self; // split the borrow
        let data = receiver.recv();
        doer.on_recv(data);
    }

If the "doer" and the "receiver" overlap such that you can't split up Self like this, then manually inlining on_recv might be your best bet, or see this post by nikomatsakis for more ideas.
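A minimal sketch of that split, with hypothetical Receiver and Doer types standing in for the rx and tx halves (the fields and the fake I/O are assumptions):

```rust
struct Receiver {
    buf: Vec<u8>,
}

struct Doer {
    count: usize,
}

impl Receiver {
    // Stand-in for real I/O; the returned slice borrows only `self`,
    // i.e. only the Receiver half.
    fn recv(&mut self) -> &[u8] {
        self.buf.extend_from_slice(b"frame");
        &self.buf
    }
}

impl Doer {
    fn on_recv(&mut self, data: &[u8]) {
        self.count += data.len();
    }
}

struct Conn {
    receiver: Receiver,
    doer: Doer,
}

impl Conn {
    fn recv_data(&mut self) {
        let Self { receiver, doer } = self; // split the borrow
        let data = receiver.recv(); // mutably borrows only `receiver`
        doer.on_recv(data);         // mutably borrows only `doer` — OK
    }
}
```

The borrow checker tracks the two fields separately once they are destructured into distinct bindings, so the two mutable borrows no longer overlap.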


Another option is to carry a re-usable buffer.

// Assuming Self has buffer: Vec<u8>
fn recv_data(&mut self) {
    let mut data = std::mem::take(&mut self.buffer); // move the buffer out of self
    data.clear();
    data.extend(self.recv()); // copy the received bytes into the local buffer
    self.on_recv(&data);
    self.buffer = data; // put the buffer back for reuse
}

fn recv(&mut self) -> &[u8] { ... }
fn on_recv(&mut self, data: &[u8]) { ... }

There's no buffer in self, but I can add one. However, it looks like your solution copies the data. Does it?

Looks useful, but not in this case. In my code self is just a wrapper around the tx and rx. The data behind the slice is buried somewhere in the pnet library.

That's what I ended up doing. One struct holds rx and implements recv(); the other holds tx and implements on_recv(). The refactoring wasn't as big a deal as I thought it would be, but I don't think the code is as easy to follow as the original. Hey, but at least it works without copying the data.


Yeah, it does copy the data.
In my experience, copying byte slices into temporary buffers is quick, and since the buffer is reused, the allocation cost is amortised.
It should be benchmarked though to see if copying the data is the bottleneck.

But what is the upside to copying the data if it can be worked around with simple refactoring?

Edit: I had missed this note: "I don't think the code is as easy to follow as the original"

I've got a version that copies the slice to an array, which I was trying to avoid. It looks like there isn't a simple, copy-free approach, though.

There might be other considerations as well, such as the recv function mutating more than just the receiver field (for example, updating a log string or counter).

I'm getting used to it. We'll see how well it stands up in a code review.


I may have pointed you in the wrong direction with Vec::extend. There is a specialised function on Vec (extend_from_slice) which would be more performant. Still copying though...

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.