Taking `r: &mut R` vs `mut r: R`, where `R: io::Read`

There's this trait used for recursive(!) deserialization of a binary format:

pub trait Decodable: Sized {
    /// Decode an object with a well-defined format
    fn consensus_decode<R: io::Read>(r: R) -> Result<Self, Error>;
}

Since decoding is most often recursive (decoding all the subfields using the same reader), having to pass ownership to consensus_decode leads to code that looks like this:

impl Decodable for Foo {
    fn consensus_decode<R: io::Read>(mut r: R) -> Result<Self, Error> {
        Ok(Foo {
            field1: Decodable::consensus_decode(&mut r)?,
            field2: Decodable::consensus_decode(&mut r)?,
            field3: Decodable::consensus_decode(r)?,
        })
    }
}

As you can see, we have to pass r by mutable reference to all but the last call, where we can give up ownership. This is somewhat inconvenient, but there was also a suspicion that it leads to a needless explosion in monomorphisation, as consensus_decode has to be generated for an ever-growing nesting of types: SomeReader, &mut SomeReader, &mut &mut SomeReader, &mut &mut &mut SomeReader, and so on.
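
To make the nesting concrete, here is a minimal sketch with hypothetical Inner/Outer types (not from the actual crate). It relies on std's blanket impl `impl<R: Read + ?Sized> Read for &mut R`, which is what lets `&mut r` satisfy the bound, and also what produces a fresh instantiation per nesting level:

use std::io::{self, Read};

// Hypothetical types, purely for illustration.
struct Inner(u8);
struct Outer(Inner, Inner);

fn decode_inner<R: Read>(mut r: R) -> io::Result<Inner> {
    let mut buf = [0u8; 1];
    r.read_exact(&mut buf)?;
    Ok(Inner(buf[0]))
}

fn decode_outer<R: Read>(mut r: R) -> io::Result<Outer> {
    // decode_inner is instantiated for `&mut R`, a different type from the `R`
    // that decode_outer itself was called with; each extra level of structure
    // adds another `&mut` layer to the set of instantiations.
    let a = decode_inner(&mut r)?;
    let b = decode_inner(r)?;
    Ok(Outer(a, b))
}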

To test this theory I've changed the code to use:

pub trait Decodable: Sized {
    /// Decode an object with a well-defined format
    fn consensus_decode<R: io::Read>(r: &mut R) -> Result<Self, Error>;
}

so that every implementation can simply pass r along:

impl Decodable for Foo {
    fn consensus_decode<R: io::Read>(r: &mut R) -> Result<Self, Error> {
        // `r` is implicitly reborrowed at each call, so it can be passed
        // to every field without any &mut nesting.
        Ok(Foo {
            field1: Decodable::consensus_decode(r)?,
            field2: Decodable::consensus_decode(r)?,
            field3: Decodable::consensus_decode(r)?,
        })
    }
}
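
For comparison, here is a caller-side sketch (assuming the Foo impl above and the crate's own Error type, with a Cursor over a byte slice as the reader): only one copy of consensus_decode per concrete reader type gets monomorphised now.

fn read_foo(bytes: &[u8]) -> Result<Foo, Error> {
    let mut cursor = std::io::Cursor::new(bytes);
    // Only consensus_decode::<Cursor<&[u8]>> is instantiated here; the nested
    // &mut &mut ... instantiations are gone.
    Foo::consensus_decode(&mut cursor)
}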

And indeed, the simple test suite binary shrank by 900K in debug mode and 100K in release mode, with performance staying roughly the same.

Seems like a clear win, but I just want to get a second opinion. Does anyone see any downside or problem with this approach?

No, I don't think there's any downside to doing this.

