Should you implement `Deref` for newtype wrappers?

At a high level, `T` should implement `Deref` if and only if `&T` should always be usable as if it were `&Target`, and, equivalently from the API's side, any API asking for `&Target` should accept `&T`, with no value of `T` that violates that expectation.
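For instance, here's a minimal sketch of that rule, using a hypothetical `Validated` newtype whose only job is to hold an already-checked `String`:

```rust
use std::ops::Deref;

// Hypothetical marker wrapper: it adds no behavior of its own, so a
// &Validated should be usable anywhere a &str is -- which is what Deref says.
struct Validated(String);

impl Deref for Validated {
    type Target = str;

    fn deref(&self) -> &str {
        &self.0
    }
}

// An API written purely against &str...
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let v = Validated("hello".to_owned());
    // ...accepts &Validated via deref coercion, and str's methods are
    // reachable through auto-deref as well.
    assert_eq!(shout(&v), "HELLO");
    assert_eq!(v.len(), 5);
}
```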

Furthermore, `T` should have no methods (that are callable with method syntax) whose logical semantics differ from those of the corresponding method on `Target`. If `Target` is a generic type (e.g. `Box<T>: Deref<Target = T>`), this essentially means `T` should have no methods at all. If `T` and `Target` are defined by the same author (e.g. `String: Deref<Target = str>`), then `T` can have its own methods, since the author can avoid creating a method name conflict. But, sneakily, if `T` and `Target` come from different crates, defining methods on `T` is a potential future problem: `Target` may also decide to define a method with the same name later.
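Here's a small sketch of that conflict, using a hypothetical `Tagged` wrapper whose inherent `len` shadows `str::len` (inherent methods win during method resolution):

```rust
use std::ops::Deref;

// Hypothetical wrapper that derefs to str but also defines its own `len`.
struct Tagged(String);

impl Deref for Tagged {
    type Target = str;

    fn deref(&self) -> &str {
        &self.0
    }
}

impl Tagged {
    // Inherent methods are found before Deref-target methods, so this
    // silently shadows str::len. Because it counts chars instead of bytes,
    // `t.len()` and `(&*t).len()` disagree -- exactly the kind of semantic
    // divergence the guideline warns against.
    fn len(&self) -> usize {
        self.0.chars().count()
    }
}

fn main() {
    let t = Tagged("héllo".to_string());
    assert_eq!(t.len(), 5);     // the wrapper's inherent method (chars)
    assert_eq!((&*t).len(), 6); // str::len (bytes)
}
```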

So, as a simple, first-order takeaway: if the wrapper is a trivial marker, it can implement `Deref`. If the wrapper's entire purpose is to manage its inner type, without altering that type's existing semantics, it should implement `Deref`. If `T` behaves differently from `Target` in any usage that would compile for `Target`, it shouldn't implement `Deref`.
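As a sketch of that last case, consider a hypothetical `CaseInsensitive` wrapper whose equality semantics differ from `str`'s; that divergence is what makes `Deref` a poor fit:

```rust
// Hypothetical wrapper whose comparisons intentionally differ from its
// inner type's, so it deliberately does NOT implement Deref<Target = str>.
struct CaseInsensitive(String);

impl PartialEq for CaseInsensitive {
    fn eq(&self, other: &Self) -> bool {
        self.0.eq_ignore_ascii_case(&other.0)
    }
}

fn main() {
    // The wrapper says these are equal...
    assert!(CaseInsensitive("Rust".into()) == CaseInsensitive("rust".into()));
    // ...but any &str API reached through a Deref impl would say otherwise,
    // quietly changing what "equal" means for code written against &str.
    assert!("Rust" != "rust");
}
```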

And a second-order takeaway: if ownership semantics would allow you to implement `DerefMut` but other concerns make doing so questionable, you usually shouldn't implement `Deref` either. (There are of course exceptions, e.g. the `Lazy*` types.)
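For example, take a hypothetical invariant-holding `SortedVec`: ownership-wise a `DerefMut<Target = [u32]>` impl would be trivial to write, but it would let callers reorder the elements and silently break the invariant, which by this takeaway is a hint to skip `Deref` as well and expose explicit accessors instead:

```rust
// Hypothetical invariant-holding wrapper: the Vec is always kept sorted.
struct SortedVec(Vec<u32>);

impl SortedVec {
    fn new(mut v: Vec<u32>) -> Self {
        v.sort_unstable();
        SortedVec(v)
    }

    // Explicit, read-only access instead of a blanket Deref impl; a DerefMut
    // impl handing out &mut [u32] would let callers break the sort order.
    fn as_slice(&self) -> &[u32] {
        &self.0
    }
}

fn main() {
    let s = SortedVec::new(vec![3, 1, 2]);
    assert_eq!(s.as_slice(), &[1, 2, 3]);
}
```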
