Good write-up! I want to call out two specific points you made that, while both fairly simple and "obviously true" in hindsight, were not necessarily "obvious", in that I'd never quite thought of them directly until you phrased them so clearly.
The first is right at the beginning: that basic ownership and drop semantics actually qualify as a two-state instance of this pattern. We're doing it all the time, and the rest is not an additional pattern, but rather just ways of adding further states. Just a small perceptual shift, but a useful perspective. You tie it very nicely to the conclusion that these basic semantics are at the root of it all, in comparison to other languages.
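To make that perceptual shift concrete, here's a minimal sketch of what I mean (the Connection type is hypothetical, not from the post): the two states are "value exists" and "value has been moved/dropped", and a method taking `self` by value is the one-way transition between them.

```rust
// Ownership as a two-state typestate: a live handle vs. a consumed one.
struct Connection {
    id: u32,
}

impl Connection {
    fn open(id: u32) -> Connection {
        Connection { id }
    }

    // Usable only while the value is alive.
    fn send(&mut self, _msg: &str) {
        // ... use the live connection ...
    }

    // Taking `self` by value moves into the "closed" state:
    // the compiler rejects any use of the handle after this call.
    fn close(self) -> u32 {
        self.id
    }
}

fn main() {
    let mut conn = Connection::open(7);
    conn.send("hello");
    let id = conn.close();
    // conn.send("again"); // compile error: use of moved value `conn`
    assert_eq!(id, 7);
}
```

No extra state enums, no runtime checks: the two states are encoded entirely in whether the value is still owned.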
The second is in this comment with regard to serde:

"Serializer is a trait that third-parties will implement to define new data formats. Because the trait is specified using the typestate pattern, it's basically impossible for an implementation to misbehave using safe code, except for randomly panicking. I suspect this robustness is part of the reason for serde's success."
Having implemented this trait several times while essentially having no clue what I am doing, I can attest to this directly.
The example I would have naturally thought of was the design of the embedded_hal traits. A lot of work went into getting those right, so that they'd be ergonomic for end-users as well as nicely implementable for HAL implementors on various different hardware.
But the serde example is much better. In the HAL case, the implementors of specific hardware crates are working at a lower level again, and are easily thought of as wizards and experts in the domain. It's very easy, as an end-user, to think of them all of a piece, as parts of a coherent ecosystem - which is of course the intent.
What the serde example really points out is that the trait pattern can be used to constrain - that is, to provide safety guarantees for - external end-user implementations as well. A library author can provide an enforceable typestate pattern of traits for an end-user consumer of the library to implement, safely.
And, of course, to the compiler there's not much difference between these cases, once the realisation has hit.
Finally, a wish:
"Optionally (not shown above), you can also have the operation return the reference to self, which lets the user choose whether or not to use method chaining."
I wish you had shown that example, because I vastly prefer using libraries that way. Every so often I trip over one that didn't provide this, and it always feels a little cumbersome - like a missed opportunity. Since I suspect people will be referring to this post for a while to come, it would be nice to show "best practice", though I know you left it out because it wasn't really an important part of your point.
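For the record, here's the shape I mean (a hypothetical builder, not from the post): each setter returns `&mut Self`, so the caller gets to choose between chaining and one-statement-at-a-time, with the same API.

```rust
// Setters returning `&mut Self` make chaining optional, not mandatory.
#[derive(Default)]
struct Request {
    url: String,
    retries: u32,
}

impl Request {
    fn url(&mut self, url: &str) -> &mut Self {
        self.url = url.to_string();
        self
    }

    fn retries(&mut self, n: u32) -> &mut Self {
        self.retries = n;
        self
    }
}

fn main() {
    // Chained:
    let mut a = Request::default();
    a.url("https://example.org").retries(3);

    // Or step by step - same methods, no chaining required:
    let mut b = Request::default();
    b.url("https://example.org");
    b.retries(3);

    assert_eq!(a.retries, b.retries);
}
```

The cost to the library author is a trailing `self` per setter; the benefit is that neither calling style feels like a second-class citizen.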