I don't see how, in a principled type system, a u32 is any more special than any other type we can build with a module. A u32 also has its own built-in module that enforces its range of integers from 0 to (2^32 - 1).
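To make that concrete, here is a rough Rust sketch (the Percent type and its 0 to 100 range are made up for illustration): a user-defined module can enforce a range invariant in the same way that u32's built-in machinery enforces its own range.

```rust
// Hypothetical example: a module-enforced range invariant, analogous to u32's.
mod percent {
    #[derive(Clone, Copy, Debug)]
    pub struct Percent(u32); // field is private to this module

    impl Percent {
        /// The only way to construct a Percent; the 0..=100 invariant
        /// is enforced here, at a single point.
        pub fn new(value: u32) -> Option<Percent> {
            if value <= 100 { Some(Percent(value)) } else { None }
        }

        pub fn value(self) -> u32 {
            self.0
        }
    }
}

fn main() {
    if let Some(p) = percent::Percent::new(42) {
        println!("got {}", p.value());
    }
    assert!(percent::Percent::new(150).is_none()); // rejected by the module
}
```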
The language allows us to write to Self from trait implementations. If this writing is inconsistent with the invariants of another trait, then in the presence of A) shared references to mutable instances and/or B) lack of encapsulation, those invariants stated locally on the trait are no longer a single-point-of-truth. Whereas with A and B prevented everywhere, the single-point-of-truth is maintained. Your astute example showed that if we enforce the invariants in the data type, and the traits create no additional invariants that depend on writing to Self, then we don't need to prevent A in order to trust the locality of the trait invariants, i.e. that the trait invariants are a single-point-of-truth.
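Here is a rough Rust sketch of that failure mode (Evenish, Halvable, and Bump are made-up names): the field is left public, i.e. case B, so one trait's impl can write to Self in a way that silently breaks the invariant another trait stated locally.

```rust
// Hypothetical example: lack of encapsulation lets one trait impl break
// an invariant that another trait's contract relies on.
pub struct Evenish {
    pub n: u32, // public field: the "n is even" invariant is not encapsulated
}

trait Halvable {
    /// Locally stated invariant: assumes self.n is even, so halving is exact.
    fn half(&self) -> u32;
}

trait Bump {
    fn bump(&mut self);
}

impl Halvable for Evenish {
    fn half(&self) -> u32 {
        self.n / 2
    }
}

impl Bump for Evenish {
    fn bump(&mut self) {
        self.n += 1; // writes to Self inconsistently with Halvable's invariant
    }
}

fn main() {
    let mut e = Evenish { n: 4 };
    e.bump(); // n is now 5; Halvable's locally stated invariant no longer holds
    println!("n = {}, half = {}", e.n, e.half()); // prints "n = 5, half = 2"
}
```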
When we characterize what is unsafe in a language, in my opinion we are stating what the compiler doesn't enforce. As you point out, in C and C++ this level of unsafety is very low-level. I believe our goal is always to raise the level of safety, i.e. the class of errors we can trust can't occur due to human error (while adding the least tsuris and complexity to achieve it). It is a never-ending goal of improvement. There will never be a 100% safe way to program.
I have no argument with the fact that we can create incorrect semantics in our code; that is a bug in our code and not the responsibility of the compiler. It is impossible for the compiler to enforce every invariant in the semantics of our program, because the potential entropy is unbounded. My point is that we can delineate the constructs (design patterns) where the unsafety begins in a language, and the constructs we can trust are enforced.
The devil is in the details. As you astutely showed, intersection of data types is no problem if the data types enforce their invariants and the traits built on top of them don't add additional unenforced invariants. Immutability is another lever to consider. I am contemplating/conjecturing that there may be a way to do intersection types that increases our degrees-of-freedom for composition without adding any new unsafety, but I am not sure about that.
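For the safe case, here is a rough Rust sketch using an A + B trait bound as a stand-in for an intersection type (all names are made up): the data type enforces its invariant in its constructor and both traits only read, so composing them adds no new unsafety.

```rust
// Hypothetical example: composing two traits is safe because the data type
// enforces the invariant and neither trait adds an unenforced one.
pub struct NonZero(u32);

impl NonZero {
    pub fn new(n: u32) -> Option<NonZero> {
        if n != 0 { Some(NonZero(n)) } else { None }
    }
}

trait Reciprocal {
    fn reciprocal(&self) -> f64;
}

trait Describe {
    fn describe(&self) -> String;
}

impl Reciprocal for NonZero {
    fn reciprocal(&self) -> f64 {
        1.0 / self.0 as f64 // safe: the data type guarantees self.0 != 0
    }
}

impl Describe for NonZero {
    fn describe(&self) -> String {
        format!("non-zero value {}", self.0)
    }
}

// The "intersection": a caller may require both capabilities at once.
fn report<T: Reciprocal + Describe>(x: &T) -> String {
    format!("{} has reciprocal {}", x.describe(), x.reciprocal())
}

fn main() {
    if let Some(n) = NonZero::new(4) {
        println!("{}", report(&n));
    }
}
```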
My point is that if we have encapsulation, and the invariants of the traits depend only on enforced writing to Self, then immutability eliminates the ability to break the invariants between traits. Single-point-of-truth (not the Wikipedia definition; I think I really mean orthogonality of trait invariants) is a key desired attribute of a programming language. You showed that if the traits don't add any invariants depending on writing to Self that aren't already enforced by the encapsulation of the data type, then immutability is not needed.
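Here is a rough Rust sketch of the immutability lever (names are made up): no public fields and no &mut self methods are exposed, so the only way a trait impl outside the module can "write to Self" is to return a fresh instance through the checked constructor, which keeps every invariant anchored at that single point.

```rust
// Hypothetical example: an immutable, encapsulated type whose invariant
// cannot be broken by trait impls outside its module.
mod sorted {
    #[derive(Clone, Copy, Debug)]
    pub struct Sorted2(u32, u32); // invariant: .0 <= .1; fields stay private

    impl Sorted2 {
        /// The single point where the ordering invariant is established.
        pub fn new(a: u32, b: u32) -> Sorted2 {
            if a <= b { Sorted2(a, b) } else { Sorted2(b, a) }
        }
        pub fn min(self) -> u32 { self.0 }
        pub fn max(self) -> u32 { self.1 }
    }
}

use sorted::Sorted2;

trait Shift {
    fn shift(&self, by: u32) -> Self;
}

impl Shift for Sorted2 {
    fn shift(&self, by: u32) -> Sorted2 {
        // No public fields and no `&mut self` methods are exposed, so the
        // only way to "write to Self" here is via the checked constructor.
        Sorted2::new(self.min() + by, self.max() + by)
    }
}

fn main() {
    let s = Sorted2::new(9, 3).shift(1);
    assert!(s.min() <= s.max());
    println!("{:?}", s);
}
```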
I aim to have non-muddled, orthogonal, and consistent definitions of concepts. I like to think in terms of generative-essence abstract rules and concepts, because I am loath to memorize a plurality of special cases, which is very prone to human error. Please correct me if I am mistaken; it is much appreciated.
Edit: I suspect we may share the concern about Scala that it mixes too many paradigms (e.g. subclassing and implicit type classes), so there is no realistic way to delineate the boundaries of safe patterns. Well, that is my suspicion, but it is not proven.