Okay, I didn't understand that the implementation employs a raw pointer, which turns off all borrow checking. That is why I thought it was employing a borrow-checked pair; now I understand it instead turns off all safety in user-land code.
If the implementation allows overlapping ranges, then it is no longer safe. Perhaps some algorithms may require overlapping ranges.
The borrow checking is doing nothing to help ensure safety with multiple mutable iterators. We are relying on the semantics of the implementation, which the borrowck can't check.
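To make the point concrete, here is a minimal sketch (my own illustration, not the actual library code) of how a "two disjoint mutable slices" API is typically built on raw pointers, in the style of `slice::split_at_mut`. The borrow checker accepts the `unsafe` block on trust; only the runtime `assert!` prevents the two `&mut` slices from overlapping:

```rust
// Hypothetical sketch of a split_at_mut-style API built on raw pointers.
// The borrow checker cannot verify that the two returned slices are
// disjoint; that invariant lives entirely in this unsafe implementation.
fn split_two(v: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = v.len();
    assert!(mid <= len); // without this runtime check, the ranges could overlap or overrun
    let ptr = v.as_mut_ptr();
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let (a, b) = split_two(&mut data, 2); // two simultaneous &mut views
    a[0] = 10;
    b[0] = 30;
    assert_eq!(data, [10, 2, 30, 4]);
}
```

If the `assert!` were removed and `mid > len` were passed, the function would hand out overlapping (or out-of-bounds) `&mut` slices with no complaint from the compiler, which is exactly the safety hole described above.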
--- no reply required to the following; I'm just expressing my philosophy as it currently stands ---
What I am trying to explain is that philosophically I support a PL design principle that type systems are valuable when they aid documentation, clarity, simplicity, and consistency, not when they must be routinely avoided and cast away. Type systems can't, and shouldn't, have the goal of catching all bugs. Of course I want a compiler to check for errors that don't require a globally enforced contract which is routinely cast away (especially silently).
Afaics, global lifetime and borrow checking is fundamentally fighting against the fact that the universe is composed of partial orders, not a total order. And for what gain? To guarantee non-aliasing contracts only where they haven't been SILENTLY turned off with raw pointers. Where is the consistency, clarity, etc.? Afaik, the significant gains from tracking mutability come from partitioning to enable parallelism, or from immutability, which enables concurrent scaling; but these are typically high-level, macro-level design decisions. A low-level check that attempts to enforce a global, total order seems to me to be incongruent with the physics of our existence.
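By "silently turned off" I mean that nothing at the call site distinguishes a fully borrow-checked function from one that bypasses the checker internally. A small illustration with two hypothetical functions sharing the same safe signature:

```rust
// From the caller's perspective, these two functions are indistinguishable:
// both have an ordinary safe signature.
fn checked(x: &mut i32) -> &mut i32 {
    x // verified end-to-end by the borrow checker
}

fn unchecked(x: &mut i32) -> &mut i32 {
    // Round-trips through a raw pointer; the borrow checker verifies
    // nothing inside this block. (This particular use happens to be
    // sound, but the compiler cannot tell either way.)
    unsafe { &mut *(x as *mut i32) }
}

fn main() {
    let mut a = 1;
    *checked(&mut a) += 1;
    *unchecked(&mut a) += 1;
    assert_eq!(a, 3);
}
```

The non-aliasing guarantee the caller believes it has from `unchecked`'s signature rests entirely on the body's unchecked semantics.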
I do roughly envision a possibly very significant advantage in compile-time checking the borrowing of memory lifetimes local to the function body, if that can help offload the destruction of temporary objects from the GC, and if it won't require lifetime parameters. Thrashing the GC with temporary objects appears to be one of the most significant weaknesses of GC. That only enforces a partial order, and doesn't attempt to compile-time check a global consistency which can't exist. In short, I don't want a type system that silently lies to me very often. As another poignant example, subclassing is an anti-pattern because it routinely lies about the implicit invariants.
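A sketch of what such function-local checking buys, using Rust as the reference point: the borrows and temporaries below never escape the function body, so no lifetime parameters are needed (elision covers the signature), and the temporaries are freed deterministically at scope end rather than left for a GC to sweep.

```rust
// Function-local borrows only: lifetime elision means no annotations,
// and every temporary dies at a statically known point.
fn shout(name: &str) -> String { // elided lifetime: no parameter needed
    let greeting = String::from("hello, "); // heap-allocated temporary
    let combined = greeting + name; // `greeting` is consumed here
    combined.to_uppercase() // returns a fresh String
} // `combined` is freed at this brace, deterministically, with no GC involved

fn main() {
    assert_eq!(shout("world"), "HELLO, WORLD");
}
```

This is the partial-order case: the compiler only needs to reason within one function body, never about a whole-program ordering of borrows.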
I like typeclasses without subtyping, because afaik they don't ever silently lie, unless you somehow cast them away. The way you cast away traits is by destructuring the trait object with RTTI, employing some cast to a data type such as a match-case or instanceOf. But at least it is explicit, not silent, in every instance. This becomes an anti-pattern in the sense that it disallows extension in one dimension of the Expression Problem. That is why I devised a complete solution to both dimensions of the Expression Problem.
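In Rust terms, that explicit destructuring is what `std::any::Any` downcasting looks like. A sketch (my own example) showing both points: the cast is visible at the use site, but matching on concrete types closes the code to new variants, the Expression Problem dimension mentioned above:

```rust
use std::any::Any;

// RTTI-style destructuring of a trait object: explicit, never silent,
// but closed to extension with new types.
fn describe(value: &dyn Any) -> String {
    if let Some(n) = value.downcast_ref::<i32>() {
        format!("i32: {}", n)
    } else if let Some(s) = value.downcast_ref::<String>() {
        format!("String: {}", s)
    } else {
        // Supporting a new type means editing this function: the closed
        // dimension of the Expression Problem.
        "unknown type".to_string()
    }
}

fn main() {
    assert_eq!(describe(&42i32), "i32: 42");
    assert_eq!(describe(&String::from("hi")), "String: hi");
    assert_eq!(describe(&3.14f64), "unknown type");
}
```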
P.S. I was drawn to Rust by its excellent typeclasses (traits) and crate system. It is unfortunate that I can't adopt Rust for my current project because of:
- The emphasis on low-level, global borrow checking for resource lifetimes and mutability.
- Lack of GC semantics.
- Lack of a runtime for asynchronous programming.
- Lack of a JavaScript compile-output target (which would provide the JIT-compilation portability I need).
I do realize the above are due to Rust's different focus and target use cases.