I think before we can meaningfully continue, we should discuss history. Otherwise we would just go around in circles, endlessly.
The difference between “high-level languages” and “low-level languages” goes back 40-50 years. First-generation programming languages were machine codes (yes, the IBM 1401 had human-readable machine code and you could write programs directly in it). Second generation: assemblers (firmly tied to the architecture of a particular computer). Third generation: machine-independent languages. The fourth generation was supposed to free the programmer from the need to think about memory or pointers, and with the fifth generation programs would be written by non-programmers, and nobody would talk about code in terms specific to the actual hardware implementation of computers anymore. Japanese businessmen take over the world without knowledge of English and history is altered permanently.
That was the vision. If it had been realized, we wouldn't need this talk about “high-level languages” or “low-level languages”. We would talk about 4GLs and 5GLs (or maybe even 6GLs and 7GLs).
But… it failed. Utterly and completely. Not only have we not switched to 5GL languages, we haven't even really embraced 4GL languages. Leaky abstractions, you know. We are still stuck with 3GLs and DSLs (which cannot really be considered 4GL languages, because they are not general-purpose).
Nope. They allow you to model the problem in a way that can be expressed on a computer. Not in any particular language, but on a computer in general. In fact, most of them would still be needed if you tried to do the same calculations without a computer, using just pen and paper.
If that sloppiness is so valuable, if it's really a good thing not to know whether you own a structure or merely borrow it, if the fact that a function receives a dictionary just to look a few things up shouldn't be expressed differently from the fact that a function receives a dictionary to modify it, then why do all successful “high-level languages” eventually grow things like type annotations, const (it was added to C by the C committee; it wasn't in K&R C) and Options (like Java's Optional or C++'s std::optional)?
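To make that dictionary example concrete before the answer: here is a minimal Rust sketch (the function names and the catalog are invented for illustration) of how a language that does make the distinction expresses “I only look things up” versus “I modify” right in the signature:

```rust
use std::collections::HashMap;

// Read-only access: the signature alone promises the dictionary is only inspected.
fn lookup_price(catalog: &HashMap<String, u32>, item: &str) -> Option<u32> {
    catalog.get(item).copied()
}

// Mutating access: the signature warns the caller that the dictionary will change.
fn apply_discount(catalog: &mut HashMap<String, u32>, item: &str, percent: u32) {
    if let Some(price) = catalog.get_mut(item) {
        *price = *price * (100 - percent) / 100;
    }
}

fn main() {
    let mut catalog = HashMap::from([("book".to_string(), 200)]);
    assert_eq!(lookup_price(&catalog, "book"), Some(200));
    apply_discount(&mut catalog, "book", 10);
    assert_eq!(lookup_price(&catalog, "book"), Some(180));
}
```

The caller of apply_discount knows, without reading the body or any documentation, that the dictionary may change under them.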
The answer is obvious: the lack of difference between `Box<T>`, `&mut T`, `&T` and `Arc<T>` doesn't make Python easier to use, it just makes it sloppier. It makes it possible to kick the can down the road, so to speak, to push the decision to a later date. Which makes the language a bit easier to learn (not easier to use). That's all.
Because, if you think about it: `Box<T>`, `&mut T`, `&T` and `Arc<T>` all have the exact same representation at the machine level (a single pointer). They even share it with `Option<Box<T>>`. What they are signifying is the programmer's intent:
- `Box<T>` — I own object T. Nobody else has access to it and I can do whatever I want with it.
- `&mut T` — I was lent object T. While it's in my hands I can do whatever I want with it and nobody else will interfere.
- `&T` — I can look at object T. Someone else may be looking too, but since they can't change it while I'm looking… still easy.
- `Arc<T>` — danger… shared ownership… Someone else may touch that object while I'm looking away. I have to read the documentation to know who, when and how.
`Rc<T>` is not that important and, thankfully, can be ignored at first: just use `Arc<T>` everywhere and switch to `Rc<T>` when the program is fully finished (or maybe never, that's OK, too).
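Here is a sketch of both claims above, assuming a 64-bit target where `usize` is pointer-sized (the functions are invented for illustration): all of these types occupy a single machine word, and the difference is purely in what the signatures promise.

```rust
use std::mem::size_of;
use std::sync::Arc;

// Box<T>: I own it outright; consuming it is nobody's business but mine.
fn consume(n: Box<u64>) -> u64 {
    *n // the heap allocation is freed here
}

// &mut T: lent to me exclusively; I may mutate, nobody else can interfere.
fn double(n: &mut u64) {
    *n *= 2;
}

// &T: I may only look; others may be looking too, but nobody can change it.
fn peek(n: &u64) -> u64 {
    *n
}

fn main() {
    // All of these are one pointer at the machine level, and Option<Box<u64>>
    // shares that representation via the niche optimization (None is the
    // null pointer, which a valid Box can never be).
    assert_eq!(size_of::<Box<u64>>(), size_of::<usize>());
    assert_eq!(size_of::<&u64>(), size_of::<usize>());
    assert_eq!(size_of::<&mut u64>(), size_of::<usize>());
    assert_eq!(size_of::<Option<Box<u64>>>(), size_of::<usize>());
    assert_eq!(size_of::<Arc<u64>>(), size_of::<usize>());

    let mut n = Box::new(21);
    double(&mut n);             // exclusive borrow
    assert_eq!(peek(&n), 42);   // shared borrow
    assert_eq!(consume(n), 42); // ownership moves away; `n` is gone now

    // Arc<T>: shared ownership; both handles point at the same allocation.
    let a = Arc::new(7u64);
    let b = Arc::clone(&a);
    assert_eq!(*a, *b);
}
```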
Indeed. That's why we have this proliferation of “sloppy languages”. The end result is, essentially, a vicious cycle:
- Because people can't grasp all the concepts needed to write good code all at once, “sloppy languages”, “languages for non-software engineers”, are invented.
- Then, if these languages become popular, larger and larger programs are written in them, and those requirements (which are still requirements: the fact that the language doesn't enforce them doesn't mean they aren't needed in principle) are understood again. Style guides and linters are invented.
- Eventually someone says “enough is enough” and tries to add something to the language.
Since it's not really possible to add it in a clean way (millions of existing lines of code were written without the rules being enforced, thus you cannot just start enforcing them unconditionally), and the speed of the produced programs is still low, an attempt to create a “low-level” (but in reality just stricter) language is made.
But ultimately it's easier to just use the so-called “low-level” language, because all parties involved have passed the barrier and no longer need “sloppy languages” (the few who created the majority of the problems and steadfastly refused to learn anything get fired, and go tell everyone who will listen that everyone else conspired against them).
- A new generation of developers arrives and we go back to step 1.
> And I think that we should be open to breaking down the language barrier for the "80%".
Why? Yes, it's a serious question.
Software engineers are about 1.5-2% of the US population (and less than that in the rest of the world). If even 5% of people can easily grasp the concepts, then there are more than enough of them, and general-purpose “sloppy languages” are not needed at all.
And for non-software engineers there is no need for these “sloppy languages” in principle: you can give them DSLs, which would be limited but easier to learn and use (precisely because they wouldn't be general-purpose).
Maybe, or maybe not. C++ programmers are special not because they have a special attitude, but simply because so much of Rust is already in C++ (only in Rust the compiler enforces the rules, while in C++ they are enforced by the C++ standard, and not in the form “if you break these rules then your program will fail to compile” but in the form “if you break these rules then your program will turn into a pile of goo”).
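As a minimal illustration of that difference (a toy example of my own, not anything from the thread): the same use-after-move mistake is a compile-time error in Rust, while its C++ analogue compiles silently.

```rust
fn main() {
    let s = String::from("hello");
    let t = s; // ownership moves to `t`; `s` is dead from here on

    // Uncommenting the next line fails to compile:
    //   error[E0382]: borrow of moved value: `s`
    // println!("{s}");

    println!("{t}");

    // The C++ analogue:
    //   std::string s = "hello";
    //   std::string t = std::move(s);
    //   std::cout << s;  // compiles fine; `s` is merely in a "valid but
    //                    // unspecified state", and worse mistakes (dangling
    //                    // references, use-after-free) are undefined
    //                    // behavior rather than compile errors.
}
```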
Haskell, similarly, covers a significant percentage of what Rust does (only from the opposite, functional side: traits, the type system, match, etc.).
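For instance, a toy sketch of my own showing the overlap: Rust's enum plus match correspond to Haskell's algebraic data types plus pattern matching, and traits play the role of type classes.

```rust
// An algebraic data type, like Haskell's `data Shape = Circle Double | Rect Double Double`.
enum Shape {
    Circle(f64),
    Rect(f64, f64),
}

// A trait, like the Haskell type class `class HasArea a where area :: a -> Double`.
trait HasArea {
    fn area(&self) -> f64;
}

impl HasArea for Shape {
    fn area(&self) -> f64 {
        // Exhaustive pattern matching, just as in Haskell.
        match self {
            Shape::Circle(r) => std::f64::consts::PI * r * r,
            Shape::Rect(w, h) => w * h,
        }
    }
}

fn main() {
    let shapes = [Shape::Circle(1.0), Shape::Rect(2.0, 3.0)];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    println!("total area: {total}");
}
```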
Python and Java also teach you some parts of what Rust requires, but because of their “sloppiness” certain concepts are not taught to all Python programmers (they are taught to some, obviously: if most Python programmers hadn't been using Python as a statically typed language, the attempt to bring type annotations to it would have been useless).