You are again demonstrating what I am guessing is a lack of real
experience with Haskell, and therefore you don't know the difference
between theory and practice. I have written a lot of code in all the
languages you mention (and many you haven't -- I've been doing this for a
very long time) and none of the theory you spout above was remotely a
consideration. It's for academicians to debate and write papers about, but
it doesn't come up in the real world.
Please, I wrote a respectful post. There is no need to introduce dubious
ad hominem assumptions (in a frankly condescending and somewhat arrogant
tone). I've also coded in many languages over a roughly 30-year career.
Haskell does invert the type system from inductive to coinductive, and
this introduces advantages and disadvantages. I won't go into all the
details further here (a link was already provided to Robert Harper's blog
post). I was merely trying to point out that reasoning about Haskell down
to the "nuts and bolts" requires understanding many other layers such as
monads (category theory), memoization, lazy evaluation, coinductive
types, etc. It is high-level in the sense that it is a very powerful
semantics built with
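To make the laziness/coinduction point concrete, here is a minimal sketch (my illustration, not from the linked post): Haskell's lazy evaluation lets you define a stream as if it were infinite, and only the demanded prefix is ever computed. This is the everyday face of working with coinductive-style data.

```haskell
-- An "infinite" list of Fibonacci numbers. Lazy evaluation means
-- this definition terminates: each element is computed only on demand.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- prints [0,1,1,2,3,5,8,13,21,34]
```

Reasoning precisely about when and how much of `fibs` is evaluated (and memoized by sharing) is exactly the kind of extra layer being referred to above.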
I made the "guess" about your Haskell experience (not an assertion) because
you considerably overstated, in my opinion, what a programmer needs to know
in order to use Haskell to build real-world applications. And you continue
to do so above. Programmers don't "reason" about languages and programs.
They don't generate correctness proofs. I would venture another guess that
if you used the word "coinductive" to the vast majority of them, the
response would be "huh?". They write code, test it, release it, etc. They
do it by learning the subset of the language they need to get their job
done. This is precisely what I have done with Haskell, and it has been
enormously useful to me.
Because of the power of today's hardware, we can do a lot of useful things
even with interpreted languages, and with compiled languages that deliver
less-than-maximal performance, like Haskell and Scheme.
This formerly valid point is weakened, or at least mitigated, as mobile
eats the desktop and battery life also depends on performance.
Nevertheless, the cost of the programming effort remains a variable in
the equation, so there is a balance which varies for different use cases.
I agree. That doesn't change the main point at all, which is to use the
right tool for the job, and by "right", I mean minimize development and
maintenance cost. Driving a Ferrari to the supermarket doesn't make a lot
of sense.
But you pay a coding-time price for their use and I don't think that will
ever go away.
This is why I argue the low-level languages should be used after profiling
the code and knowing where the performance bottlenecks are.
I couldn't agree more. A good deal of the early part of my career was based
on not fixing things until you knew for sure how they were broken.
Because, for example, modeling memory allocation (lifetimes and scopes) in
the type system infects the type system everywhere (e.g. even 'a in
generics, apparently), so the code base presumably gets more unmanageable
over time as there is an exponential explosion of invariants. And because
unsafe can propagate from any untyped code (the entropy is unbounded since
it is a Turing-complete machine, i.e. not 100% dependently typed), the
notion of a completely typed program is a foolish goal. It is about
tradeoffs and fencing off areas of maximum concern. The human mind still
needs to be involved.
So please don't try to hold Haskell up as a high-level solution whose
only tradeoff is performance. The reality isn't that simple.
Read carefully please. I never said that coding ease vs. performance was
THE tradeoff. It is A tradeoff. There are certainly other considerations
that good programmers must make when choosing their tools for a particular
project.
Edit: for example (just so you don't think I am BSing), Haskell can't
offer first-class disjunctions without forsaking the global inference that
provides some of Haskell's elegance. Without first-class disjunctions
(which Rust also doesn't have), composition via ad hoc polymorphism is
somewhat crippled. I suspect some of the need for higher-kinded types
could be avoided if Rust had first-class unions, which Rust could, I
presume, implement, because it doesn't have global inference.
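For readers unfamiliar with the term, a brief sketch of what Haskell offers instead (my illustration of the point, not a claim from either poster): Haskell's sums are tagged, e.g. `Either Int String`, so a value must be explicitly wrapped in a constructor. An untagged, first-class union like `Int | String` is not expressible directly in the type system.

```haskell
-- Haskell's disjunction is a tagged sum: Left/Right constructors
-- are required to inject values, and pattern matching to project them.
describe :: Either Int String -> String
describe (Left n)  = "an Int: " ++ show n
describe (Right s) = "a String: " ++ s

main :: IO ()
main = do
  putStrLn (describe (Left 42))       -- prints: an Int: 42
  putStrLn (describe (Right "hello")) -- prints: a String: hello
```

A bare `42` or `"hello"` cannot be passed where an untagged `Int | String` would be expected; the explicit tagging is part of what keeps Haskell's global inference decidable.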
I don't doubt at all that what you say above is true. What I doubt is its
immediate relevance to the real world. We have examples galore of people
building useful software in theoretically unremarkable languages.
Haskell and, I'm sure, Rust, would be ranked far higher by programming
language connoisseurs than any of the examples I gave. And yet those
example languages have been used to create an enormous amount of useful
software. Should we use the kind of theoretical considerations that you
seem to revel in in the design of future languages? Of course. But I don't
think it terribly relevant to making a programming-environment choice in
the world as it is today.