I know that OCaml can be really good for writing fast compilers. The Haxe language is written in OCaml and its compiler is ridiculously fast.
Wasn't the first Rust compiler written in OCaml?
Does not really answer my question though. That goes from the obscurity of OCaml to something nobody has ever heard of, Haxe.
Also interesting. Never heard of either of them. Certainly calls for investigation...
Thanks for the courses. I'll take a look.
Well, here's the list of top Haskell projects on GitHub:
Tons of stuff is written in Scala. High profile cases include Akka and Apache Spark and related stuff.
A company I work for is considering the purchase of a very expensive financial system, which I recently discovered is written in Scala.
You've asked a loaded question, and I hope you don't mind a ridiculously expansive answer, but to really understand what people mean by Functional Programming, you have to go back to the very origins of computers themselves.
You may have heard of the Church-Turing thesis, which is the origin of the math that underlies computers. Without getting too much into the weeds, before the Church-Turing thesis there was a notion of whether something was "effectively calculable", meaning: could someone (a person, in this case) calculate a value using pen-and-paper methods? An example of something effectively calculable is 2 + 2, while an example of something not effectively calculable is the exact numerical value of pi. The notion of algorithms (repetitive mathematical steps to arrive at a final value) goes back much further than this, and before computers were machines, they were people (often women), used in situations like calculating the best location and angle for artillery emplacements via algorithms.
Separately, but concurrently, Alonzo Church and Alan Turing attempted to formally define what was and was not effectively calculable (there were also efforts by Kurt Gödel, but he's less relevant to your particular question). Alonzo Church created the lambda calculus, a mathematical notation of pure functions that allowed one to compute any value whose computation actually terminated.
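As an aside, the lambda calculus is concrete enough to play with today. Here is a minimal Haskell sketch (an illustration of my own, not from the thread): Church numerals represent numbers as pure functions, exactly the kind of object Church's system manipulates.

```haskell
-- Church numerals: the number n is encoded as "apply f to x, n times".
-- Everything here is a pure function, in the spirit of the lambda calculus.
type Church a = (a -> a) -> a -> a

zero :: Church a
zero _ x = x

suc :: Church a -> Church a
suc n f x = f (n f x)

add :: Church a -> Church a -> Church a
add m n f x = m f (n f x)

-- Convert back to an ordinary Int so we can observe the result.
toInt :: Church Int -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add two two))  -- 2 + 2, Church-style: prints 4
  where two = suc (suc zero)
```

Addition here is nothing more than composing function applications, which is the whole point: computation as pure symbol shuffling.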
Alan Turing, on the other hand, approached the problem from a mechanical perspective, inventing an imaginary machine that utilized infinite tape segments. For simplicity's sake I'll paraphrase the notion (here's the original paper -- it is worth a read), but each of these tape segments contained a single instruction of an algorithmic step and a "goto" giving the location of the next tape segment, which contains the next algorithmic step. Given a machine containing the algorithm's steps on its tape strip and an initial input, a user of this machine would receive one instruction and one goto at a time, and by following the instructions at each segment, would eventually arrive at a final output.
What makes the Church-Turing thesis so significant is that both Turing's machine and Church's language were interchangeable -- anything one method could compute, so could the other.
What does this have to do with your question? Well, Church's lambda calculus is the first Functional Programming language (and many an academic has built a lambda calculus interpreter / compiler, so if you were insane enough, you could write code in it). But history favored Turing's approach, because while an infinite strip of tape is implausible, other mechanisms of storage are not, and the rise of Von Neumann architecture allowed the creation of something basically indistinguishable from a Turing machine that could be built in real life.
The split between imperative and functional programming is the split between mechanics and mathematics respectively. Nearly 100 years out from the Church-Turing thesis it becomes easy to think of Functional Programming as this weird academic thing that insists your code is "impure", but it's actually a lot deeper than that.
When people today talk about Functional Programming as involving Higher-Kinded Types and Generalized Algebraic Data Types, they are talking about the latest generation of Functional Programming, which has emerged from cutting-edge mathematics (particularly Proof Theory, Category Theory, and Type Theory, a trio sometimes referred to as the Computational Trinity). A great example is linear logic, which informs the formal basis of Rust's ownership system. What I've noticed happening is that a fringe research language will prove out a feature, then shortly thereafter mainstream languages adopt it -- for example, anonymous functions are now permitted in Java and C++.
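To make the anonymous-function point concrete, here is a tiny Haskell sketch (illustration only); the same lambdas now appear almost verbatim in Java 8+ and C++11.

```haskell
main :: IO ()
main = do
  -- \x -> ... is an anonymous function; map and filter are higher-order
  -- functions that take other functions as arguments.
  print (map (\x -> x * x) [1, 2, 3, 4 :: Int])          -- [1,4,9,16]
  print (filter (\x -> x `mod` 2 == 0) [1 .. 10 :: Int]) -- [2,4,6,8,10]
```

This pairing of lambdas with higher-order functions is exactly what migrated into the mainstream as `Stream.map` in Java and lambdas plus `<algorithm>` in C++.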
In this way, everyone eventually learns portions of Functional Programming so long as they are writing in something more abstract than assembly. The advantage of learning a strange looking "functional" programming language is that you are likely to learn techniques that are applicable to other languages, or techniques that will likely become commonplace 1-5 years in the future.
The reason Functional Programming is so desirable is that, because it's grounded in math, it has provable properties, and things like compiler optimizations, concurrency, or code generation become easier the more techniques you apply.
When you are limited to a mechanical approach, which I think is best typified (see what I did there?) by C and Assembly, solutions to a given problem tend not to compose, aren't reusable, and can't be proven to work in the face of all edge cases. Things grounded in mathematics are often the reverse.
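The composability claim can be seen in miniature (a hedged Haskell sketch with names of my own choosing): pure functions chain with `(.)`, and the pipeline's behavior follows directly from the behavior of its parts.

```haskell
import Data.Char (isAlpha, toLower)

-- Because each function's output depends only on its input, small pieces
-- compose into a pipeline with no hidden state to interact at the seams.
normalize :: String -> String
normalize = map toLower . filter isAlpha

main :: IO ()
main = putStrLn (normalize "Hello, World!")  -- prints "helloworld"
```

Swapping, reordering, or reusing the stages is safe precisely because each stage is a self-contained function.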
TL;DR I know there aren't any killer apps written in Idris, but I bet you've used an anonymous function or a generic type parameter, and thus have benefited from Functional Programming.
Yep, functional is great for compilers. Rust was originally written in OCaml. F# is also a joy to work with and I recommend this book to build functional skills:
But yes, I still love C, too. I won't choose it ever again, though, because Rust is a much better C. I can have allocation-free borrowing line iterators today, no higher kinded types required. I just have to call them what they are: "unsafe". This word is not so bad. It just means "as unsafe as other languages". We strive for correctness. We can get there, with learning. And I do think functional programming is well worth understanding. Enjoy the process.
If you want to learn Haskell, I agree with @ChechyLevas 's suggestion of Learn You A Haskell as a starting point, but if monads and lazy evaluation are too much to learn at once (they were for me), I suggest How to Design Programs, which is written in Racket, a descendant of Lisp, the first widely used functional programming language. It's extremely beginner friendly, well written, and very fun.
Alternatively, the first functional programming language I learned was Elixir, and I did so through this online guide.
Your answer is more than I expected, thank you very much.
And thanks to all for your help
Sadly, due to the slow migration to Wayland and how Xmonad works and is written, it's practically impossible to make it work under Wayland. There is an alternative, Waymonad, but it's been dead for a while.
Thus, in the long term, you can forget about this piece of software as an example of a successful and practical Haskell project...
That's a really bad example, since the only thing preventing the exact value of pi from being calculable is that it's infinitely long. In that sense, it does not even make sense to ask whether that value is calculable or not. Calculating any digit of pi you ask for is very well possible.
Actually, 2+2 is also a terrible example of something computable. Computability is a property of sets or functions, not of values. If 2+2=4 is something you already know, you don't need to compute anything to get this result. A program "computing" 2+2 could just return "4" directly without taking any sum. On the other hand "adding two natural numbers" is a function that is computable (it's the "+" function), and using the "+" function you can also determine that 2+2=4.
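The function-vs-value distinction can be made concrete (a Haskell sketch of my own, not part of the original comment): the computable thing is `plus`, defined by structural recursion; 4 is just the value that 2 + 2 reduces to.

```haskell
-- Peano naturals: "adding two natural numbers" is a computable function,
-- while a particular value like 4 needs no computation at all.
data Nat = Zero | Succ Nat deriving (Eq, Show)

plus :: Nat -> Nat -> Nat
plus Zero     n = n
plus (Succ m) n = Succ (plus m n)

toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

main :: IO ()
main = print (toInt (plus two two))  -- 2 + 2 reduces to 4
  where two = Succ (Succ Zero)
```

A program that simply returned `4` would be extensionally indistinguishable on this input; computability is about the existence of a procedure for the whole function, not about any single answer.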
That's an incorrect description of Turing machines. The instructions are not on the tape; instead, the tape is intermediate storage used by the program, as well as for input and potentially output. This storage initially contains the encoded input to the program, and the rest of it is empty. The instructions instead form a finite graph / state machine and exist independently of the tape. Of course, Turing machine instructions can still be encoded in some way to be put on a machine's tape, but this is only relevant in the context of something like a universal Turing machine or e.g. the halting problem,
where Turing machines take other Turing machines as input. (The halting problem is of course also one of the most straightforward things that is actually not computable, linking back to my first paragraph above.)
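The corrected picture (a finite control separate from the tape) fits in a few lines of Haskell. This is a toy machine of my own devising, not from the comment: the program is just a transition function over states and symbols, while the tape is only storage. It flips every bit until it reads a blank, then halts.

```haskell
-- The "program" is this finite transition function; it never lives on the tape.
data Move = L | R
type State = Int
type Tape  = (String, Char, String)  -- (left of head, head, right of head)

step :: (State, Char) -> Maybe (State, Char, Move)
step (0, '0') = Just (0, '1', R)
step (0, '1') = Just (0, '0', R)
step _        = Nothing              -- blank symbol: halt

-- The tape is pure storage, here modeled as a zipper; ' ' is the blank.
move :: Move -> Tape -> Tape
move R (ls, c, [])     = (c : ls, ' ', [])
move R (ls, c, r : rs) = (c : ls, r, rs)
move L ([], c, rs)     = ([], ' ', c : rs)
move L (l : ls, c, rs) = (ls, l, c : rs)

run :: State -> Tape -> Tape
run q t@(ls, c, rs) = case step (q, c) of
  Nothing           -> t
  Just (q', c', mv) -> run q' (move mv (ls, c', rs))

main :: IO ()
main = let (ls, _, _) = run 0 ([], '1', "010")
       in putStrLn (reverse ls)  -- flips "1010" to "0101"
```

Note how `step` is a fixed, finite table, while the tape grows on demand; encoding `step` itself onto a tape is exactly the extra move a universal Turing machine makes.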
I feel like you are making the time frame look shorter than it is. I agree with the overall observation, but it can be quite a bit longer. The example of anonymous functions you give: those have been around in some languages for decades (I would need to look this up for exact dates), while Java adopted them only fairly recently.
Also, saying that proven features of functional (or other) languages likely become commonplace within up to 5 years is IMO wrong. For example, I still haven't seen monads being properly usable/used in "mainstream" languages. And in general, if becoming commonplace to you means getting into common multi-paradigm languages: that will only happen to features that are less extreme and play nicely with other paradigms. More extreme features can still be very much proven and valuable features of a language, though.
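For readers wondering what "properly usable monads" refers to, a small Haskell sketch (function names of my own choosing): the Maybe monad threads failure through a pipeline without an explicit null check at every step, which is the ergonomics mainstream languages have struggled to replicate.

```haskell
import Text.Read (readMaybe)

-- half succeeds only on even numbers.
half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- do-notation sequences the steps; any Nothing short-circuits the rest.
parseAndHalveTwice :: String -> Maybe Int
parseAndHalveTwice s = do
  n <- readMaybe s  -- fails on non-numbers
  m <- half n       -- fails on odd numbers
  half m

main :: IO ()
main = do
  print (parseAndHalveTwice "12")   -- Just 3
  print (parseAndHalveTwice "abc")  -- Nothing
```

The same shape works unchanged for other monads (lists, state, IO), which is the generality that is hard to retrofit onto languages without higher-kinded types.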
Edit: Regarding the last point, I think you may have not actually meant that all "relevant" features of functional languages quickly become commonplace, but on the first read, at least to someone unfamiliar with functional programming it may IMO very well be interpreted like that.
Most Elm tutorials I found have a little issue: they focus on web development.
Are there any good videos/books/tutorials that focus more on algorithms and other general topics for learning FP, and use the web page more like terminal output?
The Coursera course is a bit slow paced. I have to wait until I can continue with the next part. Maybe I missed something?
Maybe to make it more clear: I'm looking now more for practical "how to use FP" instead of theoretical "what is FP".
In the meantime I don't care about the language. Could be Scala, Clojure, F# or, if it would help, JS, Java or Python. But I would prefer a (mostly) functional language over an imperative language with FP functions added later.
Something like a step by step tutorial how to solve an algorithm or any other task.
Sorry for the monologue, but I think I have found a tutorial that is exactly what I am looking for.
It describes everything step by step and the result is something usable and not only a theoretical construct. I will rewrite the code in F# and later I will port the code to Rust and maybe Elm.
The best book I know about FP is "a type of programming".
I read a lot of books about FP, and some dig quite deep into theory but leave you with no idea how to use it; on the other hand, some are very implementation oriented, but it's quite hard to get a grasp of why things are structured the way they are.
A type of programming is really about the FP mindset (you'll learn Haskell along the way, though)
Is there a book on using FP on a resource constrained machine, like an AVR or other micro-controller? Something where memory is in short supply, any kind of run-time and garbage collector is likely off the table. Where timing determinism is important.
If so, you have my attention. If not then an FP language is not practically useful.
Which is not to say that important lessons cannot be learned from getting familiar with FP in other languages. I just wish they were not so cryptic. When I look at a page of Haskell I cannot recognize anything resembling a program in it.
I don't know any books, but one of my goals is to write software for AVR, ARM and maybe RISC-V in Rust, and of course in functional style. I'm curious myself if this works well.
I'm pretty sure most Haskellers would say the same thing about super low-level code.