Also, provable performance properties are a big plus, which comes from making lazy evaluation optional rather than mandatory. This aspect is explained quite well by Robert Harper:
Btw, Haskell is technically not "lazy", it's "non-strict".
Now, I think he has a good point, but I don't think there's any general consensus within the FP community on which of non-strict/strict is "better". Personally, I don't think there's a "right" answer.
Earlier on in my career, I would have said that "non-strict/strict" should be a part of the type of a term, but after non-trivial experience with O'Caml and Lazy.t, I'm not so sure. I'm definitely sure that it led to an absurd proliferation of incompatible interfaces.
And, as SPJ has opined, laziness forces you to be honest about side effects, which is not a trivial thing!
One thing I find interesting is that our industry is basically a gigantic sea full of Java/C++ programmers.
Within that ocean, there is a small portion of Functional Programmers who have come to the realization that a set of finer-grained abstractions would improve the industry's productivity at large.
Within that small subset, you have a set of people who believe that static type systems aren't ready for widespread use or are too cumbersome for expressive programming in many cases, while the other part of the functional camp is convinced that modern type systems (usually the ML variety) are more than sufficient for expressive programming.
Within the ML static typing camp you have people who adamantly claim that in order for static type systems to be sufficiently expressive, you must have a particular kind of polymorphism, or that this kind of polymorphism must be implicit - not explicit. Or they might form a stance along the lines of strictness vs non-strictness.
Then within the strictness camp, for example, some people will form a stance that your language must be formally specified (SML) as opposed to having only a reference implementation (OCaml).
At the end of the day, we're left with a handful of people who share our exact opinion about what languages would in theory make the industry more productive.
Meanwhile, the enormous sea of industrial engineers are still using Java/C++.
It can end up looking like people debating which particular brand of natural spring water should be given to millions of people who are thirsty in the desert.
I'm not sure I'm saying anything helpful, and I totally understand having a stance on any one of these issues. I'm not accusing anyone of being too focused on these nuances. They are important questions and I'm thankful people way more studied than I am take the time to report their findings on the tradeoffs. But personally, I also try my best not to lose sight of the fact that we are in a giant sea of people who would benefit from exploring virtually any of the functional paradigms/languages.
I won't claim any special knowledge, nor do I have any actual solid research to back up my intuitions.
I can certainly recognize the feeling that O'Caml makes you more productive from when I first discovered it, but that was mostly just because of algebraic datatypes. (And pattern matching, which, while not terribly useful in general circumstances, is hugely useful in practical CRUD-like applications.) Polymorphic variants also made the "pro" list.
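To give that a concrete flavor, here's a toy sketch of the ADT + pattern-matching win, in Haskell rather than O'Caml to match the other examples in this thread (the Request type and handle function are made up for illustration):

data Request
  = Create String      -- payload to insert
  | Read Int           -- id to fetch
  | Update Int String  -- id and new payload
  | Delete Int         -- id to remove

handle :: Request -> String
handle (Create body)     = "insert " ++ body
handle (Read key)        = "fetch " ++ show key
handle (Update key body) = "set " ++ show key ++ " to " ++ body
handle (Delete key)      = "remove " ++ show key

With warnings on, the compiler points at every handler that forgets a case when someone adds a constructor.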
The biggest boost to my productivity I've ever felt(!) came from explicitly separating different types of effects. (Not just "pure vs. impure", but "uses-network" vs. "uses-filesystem" vs. "pure"...) That's why I'm currently sticking with Haskell: it enforces that kind of discipline.
(I'm sure there'll be something better coming along any day now, but...)
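As a rough sketch of what that separation can look like (mtl-style type classes are just one approach; the class names MonadNetwork and MonadFileSystem here are made up, not from any standard library):

class Monad m => MonadNetwork m where
  httpGet :: String -> m String

class Monad m => MonadFileSystem m where
  readDoc :: FilePath -> m String

-- The type advertises exactly which effects may occur:
-- network yes, filesystem no, arbitrary IO no.
fetchWordCount :: MonadNetwork m => String -> m Int
fetchWordCount url = do
  body <- httpGet url
  return (length (words body))

-- IO can be given instances so production code runs these directly.
instance MonadNetwork IO where
  httpGet _url = return "stub response"  -- a real instance would call an HTTP client

A function typed with only MonadFileSystem simply cannot touch the network, and the compiler enforces it.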
Mostly, I hope the problem is just a lack of (appropriate) advocacy and education. There's also an absurd amount of inertia due to sheer entrenched interests/industries.
EDIT: ... or maybe it's a generational thing. After all this kind of thing happens in every other fast moving discipline without terribly rigorous theoretical underpinnings[1], e.g. medicine or biology.
[1] Don't get me wrong, CS is basically math which is unassailable, but we still have no idea how to (reproducibly) produce stable/well-functioning software.
(Note, not being critical of you, just placing this here because I was thinking about it recently)
Maybe because CS is so young, there is a tendency to confuse theory and practice.
Whether a particular language makes one more productive isn't math, it's engineering. When we talk about how monads might allow you to separate concerns and relieve a mental load -- we are talking engineering. When we talk about how monads compose, we are talking about the math/science that enables the engineering. They are both related, just like mechanics is related to mechanical engineering, but they aren't the same and they have different concerns.
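For a concrete instance of the engineering side of that, here's a minimal sketch using the mtl package (the App type and step function are made up): each concern composes with the others without leaking into them.

import Control.Monad.Reader
import Control.Monad.State

-- Read-only config lives in ReaderT, a mutable counter in StateT,
-- and output in the IO base; each layer is a separate concern.
type App = ReaderT String (StateT Int IO)

step :: App ()
step = do
  greeting <- ask
  n <- get
  put (n + 1)
  liftIO (putStrLn (greeting ++ " #" ++ show n))

main :: IO ()
main = evalStateT (runReaderT (step >> step) "hello") 0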
Be wary when you start thinking in terms of "entrenched interests". It's tempting to go down that road, but the reality is that those "entrenched interests" actually have good engineering reasons to be that way[1]. It isn't like a million other programmers haven't noticed that FP-style programming offers some benefits -- but often the benefits end up not out-weighing the drawbacks in the languages, ecosystems, and practical performance and hardware concerns.
This should be obvious, but I think people start muddying the waters -- especially when they focus on the ideological purity of their programming language. Programming languages are tools. The most popular ones are engineering tools -- and there are some not so popular ones that are tools for exploring the math behind the language itself. There are too many tradeoffs to have a language which occupies both spheres successfully.
In particular, the FP advocates go round and round on this issue. Just because something is elegant mathematically, does not mean it's good engineering practice. Haskell, for example, can be practical, but it struggles between the math and the reality of limited machines and human cognition[2]. Likewise, when you start talking about SML vs OCaml, you're talking engineering, not math -- and possibly a language tailored to engineering vs math.
You see this in languages like C++ too, where practicality starts giving way to a kind of semi-mathematical, yet totally non-scientific, dogma about how programs should be constructed based on their respective committee-designed[3] standard libraries.
[1] not always, but more than people really give others credit for.
[2] I still maintain Haskell is a write-only language, like an opposing pole to perl. Not (completely) because of the language itself, but because of the culture surrounding it which glorifies one-liner lambda calculus/laziness tricks over engineering pragmatics.
[3] e.g. compromises made in absence of any real on-the-ground engineering constraints.
I won't address all your points, but I think [1] deserves special attention: I think we can all agree that if you want to write software that will let you land a small vehicle on Mars, then you don't want/need the opinion of a theoretician, you just need $1B and a team of extremely disciplined programmers who will ADHERE TO PROCESS. Then you impose so much process that they either leave or prevail. What we're speculating about here (at least I think we are?) is whether this is a sustainable model for general development and whether we can do better. Even if we can't get better runtimes from FP languages, could we perhaps make programs which generate better C programs than those elite programmers who were chosen for this particular mission? (I think we can. It has very little to do with humans, but a lot to do with the fact that programs are very meta, in that we can create programs that generate programs that generate programs ad infinitum. If we can get our specifications right, the rest becomes trivial.)
A Mars lander program director explicitly said that he chose C because it was what he was familiar with. (I'll edit and post a link if I can find the video.) Just for context: for him it wasn't so much about language, it was more about process (6 different industrial-strength linters, etc.).
> Just because something is elegant mathematically, does not mean it's good engineering practice. Haskell, for example, can be practical, but it struggles between the math and the reality of limited machines and human cognition[2]. Likewise, when you start talking about SML vs OCaml, you're talking engineering, not math -- and possibly a language tailored to engineering vs math.
That's the thing I would dispute, but it's hard to convince people who aren't already drinking the Kool-Aid, as it were. Compared to compiler-assisted reasoning about side-effects, the difference between SML and O'Caml is completely trivial.
Your [2] is just absurd :). Clearly, you don't have to understand the body/implementation of a function, just its type. :)
More seriously, I'd be interested if there's a particular experience that soured you on FP (or perhaps Haskell, in particular)...?
> What we're speculating about here (at least I think we are?) is if this is a sustainable model for general development and if we can do better.
Here is the kind of difficulty I'm talking about. Instead of thinking about this on a continuum from Aeronautics & medical (where people could die) to yet another throwaway web TODO app (e.g. "general dev"), there is this tendency to say that they are "completely different" in some way. Let's be clear, the theories do help both. Side-effect free programming is useful. But you cannot take theory and just map it directly to the real world with no caveats. Just because Haskell tracks side-effects, doesn't make it superior to C in every context.
The Mars Rover shows this clearly -- C was chosen because someone was used to it, yes. But the part you didn't catch was the implicit decision that it makes no sense to use a compiler and ecosystem whose caveats you don't understand, when you need to understand all the caveats to build a successful system.
Similarly, the TODO app developer isn't using Haskell because Haskell doesn't have anywhere near the libraries Python or even Go does, and building and deploying Haskell programs is somewhat of a chore compared to those. It's no contest, Go is superior to Haskell. :-) It's also one of the reasons almost anything is superior to C/C++ in this very same domain. :-) It's not about being "entrenched", because Go is way, way younger than Haskell -- it's about the priorities the language designers and developers for that language have -- i.e. the culture.
> If we can get our specifications right, the rest becomes trivial.
This is what I mean about theory and practice. Note the big "if" there. I totally understand the sentiment, and I wish it were true. I even have my own meta-programming based language in the wings I'd like to release some day.
But the reason I've stalled a bit is I've never seen this work in practice because our minds (and specs) tend to paper over the devilish details. I'm not saying we shouldn't use meta-programming, I'm saying that it should not become dogma. Invariably, you get caught up in details. This kind of domain specific "meta-programming" is a great bootstrap technique, but doesn't appear to be "the way" programs should be written.
The OMeta folks created a TCP stack that compiles (almost) directly from the specification. But that was an academic exercise. How many special cases do you think they cover? Does anyone really believe that the stack in question is anything but a way to bootstrap a more robust/performant implementation later on?
Similar for the PyPy folks. Yes, you can JIT compile a python interpreter, but how many years have they worked on special cases for that, and how much farther would they have gone if they hadn't used the meta-circular approach and just addressed the real problem to begin with[1]?
> Compared to compiler-assisted reasoning about side-effects, the difference between SML and O'Caml is completely trivial.
I think as a general statement, this is true, but is a poor way of thinking about things. You're not comparing OCaml to SML, you're comparing it to C++ or Java. If all the language brings is side-effect reasoning, it's not enough, because I can add decorators to C++ code to do what you're asking.
Even with "side-effect handling" these languages don't assist you with all of the side-effects someone actually cares about. Is there a decoration for runtime speed, for memory consumption, not just for IO, but for the amount of IO? Is it even predictable? These are the real things people care about, not just whether a function peers into some global state somewhere (although that is an important thing to track, it isn't the biggest issue, IMO).
> Your [2] is just absurd :). Clearly, you don't have to understand the body/implementation of a function, just its type :)
Is this sarcastic? :-) I mean, anyone with a reasonable amount of experience knows that this is not true most of the time.
It's not completely false -- one doesn't always have to look at the implementation of the operating system facilities or even the standard library. But for code that is anything more than tangential to what you're working on, you certainly do end up having to understand how it's implemented. As a developer, you spend more time reading code others wrote than writing it.
> More seriously, I'd be interested if there's a particular experience that soured you on FP (or perhaps Haskell, in particular)...?
I'm not soured on FP as a concept (which could mean many things, but I just mean state-awareness), just Haskell. The <space> fn call operator combined with currying in particular seems like an advancement to rubyists[2] and those who think succinctness is a virtue above all others[4], but it is an engineering disaster -- completely unreadable at the call site unless you know the arity of every function by heart.[3]
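To make the complaint concrete (a toy example, nothing from a real codebase):

-- Reading this call site, nothing marks whether `zipWith (+)` is a
-- finished value or still waiting for two more arguments; you have
-- to know by heart that zipWith has arity three.
pairSums :: [Int] -> [Int] -> [Int]
pairSums = zipWith (+)

Multiply that by every point-free definition in a module and the reader ends up doing arity bookkeeping instead of reading.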
But maybe I've just read bad code... I'm not dismissing that possibility. :-)
[1] The problem with Python was never optimizing plain Python (look at JS as an example) or the "GIL"; the problem was (stupid) performance requirements about the GIL from Guido, and later on, C extension API compatibility.
[2] Do not get me started on the "domain specific language" shit-fest that Ruby (and descendants like Groovy/Gradle) have unleashed on the world. At least Stack Overflow gets money and page views from it, I guess.
[3] As much as people like to put down smalltalk (cum obj-c) keyword argument syntax, it is a revelation when you're maintaining code (and I'm sorry Swift seems likely to drop it as a default).
[4] Is Arc (PG's lisp) used anywhere significant, but for this website?
I agree especially about the Smalltalk-like keyword syntax. People not used to it probably can't appreciate the clarity it brings to code. You understand the code better - and faster - because you can tell from each calling site what each method call does: it can indicate the meaning of each argument clearly and concisely enough that you actually start using it for that purpose. So to understand what a piece of code does, you mostly don't have to look up the definitions of the methods being called.
It takes longer to write but makes reading/understanding much faster. When you write the code, it is obvious in any language what your calls "mean", but not so obvious 3 months later.
Writing without keyword syntax is as if all our emails just referred to "him" and "her" and "they" and "there", never mentioning the proper nouns of who or what or where we are actually talking about. Everybody understands what we are saying at the time those emails are written, but for another person - or for you yourself later - trying to work out what a specific email is actually saying would be rather difficult. Relying on the POSITION of an argument, rather than its name, in the calling context is like saying "that argument which is the 3rd".
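For what it's worth, you can approximate the keyword style even in Haskell with a record of named arguments (a toy sketch; CopyArgs and copyTo are made-up names):

data CopyArgs = CopyArgs
  { source      :: FilePath
  , destination :: FilePath
  , overwrite   :: Bool
  }

copyTo :: CopyArgs -> IO ()
copyTo args = putStrLn ("copy " ++ source args ++ " -> " ++ destination args)

main :: IO ()
main =
  -- The positional rendering would be: copy "a.txt" "b.txt" True
  -- (which path is which, and what does True mean?)
  copyTo CopyArgs { source = "a.txt", destination = "b.txt", overwrite = False }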
When your language is lazy by default, is it really a trick to take advantage of that?
Do you consider this a trick?
take 1 [5..]
How about this Fibonacci definition?
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
What about this extensible fizzbuzz example?
fizzBuzz i = if null desc then show i else desc
  where desc = concat [label | (j,label) <- tags, 0 == rem i j]
        tags = [ (3,"Fizz"), (5,"Buzz"), (7,"Baz") ]

main = mapM_ (putStrLn . fizzBuzz) [1..120]
Focus on why you are trying to accomplish something, not how you are accomplishing it.
Yes, these are tricks, because "lazy" means you're storing thunks and significant partial results with no outward indication of such. Just because you don't see it, doesn't mean it's not there. I could say the same about C's allowance of global state, or spurious use of recursion without a depth guard.
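A classic concrete case of that hidden storage, using only standard Prelude and Data.List functions:

import Data.List (foldl')

lazySum, strictSum :: Integer
lazySum   = foldl  (+) 0 [1 .. 10^7]  -- builds ~10^7 chained (+) thunks before forcing any
strictSum = foldl' (+) 0 [1 .. 10^7]  -- strict accumulator, constant space

Nothing at the call site of foldl tells you that the accumulator is a growing chain of suspended additions.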
To your examples:
A trick, because who would ask for a constant from an infinite list that could be generated numerically in a fraction of the time?
A trick, because for random queries of large n, this is not the optimal way to compute it (the optimal way is more complex). It also entangles memory allocation and the computation. Note how Haskell is "side-effect free" for fibs, yet somehow memory allocation, or the time required for allocation, is not considered a side-effect when I ask for "fibs !! 2000000". The C iterative solution is trivial, and will execute faster (see the sketch after these points for the same idea).
A trick, because no one cares about fizz-buzz being extensible, and this is not particularly clear to the reader compared to an ordinary if/switch/case expression.
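Here's the sketch I mentioned: the iterative alternative written in Haskell itself, with strict accumulators instead of a shared lazy list (fibIter is my own name; BangPatterns is a GHC extension):

{-# LANGUAGE BangPatterns #-}

fibIter :: Int -> Integer
fibIter n = go n 0 1
  where
    go 0 a _   = a
    go k !a !b = go (k - 1) b (a + b)  -- no shared list, no retained prefix

Asking for fibIter 2000000 keeps only the two accumulators live, unlike indexing into the memoized fibs list.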
Can I please quote your "debating which particular brand of natural spring water should be given to millions of people who are thirsty in the desert" when I get a chance :-)? It is a great summary of the problem with functional programming.
As a person with academic background interested in real-world FP (using F# in my case), this is exactly why I'm not very enthusiastic about most functional programming papers that appear in conferences like ICFP - they might be solving fun problems, but I'm not convinced they are problems that actually matter if we're going to ignore the main issue.
No need to ask permission to quote me, but thanks for asking anyways. I definitely agree that shifting focus towards industrial use cases is what we need.
I guess it's more difficult than just shifting the focus towards industrial use cases. Industrial use cases are a great thing, but they are something that the industry has to provide :-) (we tried to do something like that for F# here: http://manning.com/petricek2 (sorry for a shameless plug!)).
But many people contributing to functional programming (in one way or another) are in academia. They are not the best people to contribute industrial case studies - but I think there is still a lot to be done there too! The nice thing about the FRP paper is that it is really just a fun (and very simple and somewhat impractical) example, but it is nice inspiration showing (what was back then) a novel use of functional programming. Some of the more recent academic work around FP lacks this kind of creativity...
I agree that this was the situation a while ago. But, recently, I've seen very encouraging signs that the wider programming community is waking up to the value of FP. Languages like Scala and Clojure, whatever else you think about them, will hopefully act as gateway drugs.
Hi from "the sea".
I think the Functional Programmers crowd overestimates how much better languages can really improve software development.
Projects in the industry would be massively improved by: more focus on quality (eg. decoupling), technical debt awareness, enhanced communication, enhanced architecture, internal engineer mobility, more thoughts given on social dynamics and productive work environments.
It pains me to say this but supposedly "bad" languages like C++/Java (which are actually really well done) are not the bottleneck for software development. It is only a distant factor among many that lead to software being the permanent tragedy it is in our era.
Agreed mostly, but I beg to differ with your conclusion that this makes the choice of language insignificant.
The point of modern languages (call them functional or not) is to eradicate whole classes of bugs. Also, it is about making it simple to define good interfaces between components.
So at least your points "focus on quality" and "enhanced architecture" are directly influenced by the programming language. Of course you should train your people to focus on quality, but you also shouldn't make it too hard to get things right in the first place. Of course you should train your people to develop a sense for good architectures, but you should also provide a formal language that makes it more natural to express their architectural and design decisions.
That way, your training can focus 100% on the real issues, rather than 5% on the real issues and 95% on how to apply them in your programming language, as you need lots of workarounds, wrapper classes and so on. And you don't only have to write them - others also have to read that bunch of mess, and reduce it to the "real point" in their heads. This may be considered a nice mental exercise, but in the end, it's just boring, prone to errors/misunderstandings, and finally a waste of time.
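A tiny example of the "eradicate whole classes of bugs" point, using the standard Data.Map (lookupAge is a made-up name):

import qualified Data.Map as Map

lookupAge :: Map.Map String Int -> String -> String
lookupAge ages name =
  case Map.lookup name ages of   -- lookup returns Maybe Int, not Int-or-null
    Nothing  -> name ++ ": unknown"
    Just age -> name ++ ": " ++ show age

There is no null to forget: the missing-key case is a constructor the compiler can force you to handle, so that whole class of null-dereference bugs disappears from the interface.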
I think all three of these can be improved and alleviated somewhat by better languages.
Some languages most certainly have cultures that promote properly taking care of all of these as central principles.
It's a bit shortsighted to think that since projects are hard for reasons other than language choices (too), we should just ignore languages altogether.
At some point that'd just mean you end up realizing you could be even better with better tools, so why not start now even though everyone hates each other and can't talk to one another?
https://existentialtype.wordpress.com/2012/08/26/yet-another...
(although he doesn't mention OCaml, and probably has other, even better future languages, in mind)