This doesn’t appear to be correct given that you can always transform functional programs into imperative programs and vice versa.
The relevant difference is in the isolation and explicit formulation of side effects, which encourages writing more pieces of code whose behavior can be understood precisely in most situations. The toolset of functional programming is usually better for writing higher-order code that keeps the sources of side effects abstract, so that the effects can be supplied separately, without affecting the rest of the code. As a result, a lot of code can have well-defined behavior that isn't disrupted by the context in which it's used.
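A minimal sketch of this discipline in Python (all names here are hypothetical, for illustration only): the core logic takes its side effect as a parameter, so the caller decides whether it does real I/O or nothing observable at all.

```python
from typing import Callable, Iterable

def summarize(numbers: Iterable[int], report: Callable[[str], None]) -> int:
    """Core logic: the only side effect is whatever `report` does,
    and that is supplied by the caller, not baked in."""
    total = 0
    for n in numbers:
        total += n
        report(f"running total: {total}")
    return total

# Production use would plug in a real effect, e.g. summarize(data, print).
# For reasoning or testing, plug in a pure stand-in: behavior is then
# fully determined by the inputs, regardless of surrounding context.
log: list[str] = []
result = summarize([1, 2, 3], log.append)
assert result == 6
assert log == ["running total: 1", "running total: 3", "running total: 6"]
```

The point is not the specific helper but the shape: the effectful dependency is abstract in the higher-order code and reattached separately at the call site.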
This works even without types, but with types the discipline can be followed more systematically, and sometimes enforced. It also becomes possible to offload some of the checking work to a compiler (even when the behavior of a piece of code is well-defined and possible to understand precisely, there is the additional step of making sure it's used appropriately).
I’ve never heard that you can program in functional languages without doing testing and relying only on type checking to ensure correct behavior.
See "Why Haskell just works". It's obviously not magic; the point is that enough errors can be ruled out by exploiting types and relying on sparing use of side effects to make a difference in practice. This doesn't ensure correct behavior (for example, Haskell programs can always enter an infinite loop while promising to eventually produce a value of any type, and Standard ML programs can use side effects that won't be reflected in their types). It's just a step in the right direction when correctness is a priority. There is also the prospect that more steps in this direction might eventually get you closer to correctness.
I used to love functional programming and the elegance of e.g. Haskell, until I realized functional programming has the philosophy exactly backwards. You want to make it easy for humans and hard for machines, not vice versa.
Humans think causally, e.g. imperatively and statefully. When humans debug functional/lazy programs, they generally smuggle in stateful/causal thinking to make progress. This is a sign something is going wrong with the philosophy.
Hm. The primary reason I got interested in FP is that I really like SQL; I think it is very easy for the human mind. And LINQ is built on top of functional programming, and the Gigamonkeys book builds a similar query language on top of functional programming and macros, so it seems perhaps FP should be used that way, taking it as far as possible towards building query languages in it.
But I guess it always depends on what you want to do. My philosophy of programming is automation-based: if I need to do something once, I do it by hand; if a thousand times, I write code. This ability to repeat operations many times is what makes automating human work possible, and from it I derived that the most important imperative structure is the loop. The loop is what turns a mere set of rules into powerful data-processing machinery, doing an operation many more times than I care to do it by hand. With SQL, LINQ and other query languages, we are essentially optimizing away the loop as such. For example, the generator expression in Python is a neat little functional loop replacement, a mini query language.
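To illustrate that last point with a hypothetical dataset: a generator expression reads much like a SQL query while replacing the explicit loop.

```python
# Hypothetical rows, shaped like the result set of a SQL SELECT.
orders = [
    {"customer": "ada", "total": 120},
    {"customer": "bob", "total": 40},
    {"customer": "ada", "total": 75},
]

# Roughly: SELECT SUM(total) FROM orders WHERE customer = 'ada'
ada_total = sum(o["total"] for o in orders if o["customer"] == "ada")
assert ada_total == 195
```

The filtering and accumulation that would otherwise be a hand-written loop are expressed declaratively, which is the query-language style of FP described above.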
Yes, isolating side effects is how it was intended to be and how they spin it, but in practice the abstraction is leaky, and it leaks in bad, difficult-to-predict ways. Therefore, as I said, you end up with things like having to test for memory leaks, something that is usually not an issue in "imperative" languages like Java, C# or Python.
I like the functional paradigm inside a good multi-paradigm language: passing around closures as first-class objects is much cleaner and more concise than fiddling with subclasses and virtual methods. But forcing immutability and lazy evaluation as the main principles of the language doesn't seem to be a good design choice: it forces you to jump through hoops to implement common functionality like interaction, logging or configuration, and in return it doesn't deliver the higher modularity and intelligibility that were promised.
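The closures-versus-subclasses contrast can be sketched in Python (a hypothetical example; the names are made up): the object-oriented version needs one class per behavior, while the functional version just passes a function.

```python
from typing import Callable

# Subclass/virtual-method style: one class per behavior.
class Discount:
    def apply(self, price: float) -> float:
        raise NotImplementedError

class TenPercentOff(Discount):
    def apply(self, price: float) -> float:
        return price * 0.9

# Closure style: the behavior is a first-class function,
# parameterized without declaring a new class.
def percent_off(pct: float) -> Callable[[float], float]:
    return lambda price: price * (1 - pct / 100)

# Both express the same behavior; the closure version needs no hierarchy.
assert TenPercentOff().apply(100.0) == percent_off(10)(100.0) == 90.0
```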
Agreed. Abstractions are still leaky, and where some pathologies in abstraction (i.e. in human-understandable precise formulation) can be made much less of an issue by using functional tools and types, others tend to surface that are only rarely a problem for more concrete code. In practice, the tradeoff is not one-sided, so understanding its structure is useful for making decisions in particular cases.