With functional programs, it is possible to ensure through type-checking that certain classes of errors cannot occur. With imperative programs, all testing can do is demonstrate the presence of errors (with absence of evidence being evidence of absence—but not proof of absence).
This doesn’t appear to be correct given that you can always transform functional programs into imperative programs and vice versa.
I’ve never heard that you can program in functional languages without doing testing and relying only on type checking to ensure correct behavior.
In fact, AFAIK, Haskell, the most popular pure functional programming language, is bad enough that you actually have to test all non-trivial programs for memory leaks: except in special cases, it is not possible to reason about the memory allocation behavior of a program from its source code and the language specification, because the allocation behavior depends on implementation-specific and largely undocumented details of the compiler and the runtime. Anyway, this memory allocation issue may be specific to Haskell, but in general, as I understand it, there is nothing in the functional paradigm that guarantees a higher level of correctness than the imperative paradigm.
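The failure mode behind such leaks can be sketched outside Haskell. Below is a rough Python analogy (my own illustration of the deferral mechanism, not Haskell itself, and it fails with a recursion blowup rather than heap exhaustion): a left fold that defers every addition builds a long chain of suspended computations, and forcing the result at the end collapses, much as a thunk-accumulating lazy fold exhausts memory.

```python
from functools import reduce

def lazy_sum(n):
    """Fold that defers each addition, mimicking lazy thunk buildup."""
    thunk = reduce(lambda acc, x: (lambda: acc() + x),  # wrap, don't compute
                   range(n),
                   lambda: 0)
    return thunk()  # forcing the chain unwinds n nested calls at once

def strict_sum(n):
    """Fold that computes each addition immediately (constant depth)."""
    return reduce(lambda acc, x: acc + x, range(n), 0)

print(lazy_sum(100))      # 4950 -- fine for small n
print(strict_sum(10**6))  # the strict version scales to large n
# lazy_sum(10**6) would die with RecursionError: the deferred chain is
# too deep to force, loosely analogous to a space leak from lazy foldl
```

The point of the analogy: nothing in `lazy_sum`'s source marks it as broken; the problem only appears when the deferred structure is forced, which is what makes this class of behavior hard to reason about from source alone.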
I’ve never heard that you can program in functional languages without doing testing and relying only on type checking to ensure correct behavior.
“Certain classes of errors” is meant to be read as a very narrow claim, and I’m not sure that it’s relevant to AI design / moral issues. Many sorts of philosophical errors seem to be type errors, but it’s not obvious that typechecking is the only solution to that. I was primarily drawing on this bit from Programming in Scala, and in rereading it I realize that they’re actually talking about static type systems, which is an entirely separate thing. Editing.
Verifiable properties. Static type systems can prove the absence of certain run-time errors. For instance, they can prove properties like: booleans are never added to integers; private variables are not accessed from outside their class; functions are applied to the right number of arguments; only strings are ever added to a set of strings.
Other kinds of errors are not detected by today’s static type systems. For instance, they will usually not detect non-terminating functions, array bounds violations, or divisions by zero. They will also not detect that your program does not conform to its specification (assuming there is a spec, that is!).

Static type systems have therefore been dismissed by some as not being very useful. The argument goes that since such type systems can only detect simple errors, whereas unit tests provide more extensive coverage, why bother with static types at all? We believe that these arguments miss the point. Although a static type system certainly cannot replace unit testing, it can reduce the number of unit tests needed by taking care of some properties that would otherwise need to be tested. Likewise, unit testing cannot replace static typing. After all, as Edsger Dijkstra said, testing can only prove the presence of errors, never their absence.[14] So the guarantees that static typing gives may be simple, but they are real guarantees of a form no amount of testing can deliver.
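The division the excerpt draws can be sketched in a few lines. This is my own Python illustration, assuming an external static checker such as mypy (not part of the excerpt): the annotations let a checker reject ill-typed calls before the program runs, while a division by zero still slips through to runtime.

```python
def describe(count: int) -> str:
    # A static checker rejects ill-typed uses of this function,
    # e.g. describe("five") or "total: " + count, before the program runs.
    return f"{count} item(s)"

def ratio(a: int, b: int) -> float:
    # Type-correct, yet a checker will not notice that b may be 0:
    # conforming to the type signature is not conforming to the spec.
    return a / b

print(describe(3))  # "3 item(s)"
print(ratio(6, 2))  # 3.0
# ratio(1, 0) type-checks fine but raises ZeroDivisionError at runtime
```

This is exactly the excerpt's split: the first class of error is ruled out statically, the second still needs tests.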
This doesn’t appear to be correct given that you can always transform functional programs into imperative programs and vice versa.
The relevant difference is in the isolation and explicit formulation of side effects, which encourages writing more pieces of code whose behavior can be understood precisely in most situations. The toolset of functional programming is usually better for writing higher-order code that keeps the sources of side effects abstract, so that they can be reintroduced separately without affecting the rest of the code. As a result, a lot of code can have well-defined behavior that is not disrupted by the context in which it’s used.
This works even without types, but with types the discipline can be followed more systematically, and sometimes enforced. It also becomes possible to offload some of the checking work to a compiler (even when the behavior of a piece of code is well-defined and possible to understand precisely, there is the additional step of making sure it’s used appropriately).
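One way to picture the isolation being described, sketched in Python (the names and the discount example are my own, purely for illustration): the pure core's behavior is fully determined by its arguments, while the single effectful wrapper is the only place where context can intrude, and even there the effects are kept abstract as parameters.

```python
def apply_discount(prices, rate):
    """Pure core: output depends only on the inputs, so its behavior
    can be understood (and tested) without any surrounding context."""
    return [round(p * (1 - rate), 2) for p in prices]

def run(read_line=input, write_line=print):
    """Effectful shell: side effects enter only through these two
    parameters, kept abstract so the core stays context-independent."""
    raw = read_line()
    prices = [float(tok) for tok in raw.split()]
    for p in apply_discount(prices, 0.10):
        write_line(p)
```

Because `run` takes its effect sources as parameters, a test can "put them back separately" by passing fakes, which is the move described above in miniature.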
I’ve never heard that you can program in functional languages without doing testing and relying only on type checking to ensure correct behavior.
See Why Haskell just works. It’s obviously not magic; the point is that enough errors can be ruled out by exploiting types and relying on sparing use of side effects to make a difference in practice. This doesn’t ensure correct behavior (for example, Haskell programs can always enter an infinite loop while promising to eventually produce a value of any type, and Standard ML programs can use side effects that won’t be reflected in types). It’s just a step in the right direction when correctness is a priority. There is also the prospect that more steps in this direction might eventually get you closer to correctness.
I used to love functional programming and the elegance of e.g. Haskell, until I realized functional programming has the philosophy exactly backwards. You want to make it easy for humans and hard for machines, not vice versa.
Humans think causally, e.g. imperatively and statefully. When humans debug functional/lazy programs, they generally smuggle in stateful/causal thinking to make progress. This is a sign that something is going wrong with the philosophy.
Hm. The primary reason I got interested in fp is that I really like SQL; I think it is very easy for the human mind. And LINQ is built on top of functional programming, and the Gigamonkeys book builds a similar query language on top of functional programming and macros, so perhaps fp should be used that way: taking it as far as possible towards building query languages in it.
But I guess it always depends on what you want to do. My philosophy of programming is automation-based. That means if I need to do something once, I do it by hand; if a thousand times, I write code. This ability to repeat operations many times is what makes automating human work possible, and from this I derived that the most important imperative structure is the loop. The loop is what turns a mere set of rules into powerful data-processing machinery, doing an operation many more times than I care to do it myself. With SQL, LINQ and other queries, we are essentially optimizing the loop as such. For example, the generator expression in Python is a neat little functional loop-replacement, a mini-query language.
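The "functional loop-replacement" point can be made concrete with a generator expression; the data and field names below are made up for illustration:

```python
# A generator expression reads like a tiny query: filter, project,
# aggregate, with the looping machinery left implicit.
orders = [
    {"customer": "ada", "total": 120},
    {"customer": "bob", "total": 80},
    {"customer": "ada", "total": 45},
]

# Roughly: SELECT SUM(total) FROM orders WHERE customer = 'ada'
ada_spend = sum(o["total"] for o in orders if o["customer"] == "ada")
print(ada_spend)  # 165
```

The explicit loop, counter and accumulator are all gone; only the rule being repeated remains visible.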
The relevant difference is in the isolation and explicit formulation of side effects, which encourages writing more pieces of code whose behavior can be understood precisely in most situations. The toolset of functional programming is usually better for writing higher-order code that keeps the sources of side effects abstract, so that they can be reintroduced separately without affecting the rest of the code. As a result, a lot of code can have well-defined behavior that is not disrupted by the context in which it’s used.
Yes, that’s how it was intended to be and how they spin it, but in practice the abstraction is leaky, and it leaks in bad, difficult-to-predict ways. Therefore, as I said, you end up with things like having to test for memory leaks, something that is usually not an issue in “imperative” languages like Java, C# or Python.
I like the functional paradigm inside a good multi-paradigm language: passing around closures as first-class objects is much cleaner and more concise than fiddling with subclasses and virtual methods. But forcing immutability and lazy evaluation as the main principles of the language doesn’t seem to be a good design choice. It forces you to jump through hoops to implement common functionality like interaction, logging or configuration, and in return it doesn’t deliver the higher modularity and intelligibility that were promised.
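The closures-versus-subclasses contrast can be shown in a few lines of Python (a generic illustration of the pattern, not code from the thread): the same one-word customization is a closure in the functional style, versus a class and a virtual-method override in the object-oriented style.

```python
# Functional style: pass behavior around as a first-class closure.
def make_greeter(prefix):
    return lambda name: f"{prefix}, {name}!"

hello = make_greeter("Hello")

# Object-oriented style: the same customization via subclass + override.
class Greeter:
    def prefix(self):
        return "Hi"
    def greet(self, name):
        return f"{self.prefix()}, {name}!"

class FormalGreeter(Greeter):
    def prefix(self):  # virtual-method override to customize behavior
        return "Hello"

print(hello("Ada"))                  # Hello, Ada!
print(FormalGreeter().greet("Ada"))  # Hello, Ada!
```

Both produce the same result; the closure version just carries much less ceremony, which is the multi-paradigm argument in miniature.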
Agreed. Abstractions are still leaky, and where some pathologies in abstraction (i.e. human-understandable precise formulation) can be made much less of an issue by using functional tools and types, others tend to surface that are only rarely a problem for more concrete code. In practice, the tradeoff is not one-sided, so understanding its structure is useful for making decisions in particular cases.