One thing I read Eliezer as saying, in Dissolving the Question, is that the phenomenology of free will is more interesting than the metaphysics:
Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.
(This comment is not a full answer to that “homework assignment.”)
In other words, it is a fact that humans do reasonably reliably possess the intuition, “I have free will.” We do have that intuition; our having it is something to be explained. And it is a fact that when we examine the processes that we are made of — physics — we do not (contra Penrose) see anywhere for free will to sneak in. Brains use the same atoms that billiard balls and computers do.
(I don’t know if you are a coder. A “stack trace” is a snapshot of what is going on, at a particular moment, at every level of abstraction in a computer program. Stack traces are often seen when a program crashes, to let the programmer follow the trail of the code bug or bad data that led to the crash. We might obtain a stack trace of consciousness through introspective techniques such as meditation — I’m not there yet via that particular method, but I think I can follow the arguments.)
This is fine, if that’s the answer. But it’s not a compatibilist answer.
I take Eliezer to be (heavily) influenced by Daniel Dennett. Dennett, in Elbow Room, Freedom Evolves, etc., holds that what we want out of “free will” is that we create choices to influence the future; that we can take reasoned steps to avoid predicted bad outcomes; that we could have done otherwise if we had thought or believed differently. This is just as incompatible with indeterminism (wherein our seeming choices are the results of quantum indeterminacy in the neurons, per Penrose) as with a sort of greedy mechanical determinism wherein our choices are produced by our bodies without conscious reflection. I take Dennett as implying that our choices are produced by our bodies, but that conscious reflection is our name for the mechanism by which they are produced.
(As Eliezer points out in the Anti-Zombie posts, consciousness does have an effect on the world, notably that we talk about consciousness: we discuss our thoughts, plans, fears, dreams.)
Eliezer diagrams this in Thou Art Physics: the determinist claims that the world’s future is caused by physics operating on the past, and not by me making choices. But to the computationally minded materialist, “me” is the name of the place within physics where my choices are calculated, and “me” certainly does have quite a bit of control over the future.
I am not convinced that a materialist determinist like Sam Harris or Democritus would be convinced. The fact that I draw a line around some part of physics and call it “me” doesn’t mean I control what goes on in that boundary, after all.
(Computation is vital here, because computation and (selective) correlation are never free. In order for a computation to take place, it has to take place somewhere. In order for some outputs (say, my movement towards the cookie jar) to correlate with some inputs (my visual signals about the cookie jar), that correlation has to be processed somewhere. Plot my worldline, and I am in orbit around the cookie jar with some very complex equation modeling my path; but where that equation is actually computed in order to guide my feet, is inside my brain.)
But the reason that determinism worries freshman philosophy students and novice LWers is that it seems to imply fatalism — that the choices we make don’t matter, because the universe is scripted in advance. This compatibilist view, though, seems to say that the choices we make do matter, because they are part of how the universe calculates what the future will bring.
Fatalism says we can’t change the future, so we may as well just sit on the couch playing video games. Compatibilism says that we are the means of changing the future.
Point of order—a stack trace is not a dump of everything that’s going on, just the function call stack. It’s essentially “How did I get to here from the start of the program”.
A dump of everything would be a core dump, named after “core memory”—a very, very old memory technology.
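To make the distinction concrete, here is a minimal Python sketch (function names are my own invention, purely for illustration) showing that a stack trace records the chain of calls that led to an error, not a dump of all program state:

```python
import traceback

def outer():
    # outer calls inner; both frames will appear in the trace
    inner()

def inner():
    raise ValueError("bad data")

try:
    outer()
except ValueError:
    trace = traceback.format_exc()

# The trace answers "how did I get here from the start of the
# program": it names outer() and inner(), but says nothing about
# the rest of memory (that would be a core dump).
print(trace)
```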
Point of order—Your comment is not a point of order. A point of order is an interjection about process in parliamentary procedure. Your comment was a clarification of terminology, which does not have the precedence of a point of order.
[This is meant to be silly, not harsh; but if you want to make fussy terminological points on LW, I will do likewise...]
I am not convinced that a materialist determinist like Sam Harris or Democritus would be convinced. The fact that I draw a line around some part of physics and call it “me” doesn’t mean I control what goes on in that boundary, after all.
I guess this is my sticking point. After all, a billiard ball is a necessary link in the causal chain as well, and no less a computational nexus (albeit a much simpler one), but we don’t think that we should attribute to the ball whatever sort of authorship we wish to talk about with reference to free will. If we end up showing that we have a property (e.g. ‘being a necessary causal link’) that’s true of everything then we’ve just changed the topic and we’re no longer talking about free will. Whatever we mean by free will, we certainly mean something human beings (allegedly) have, and rocks don’t. So this does just strike me as straightforward anti-free-will determinism.
But the reason that determinism worries freshman philosophy students and novice LWers is that it seems to imply fatalism...
That may be right, and it may just be worth pointing out that determinism doesn’t imply fatalism. But in that light the intuitive grounds for fatalism seem much more interesting than the intuitive grounds for the belief in free will. I’m not entirely sure we’re naturally apt to think we have free will in any case: I don’t think anyone before the Romans ever mentioned it, and it’s not like people back then didn’t have worked out (if false) metaphysical and ethical theories.
I don’t think anyone before the Romans ever mentioned it, and it’s not like people back then didn’t have worked out (if false) metaphysical and ethical theories.
Actually, the ancient Egyptian concept of Maat seems to include free will in some sense, as a “responsibility to choose Good”, according to this excerpt. But yeah, it was not separate from ethics.
That’s really interesting, thanks for posting it. It’s an obscure sort of notion, but I agree it’s got some family resemblance to the idea of free will. I guess I was thinking mostly of the absence of the idea of free will from Greek philosophy.
I guess I was thinking mostly of the absence of the idea of free will from Greek philosophy.
I took a course on ancient and medieval ethics as an undergraduate. We spent a lot of time on free will, talking about Stoic versus Epicurean views, and then how they show up in Cicero and in Thomas. My impression (as a non-expert) is that Aristotle doesn’t have a term that equates to “free will”, but that other Greek writers very much do.
You’re right, of course, that many of those philosophers wrote in Greek. I suppose I was thinking of them as Hellenistic or Latin, and thinking of Greek philosophers as Plato, Aristotle, and their contemporaries. But I was speaking imprecisely.
we don’t think that we should attribute to the ball whatever sort of authorship we wish to talk about with reference to free will.
That is because the billiard ball doesn’t have sufficient inner complexity and processes. I think the necessary complexity is the computational ability to a) model parts of the future world state, b) base behavior on that, and c) model the modelling of this. The problem arises when your model of your model goes from intuition (the sensation of free will) to symbolic form, which allows detection of the logical inconsistencies at some higher modelling level.
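For what it’s worth, (a)–(c) can be sketched as a toy program. This is entirely my own construction, not anyone’s theory of mind; every name in it is hypothetical:

```python
# Toy agent illustrating (a)-(c): it models future states,
# bases its behavior on those models, and keeps a record of
# its own modelling (a crude "model of the model").

def predict(state, action):
    # (a) model part of the future world state
    return state + action

def choose(state, actions, goal):
    # (b) base behavior on the predicted outcomes:
    # pick the action whose predicted result is closest to the goal
    best = min(actions, key=lambda a: abs(goal - predict(state, a)))
    # (c) record the modelling itself, so the agent can inspect
    # how its own choice was produced
    log = [{"state": state, "considered": list(actions), "chose": best}]
    return best, log

action, trace = choose(0, [-1, 0, 1], goal=1)
```

The billiard ball, on this picture, has (at most) the physics of `predict` but nothing playing the role of `choose` or `log`.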
Actually little is needed to ascribe agency to ‘balls’. Just look at https://www.youtube.com/watch?v=sZBKer6PMtM and tell me what inner processes you infer about the ‘ball’ due to its complex interactions.
That is because the billiard ball doesn’t have sufficient inner complexity and processes.
I agree that your (a)-(c) are necessary (and maybe sufficient) conditions on having free will.
The problem arises when your model of your model goes from iniuition (sensation of free will) to symbolic form which allows detection of the logical inconsistencies at some higher modelling level.
What do you mean by this?
sensation of free will
To my knowledge, I’ve never had this sensation, so I don’t know what to say about it. So far as I understand what is meant by free will, it’s not the sort of thing of which one could have a sensation.
To my knowledge, I’ve never had this sensation, so I don’t know what to say about it. So far as I understand what is meant by free will, it’s not the sort of thing of which one could have a sensation.
Further to the other subthread, I suppose what most people mean when they talk about the sensation of free will is imagining multiple possible worlds and feeling control over which one will become actual before it does. Do you not have this?
I suppose what most people mean when they talk about the sensation of free will is imagining multiple possible worlds and feeling control over which one will become actual before it does. Do you not have this?
I wouldn’t call that a sensation or a feeling, but yes. I do think I act freely, and I can recall times when I’ve acted freely. If I don’t have free will, then I’m wrong about all that.
Isn’t your comment then also not a point of order?
Muphry’s law FTW!
Thanks, that’s very helpful.