Eliezer’s solution is to say, to give it the strongest interpretation I can, “us being determined by physics doesn’t make us not us. Therefore if we seemed to have free will before figuring out physics, we have free will with it too.” This is like approaching the heap problem by saying “I know when it’s a heap by looking at it, so there’s no problem with saying (thing X) is a heap.” Approaching the problem from “below” would be an argument like “a deterministic object like a billiard ball doesn’t seem to have free will, so we don’t either.”
Like in the heap problem, there’s a fundamental divide that wasn’t addressed. Dissolving the problem should involve asking “what do we mean when we say ‘free will’?”, and trying to answer as well as Yvain did about disease.
It might be helpful to give away some of my thoughts (and probably someone else’s): one thing free will means is “unpredictable.” But there’s no problem with having unpredictable objects in the real world, and not just by quantum-mechanical randomness, which doesn’t seem much like free will. You can have objects where the quickest way to predict them is to just watch them run. Humans are such objects—there’s no way to predict a human with 100% certainty except to watch them. Two pieces of metal can also make such an object, so obviously there are a few other parts of the definition of free will. But I think unpredictability is what a lot of people see missing in the real world (or, more philosophically, in a deterministic universe) that causes them to reject free will, so it’s a good one to share.
EDIT: Apparently the unpredictability idea may have been thought of first by Daniel Dennett, though he seems to use it as a thing by itself rather than as one part of a definition. Also, I edited the first paragraph slightly to better translate things into the heap problem.
Edit Two: If whoever downvoted simple stuff like this (or anyone who wants to express objections in their stead) would reply, that would be nice of them.
If unpredictability is part of free will, then I don’t want free will.
I want to be governed by my own purposes—I don’t want my behaviour to be random and unpredictable.
Even when playing Paper, Stone, Scissors?
I think that when the word ‘unpredictable’ is used, it is important to specify: unpredictable by whom?
In “Paper, Stone, Scissors,” as in other contests and conflicts (and in humour), you just need to be unpredicted, not truly “unpredictable.” Complete unpredictability is neither good humour (“Two men walk into a bar, then the moon exploded. Why aren’t you laughing?”), nor good gaming (“My rocket-launcher defeats your paper, your stone and your scissors”), nor good storytelling (“The killer was this guy who had never appeared, whom you could never have guessed at, and whom we were never clued in about”).
Sure, it would be dull if everyone predicted everything everyone else did; but that’s different from being predictable in the theoretical/philosophical sense that was being discussed: the sense of existing inside a deterministic universe, where we could in theory predict other people’s behaviours.
A good analysis.
What I am struggling with here is an intuition that the whole idea of unpredictability in “the theoretical/philosophical sense” is a bad, ill-formed idea. I know roughly what it means to have predictability as a two-place predicate: P(E, A) means that person A (a person equipped with the theory and empirical information that A has) is capable of predicting event E. Fine. But now how do we turn that into a one-place predicate? Do we define:
P1(E) == Forall persons A . P(E,A)
or is it
P1(E) == Forall physically possible persons A . P(E,A)
or is it
P1(E) == For some hypothetical omniscient person A . P(E,A)
or is it something more complicated, involving light cones and levels of knowledge that are still supernatural?
The thing is, even if you are able to come up with a precise definition, my intuition makes me doubt that anything so contrived could be of any possible use in a philosophical enquiry.
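For what it’s worth, the candidate definitions can be sketched as a toy program. Everything here (the `Person` class, `can_predict`, the knowledge sets) is a made-up illustration of the quantifier structure, with prediction crudely modelled as “A’s knowledge covers everything the event depends on,” not a claim about how prediction actually works:

```python
# Toy model of the two-place predicate P(E, A) and two candidate
# one-place versions. The modelling choice (prediction = "A's knowledge
# covers everything the event depends on") is an assumption made purely
# for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Person:
    name: str
    knowledge: frozenset  # facts this person has access to

def can_predict(event: frozenset, a: Person) -> bool:
    """P(E, A): person A is capable of predicting event E."""
    return event <= a.knowledge

alice = Person("Alice", frozenset({"initial conditions"}))
bob = Person("Bob", frozenset({"initial conditions", "laws of physics"}))
demon = Person("Laplace's demon",
               frozenset({"initial conditions", "laws of physics", "noise"}))

actual_persons = [alice, bob]
event = frozenset({"initial conditions", "laws of physics"})

# P1(E) == Forall persons A . P(E, A): fails, since Alice lacks the laws.
p1_all_actual = all(can_predict(event, a) for a in actual_persons)

# P1(E) == P(E, A) for a hypothetical omniscient A: succeeds for the demon.
p1_omniscient = can_predict(event, demon)

print(p1_all_actual, p1_omniscient)  # False True
```

The sketch makes the ambiguity visible: which one-place predicate you get depends entirely on which set of persons you quantify over, and the only version that makes everything predictable quantifies over a person who doesn’t exist.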
You appear to be conflating random and unpredictable. A double pendulum is not random in the typical sense; its course is merely unknown. You can be governed by your own purposes and still be unpredictable to someone else, not in the sense that you go out of your way to defy all predictions, but in the sense that such predictions are never totally accurate—the fastest way to find out what a human will do with 100% accuracy is to watch them.
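The “deterministic but merely unknown” point can be made concrete with a chaotic system. This is my own minimal sketch (using the logistic map rather than a double pendulum, purely because it fits in a few lines): two trajectories that start a hair apart separate so fast that a forecaster with imperfect measurement of the initial state loses track of the system, even though every step is fully determined.

```python
# Deterministic chaos: the logistic map at r = 4 is a fixed rule with no
# randomness anywhere, yet a measurement error of 1e-10 in the initial
# state quickly grows until the forecast is useless.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)  # the deterministic update rule

x = 0.2          # the true state of the system
y = 0.2 + 1e-10  # the forecaster's slightly-off estimate

max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # vastly larger than the 1e-10 starting gap
```

The double pendulum behaves the same way: in practice, watching the system run is the only fully accurate predictor.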
This is logically rude. You must judge the whole of the consequences, and accept or reject an argument based only on its validity, without singling out a particular detail.
No, it doesn’t. Fortunately. Otherwise my solution to Newcomb’s problem would be “Forget the damn boxes. I’m hunting down Omega, killing him and freeing the will of every creature in the universe!”
Major depression time:
Omega could find something to say to you that you would disregard even though you knew it was a vitally important truth. Omega could tell Gandhi things that would make him kill someone. To Omega, you are as complicated as a game of billiards. If you asked Omega if you had free will, Omega would say “no,” because games of billiards do not have free will. And Omega would be right, because Omega is always right.
Fortunately, Omega is unphysical.
But really, you’re entitled to your definition of free will, so long as we’re both just going by intuition. I don’t want to commit the typical mind fallacy too hard, here. It’s just that my intuition thinks that a creature that can be perfectly predicted and therefore manipulated by Omega doesn’t feel free-willed.
I am not going by my intuition.
Because your argument from the implications for Newcomb’s problem is so empirical :D
It is quite clearly deductive, not empirical.
What are your premises, and where did they come from?
The comment’s parent and descriptions of Newcomb’s Problem.
I don’t think this line of questioning is serving you. You don’t want to challenge the obvious logical implications of your ‘unpredictable’ partial definition; they are hard to deny, but don’t technically rule it out. Instead you want to question just where my own definition of ‘Free Will’ comes from, if not my intuition. That, if followed through, would require appeals to authority, etc.
I would actually not argue too hard on the point of what the ‘true’ definition of Free Will is. The point that I do consider important is the assertion “If the concept Free Will requires unpredictability then it is stupid and pointless and should be discarded entirely”. I already avoid the phrase myself by habit—it just confuses people.
I’m not particularly interested in serving myself, so that’s alright. I would find it interesting if you followed through to where your definition of free will comes from. By “premises” I meant a more formal list, coming from tracing your logic.
I’m still finding this pretty interesting, in part because it’s highlighting that I was prey to the typical mind fallacy. Apparently some people don’t find it at all problematic for free will if their life is written down ahead of time, and some people do! But I still don’t know what these other people (yes, you!) do find problematic, or if they just avoid that thought.
A note: I thought this was obvious, but after some thought it may be good to mention anyhow. Killing Omega will not restore free will, unless Omega is itself responsible for the structure of the universe—which is what my definition cares about.
Disclaimer 1: I didn’t downvote your comment.
Disclaimer 2: I have only quickly skimmed Eliezer’s take on the free will question, since it includes part of the Quantum Physics sequence, which I intend to read as a whole and without hurry. But I didn’t spot anything that conflicted with my take on it, and I would be very surprised if that were the case, since it’s basically a matter of epistemic hygiene.
I think you’re falling into the assumption that just because people use a term a lot, that term must have some unique value, even if its borders are fuzzy (hence your comparisons to “heap” and “disease”). But that is not always the case. Free will is supposed to describe an objective property of ourselves—either you have it or you don’t, true or false, tertium non datur—but is there any concept of how Universe[PeopleHaveFreeWill] and Universe[PeopleHaveNoFreeWill] would look different to us (or to anyone else, full brain scanner included)? No, there isn’t. We cannot imagine the experience of a world where our HasFreeWill boolean variable has been flipped (whatever its value used to be!), any more than we can imagine the experience of a world where we are dead. As a predicate, “free will” is a complete and utter failure.
So where does the flatus vocis “free will” come from, then? (That question, which is more historical than philosophical, always has an answer, even if the term is a delusion that pretends to be a reality, e.g. “soul”) Here’s how I put it: “‘Free will’ means ‘what decisional brain activity looks like from the inside’”. That’s where I spot the seed of meaningfulness in the term, and the less rigorous usage started when people tried to connect it to the difficulties of cosmology—at first God’s puppeteering, and later the alienness of physics (I suppose I could say “Free will is an illusion of the self” if I didn’t hate to sound like a street corner preacher). If you try the straight replacement, the usual statements and questions about free will generally appear to be either trivial or nonsensical—and yes, I’m aware that that doesn’t prove anything on its own.
Ah, right. The good ol’ “the only consistent meaning of ‘free will’ is ‘what humans do’” approach.
However, I think that it IS possible to imagine how it matters if PeopleHaveFreeWill=false (though it’s quite difficult to visualize it from inside—I can only imagine “toning down” the free will by eliminating certain desiderata). Imagine that Laplace’s demon could exist, and it wrote down the story of your life in a book when you were born. Someone else could read the book and know exactly what you do next year. My intuition doesn’t think this sounds like free will.
Or imagine a universe where all your decisions were completely random. That doesn’t sound like free will either, right? But all your (note: my definition of “your,” i.e. “the measured you”) decisions are random, to the extent that a muon could come screaming out of the atmosphere and make your brain misfire at any time.
So if free will is really poorly defined (and it is), then the simple definition that makes sense is “what humans do”; importantly, this definition agrees with our intuition that we have free will. However, if our intuition is allowed to speculate a bit more, we can think up scenarios where we might not have free will. But this contradicts the intuition from two sentences ago that we definitely have free will! What I am trying to demonstrate is that there is a problem after all, and it is in the murky way in which our intuition handles the question “does X have free will?” If the problem is really dealt with, we should end up understanding how our intuition works here, at least to a large degree. That’s why I think Yvain’s post is a good model.
New idea: Laplace’s demon slasher movie: I know what you did next summer!
So, you suddenly realise you live in either of those universes and go “oh, well, I have no free will”.
Does that imply anything for you? Do you start behaving any differently? Is there any practical conclusion that you would reach in both of those universes that you wouldn’t in one where you had free will (which shouldn’t exist since you ruled out both determinism and non-determinism, but we’ll allow it since the lack of a counterfactual would also make free will meaningless)? Emphasis on ‘both’: there are interesting consequences to determinism and non-determinism, but you need free will to be the discriminating factor for the concept to be worth existing.
(As a side note, my “intuitive answers” aren’t the same as yours, but I won’t bring them up since I’m arguing that everyone’s “intuitive answers” are just non-answers to a non-question.)
Well, it would certainly shake up my morality a bit, which would then change my actions. My ideas of punishment and reward would become more utilitarian as I held people less “responsible” for doing good or bad, for example.
However, if you’re asking “what would be different if you’d been living in that universe all along and never found out,” I must admit I can’t think of anything. Wait, never mind: “The Bell inequalities wouldn’t be violated.” Or “fermions wouldn’t be identical particles.” “Arithmetic would be inconsistent.” But it’s possible to imagine “just so” theories that would fit observations without having much free will. I wouldn’t say a Boltzmann brain has free will in the second before it boils away into the plasma.
Still, I think Occam’s razor helps rule that stuff out. I’ll have to think about it more.