Disclaimer 1: I didn’t downvote your comment.
Disclaimer 2: I have only quickly skimmed Eliezer’s take on the free will question, since it includes part of the Quantum Physics sequence which I intend to read as a whole and without hurry. But I didn’t spot anything that conflicted with my take on it, and I would be very surprised if that were the case since it’s basically a matter of epistemic hygiene.
I think you’re falling into the assumption that just because people use a term a lot, that term must have some unique value, even if its borders are fuzzy (hence your comparisons to “heap” and “disease”). But that is not always the case. Free will is supposed to describe an objective property of ourselves: either you have it or you don’t, true or false, tertium non datur. But is there any conception of how Universe[PeopleHaveFreeWill] and Universe[PeopleHaveNoFreeWill] would look different to us (or to anyone else, full brain scanner included)? No, there isn’t. We cannot imagine the experience of a world where our HasFreeWill boolean variable has been flipped (whatever its value used to be!), any more than we can imagine the experience of a world where we are dead. As a predicate, “free will” is a complete and utter failure.
So where does the flatus vocis “free will” come from, then? (That question, which is more historical than philosophical, always has an answer, even if the term is a delusion that pretends to be a reality, e.g. “soul”.) Here’s how I put it: “‘Free will’ means ‘what decisional brain activity looks like from the inside’.” That’s where I spot the seed of meaningfulness in the term; the less rigorous usage started when people tried to connect it to the difficulties of cosmology, at first God’s puppeteering and later the alienness of physics. (I suppose I could say “free will is an illusion of the self” if I didn’t hate to sound like a street-corner preacher.) If you try the straight replacement, the usual statements and questions about free will generally turn out to be either trivial or nonsensical — and yes, I’m aware that that doesn’t prove anything on its own.
Ah, right. The good ol’ “the only consistent meaning of ‘free will’ is ‘what humans do’” approach.
However, I think that it IS possible to imagine how it matters if PeopleHaveFreeWill=false (though it’s quite difficult to visualize it from inside—I can only imagine “toning down” the free will by eliminating certain desiderata). Imagine that Laplace’s demon could exist, and it wrote down the story of your life in a book when you were born. Someone else could read the book and know exactly what you do next year. My intuition doesn’t think this sounds like free will.
Or imagine a universe where all your decisions were completely random. That doesn’t sound like free will either, right? But all your (note: my definition of “your,” i.e. “the measured you”) decisions are random, to the extent that a muon could come screaming out of the atmosphere and make your brain misfire at any time.
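The two thought experiments above can be sketched as code, under an illustrative assumption: a "deterministic agent" is modeled as a seeded pseudo-random generator (anyone holding the seed — the demon's book — can reproduce every decision in advance), while a "random agent" draws from a source nobody can predict ahead of time. The agent names, the seed, and the left/right choices are all hypothetical stand-ins, not anything from the discussion itself.

```python
import random
import os

def deterministic_agent(seed, n):
    """All n 'decisions' follow necessarily from the seed (the demon's book)."""
    rng = random.Random(seed)
    return [rng.choice(["left", "right"]) for _ in range(n)]

# The demon writes the book at "birth"...
book = deterministic_agent(seed=42, n=10)
# ...and the lived life matches it exactly, decision for decision.
life = deterministic_agent(seed=42, n=10)
assert book == life  # perfectly predictable: intuition says "no free will"

def random_agent(n):
    """Each 'decision' is an unpredictable coin flip (the muon scenario)."""
    return [("left" if os.urandom(1)[0] % 2 else "right") for _ in range(n)]

# No book can be written for this agent in advance -- but pure noise
# doesn't feel like free will either, which is the point of the pairing.
```

The sketch makes the dilemma concrete: the first agent is fully foreseeable, the second is fully unforeseeable, and intuition refuses to call either one "free".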
So if free will is really poorly defined (and it is), then the simple definition that makes sense is “what humans do”; importantly, this definition agrees with our intuition that we have free will. However, if our intuition is allowed to speculate a bit more, we can think up scenarios where we might not have free will. But this contradicts the intuition from two sentences ago that we definitely have free will! What I am trying to demonstrate is that there is a problem after all, and it lies in the murky way in which our intuition handles the question “does X have free will?” If the problem is really dealt with, we should end up understanding how our intuition works here, at least to a large degree. That’s why I think Yvain’s post is a good model.
New idea: Laplace’s demon slasher movie: I know what you did next summer!
Someone else could read the book and know exactly what you do next year. My intuition doesn’t think this sounds like free will. Or imagine a universe where all your decisions were completely random. That doesn’t sound like free will either, right?
So, you suddenly realise you live in either of those universes and go “oh, well, I have no free will”.
Does that imply anything for you? Do you start behaving any differently? Is there any practical conclusion that you would reach in both of those universes that you wouldn’t in one where you had free will? (Such a universe shouldn’t exist, since you ruled out both determinism and non-determinism, but we’ll allow it, since the lack of a counterfactual would also make free will meaningless.) Emphasis on “both”: there are interesting consequences to determinism and to non-determinism, but you need free will to be the discriminating factor for the concept to be worth existing.
(As a side note, my “intuitive answers” aren’t the same as yours, but I won’t bring them up since I’m arguing that everyone’s “intuitive answers” are just non-answers to a non-question.)
Well, it would certainly shake up my morality a bit, which would then change my actions. My ideas of punishment and reward would become more utilitarian as I held people less “responsible” for doing good or bad, for example.
However, if you’re asking “what would be different if you’d been living in that universe all along and never found out,” I must admit I can’t think of anything. Wait, never mind: “The Bell inequalities wouldn’t be violated.” Or “fermions wouldn’t be identical particles.” Or “arithmetic would be inconsistent.” But it’s possible to imagine “just so” theories that would fit observations without leaving much room for free will. I wouldn’t say a Boltzmann brain has free will in the second before it boils away into the plasma.
Still, I think Occam’s razor helps rule that stuff out. I’ll have to think about it more.