PhilGoetz comments on Ingredients of Timeless Decision Theory

• Here is what I don’t understand about the free will problem. I know this is a simple objection, so there must be a standard reply to it; but I don’t know what that reply is.

Denote F as a world in which free will exists, f as one in which it doesn’t. Denote B as a world in which you believe in free will, and b as one in which you don’t. Let a combination of the two, e.g., FB, denote the utility you derive from having that belief in that world. Suppose FB > Fb and fb > fB (being correct > being wrong).

The expected utility of B is FB × p(F) + fB × (1 − p(F)). The expected utility of b is Fb × p(F) + fb × (1 − p(F)). Choose b if Fb × p(F) + fb × (1 − p(F)) > FB × p(F) + fB × (1 − p(F)).

But, that’s not right in this case! You shouldn’t consider worlds of type f in your decision, because if you’re in one of those worlds, your decision is pre-ordained. It doesn’t make any sense to “choose” not to believe in free will—that belief may be correct, but if it is correct, then you can’t choose it.

Over worlds of type F, the expected utility of B is FB × p(F), and the utility of b is Fb × p(F), and FB > Fb. So you always choose B.
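For concreteness, the two rules can be put side by side in a few lines of Python (the utility values and p(F) here are hypothetical, chosen only so that being correct beats being wrong):

```python
def expected_utility(u_if_F, u_if_f, p_F):
    # Standard rule: average over both kinds of world.
    return u_if_F * p_F + u_if_f * (1 - p_F)

# Hypothetical utilities satisfying FB > Fb and fb > fB.
U = {"FB": 1.0, "Fb": 0.0, "fB": 0.0, "fb": 1.0}
p_F = 0.3  # hypothetical probability that free will exists

eu_B = expected_utility(U["FB"], U["fB"], p_F)  # 0.3
eu_b = expected_utility(U["Fb"], U["fb"], p_F)  # 0.7 -> standard rule says b

# Restricted rule from the comment above: only F-worlds count,
# because only there is the "choice" a real choice.
restricted_B = U["FB"] * p_F  # 0.3
restricted_b = U["Fb"] * p_F  # 0.0 -> restricted rule says B
```

With these numbers the standard rule picks b, while the restricted rule picks B for any p(F) > 0, which is the point being made here.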

• Denote F as a world in which free will exists, f as one in which it doesn’t.

I am unable to attach a truth condition to these sentences—I can’t imagine two different ways that reality could be which would make the statements true or alternatively false.

You shouldn’t consider worlds of type f in your decision, because if you’re in one of those worlds, your decision is pre-ordained.

http://wiki.lesswrong.com/wiki/Free_will_(solution)

• I can’t imagine two different ways that reality could be which would make the statements true or alternatively false.

Do you mean that the phrases “free will exists” and “free will does not exist” are both incoherent?

• If I want to, I can assign a meaning to “free will” in which it is tautologically true of causal universes as such, and, applied to agents, is true of some agents but not others. But you used the term; you tell me what it means to you.

• You used the term first. You called it a “dead horse” and “about as easy as a problem can get and still be Confusing”. I would think this meant that you have a clear concept of what it means. And it can’t be a tautology, because tautologies are not dead horses.

I can at least say that, to me, “Free will exists” implies “No Omega can predict with certainty whether I will one-box or two-box.” (This is not an “if and only if” because I don’t want to say that a random process has free will; nor that an undecidable algorithm has free will.)

I thought about saying: “Free will does not exist” if and only if “Consciousness is epiphenomenal”. That sounds dangerously tautological, but closer to what I mean.

I can’t think how to say anything more descriptive than what I wrote in my first comment above. I understand that saying there is free will seems to imply that I am not an algorithm; and that that seems to require some weird spiritualism or vitalism. But that is vague and fuzzy to me; whereas it is clear that it doesn’t make sense to worry about what I should do in the worlds where I can’t actually choose what I will do. I choose to live with the vague paradox rather than the clear-cut one.

ADDED: I should clarify that I don’t believe in free will. I believe there is no such thing. But, when choosing how to act, I don’t consider that possibility, because of the reasons I gave previously.

• I can at least say that, to me, “Free will exists” implies “No Omega can predict with certainty whether I will one-box or two-box.”

http://wiki.lesswrong.com/wiki/Free_will

http://wiki.lesswrong.com/wiki/Free_will_(solution)

• All right, I read all of the non-italicized links, except for the “All posts on Less Wrong tagged Free Will”, trusting that one of them would say something relevant to what I’ve said here. But alas, no.

All of those links are attempts to argue about the truth value of “there is free will”, or about whether the concept of free will is coherent, or about what sort of mental models might cause someone to believe in free will.

None of those things are at issue here. What I am talking about is what happens when you are trying to compute something over different possible worlds, where what your computation actually does differs across those worlds. When you must compare expected value in possible worlds in which there is no free will to expected value in possible worlds in which there is free will, and then make a choice, what that choice actually does is not independent of which possible world you end up in. This means that you can’t apply expectation maximization in the usual way. The counterintuitive result, I think, is that you should act in the way that maximizes expected value given that there is free will, regardless of the computed expected value given that there is not free will.

As I mentioned, I don’t believe in free will. But I think, based on a history of other concepts or frameworks that seemed paradoxical but were eventually worked out satisfactorily, that it’s possible there’s something to the naive notion of “free will”.

We have a naive notion of “free will” which, so far, no one has been able to connect up with our understanding of physics in a coherent way. This is powerful evidence that it doesn’t exist, or isn’t even a meaningful concept. It isn’t proof, however; I could say the same thing about “consciousness”, which as far as I can see really shouldn’t exist.

All attempts that I’ve seen so far to parse out what free will means, including Eliezer’s careful and well-written essays linked to above, fail to noticeably reduce the probability I assign to there being naive “free will”, because the probability that there is some error in the description or mapping or analogies made is always much higher than the very low prior probability that I assign to there being “free will”.

I’m not arguing in favor of free will. I’m arguing that, when considering an action to take that is conditioned on the existence of free will, you should not do the usual expected-utility calculations, because the answer to the free-will question determines what you are actually doing when you choose an action, in a way that has an asymmetry such that, if there is any probability epsilon > 0 that free will exists, you should assume it exists.
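The claimed asymmetry is easy to check numerically; a minimal sketch (with hypothetical utilities, assuming only that a correct belief beats an incorrect one in F-worlds):

```python
def restricted_choice(p_F, u_FB, u_Fb):
    # The rule argued for above: f-worlds are dropped entirely,
    # so only F-world utilities, weighted by p(F), are compared.
    return "B" if u_FB * p_F > u_Fb * p_F else "b"

# For any epsilon > 0, no matter how small, B wins.
for eps in (0.5, 1e-3, 1e-9):
    assert restricted_choice(eps, u_FB=1.0, u_Fb=0.0) == "B"
```

The choice is independent of how small epsilon is, which is the sense in which one “should assume free will exists” under this rule.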

(BTW, I think a philosopher who wished to defend free will could rightfully make the blanket assertion against all of Eliezer’s posts that they assume what they are trying to prove. It’s pointless to start from the position that you are an algorithm in a Blocks World and argue from there against free will. There’s some good stuff in there, but it’s not going to convince someone who isn’t already a reductionist or determinist.)

• When you must compare expected value in possible worlds in which there is no free will, to expected value in possible worlds in which there is free will

I have stated exactly what I mean by the term “free will”, and it makes this sentence nonsense; there is no world in which you do not have free will. And I see no way that your will could possibly be any freer than it already is. There is no possible amendment to reality which you can consistently describe that would make your free will any freer than it is in our own timeless and deterministic (though branching) universe.

What do you mean by “free will” that makes your sentence non-nonsense? Don’t say “if we did actually have free will”; tell me how reality could be different.

• in our own timeless and deterministic (though branching) universe.

That’s the part I don’t buy. I’m not saying it’s false, but I don’t see any good reason to think it’s true. (I think I read the posts where you explained why you believe it, but I might have missed some.)

• I can’t state exactly what I mean by “free will”, any more than I can state exactly what I mean by “consciousness”. No one has come up with a reductionist account of either. But since I actually do believe in consciousness, I can’t dismiss free will as nonsense.

A clarification added in response to the instantaneous orgy of downvotes: I realize that Eliezer has provided a reductionist explanation for how he thinks “free will” should be interpreted, and for why people believe in it. That is not what I mean. I mean that no one has come up with a reductionist account for how what people actually mean by “free will” could work in the physical world. Just as no one has come up with a reductionist account for how what people mean by “consciousness” could work in the physical world.

If you find a reason to disagree with this, it means that you have a tremendously important insight, and should probably write a little comment to share your revelation with us on a reductionist implementation of naive free will, or consciousness.

• I can’t state exactly what I mean by “free will”, any more than I can state exactly what I mean by “consciousness”. No one has come up with a reductionist account of either.

This is not only incorrect, but is in dismissive denial of statements to the contrary made by people in response to your questions. It is one thing to consider an argument incorrect or to be unwilling to accept it; it is another to fail to understand the argument to the point of denying its very existence.

• You should be more specific: point out which part of my statement is incorrect, and which statements I am dismissively denying.

A reductionist account of causality does not count as a reductionist account of free will. Saying, “The world is deterministic, therefore ‘free will’ actually means the uninteresting concept X, which is not what anybody means by ‘free will’” does not count as a reductionist account of free will.

What I mean is that no one has provided a reductionist account of how the naive notion of free will could work. Not that no one has provided a reductionist account of how the world actually works and of what “free will” maps onto in that world.

I’m also curious why it’s bad for me to dismissively deny statements made to me, but okay for you to dismissively deny my statements as incorrect.

• What I mean is that no one has provided a reductionist account of how the naive notion of free will could work.

Because that would be as silly as seeking a reductionist account of how souls or gods could “work”—the only way you’re going to get one is by explaining how the brain comes to believe these (purely mental) phenomena actually exist.

Free will is just the feeling that more than one choice is possible, just as a soul or a god is just the feeling of agency, detached from an actual agent.

All three are descriptions of mental phenomena, rather than having anything to do with a physical reality outside the brain.

• Again—yes, I agree that what you say is almost certainly true. The reason I said that no one has provided a reductionist account of how the naive notion of free will could work was to point out its similarity to the question of consciousness, which seems as nonsensical as free will and yet exists, and thereby to show that there is a possibility that there is something to the naive notion. And as long as there is some probability epsilon > 0 of that, then we have the situation I described above when performing expectation maximization.

BTW, your response is an assertion, or at best an explaining-away; not a proof.

• You shouldn’t consider worlds of type f in your decision, because if you’re in one of those worlds, your decision is pre-ordained. It doesn’t make any sense to “choose” not to believe in free will—that belief may be correct, but if it is correct, then you can’t choose it.

Saying that you shouldn’t do something because it’s preordained whether you do it or not is a very confused way of looking at things. Christine Korsgaard, by whom I am normally unimpressed but who has a few quotables, says:

Having discovered that my conduct is predictable, will I now sit quietly in my chair, waiting to see what I will do? Then I will not do anything but sit quietly in my chair. And that had better be what you predicted, or you will have been wrong. But in any case why should I do that, if I think I ought to be working?

(From “The Authority of Reflection”)

• I don’t understand what that Korsgaard quote is trying to say.

Saying that you shouldn’t do something because it’s preordained whether you do it or not is a very confused way of looking at things.

I didn’t say that. I said that, when making a choice, you shouldn’t consider, in your set of possible worlds, possible worlds in which you can’t make that choice.

It’s certainly not as confused a way of looking at things as choosing to believe that you can’t choose what to believe.

I should have said you shouldn’t try to consider those worlds. If you are in f, then it may be that you will consider such possible worlds; and there’s no shouldness about it.

“But”, you might object, “what should you do if you are a computer program, running in a deterministic language on deterministic hardware?”

The answer is that in that case, you do what you will do. You might adopt the view that you have no free will, and you might be right.

The two-sentence version of what I’m saying is that, if you don’t believe in free will, you might be making an error that you could have avoided. But if you believe in free will, you can’t be making an error that you could have avoided.

• I don’t understand what that Korsgaard quote is trying to say.

In the context of the larger paper, the most charitable way of interpreting her (IMO) is this: whether we have free will or not, we have the subjective impression of it; this impression is simply not going anywhere; and so it makes no sense to try to figure out how a lack of free will ought to influence our behavior, because then we’ll just sit around waiting for our lack of free will to pick us up out of our chair and make us water our houseplants, and that’s not going to happen.

I said that, when making a choice, you shouldn’t consider, in your set of possible worlds, possible worlds in which you can’t make that choice.

What if we’re in a possible world where we can’t choose not to consider those worlds? ;)

It’s certainly not as confused a way of looking at things as choosing to believe that you can’t choose what to believe.

“Choosing to believe that you can’t choose what to believe” is not a way of looking at things; it’s a possible state of affairs, in which one has a somewhat self-undermining and false belief. Now, believing that one can choose to believe that one cannot choose what to believe is a way of looking at things, and might even be true. There is some evidence that people can choose to believe self-undermining false things, so believing that one could choose to believe a particular self-undermining false thing which happens to have recursive bearing on the choice to believe it isn’t so far out.

• The mistake you’re making is thinking that determinism means your decisions are irrelevant. The universe doesn’t swoop in and force you to decide a certain way even though you’d rather not. Determinism only means that your decisions, by being part of physical reality rather than existing outside it, result from the physical events that led to them. You aren’t free to make events happen without a cause, but you can still look at evidence and come to correct conclusions.

• If you can’t choose whether you believe, then you don’t choose whether you believe. You just believe or not. The full equation still captures the correctness of your belief, however you arrived at it. There’s nothing inconsistent about thinking that you are forced to not believe and that seeing the equation is (part of) what forced you.

(I avoid the phrase “free will” because there are so many different definitions. You seem to be using one that involves choice, while Eliezer uses one based on control. As I understand it, the two of you would disagree about whether a TV remote in a deterministic universe has free will.)

edit: missing word, extra word

• Brian said:

If you can’t choose whether you believe, then you don’t choose whether you believe. You just believe or not. The full equation still captures the correctness of your belief, however you arrived at it. There’s nothing inconsistent about thinking that you are forced to not believe and that seeing the equation is (part of) what forced you.

And Alicorn said:

What if we’re in a possible world where we can’t choose not to consider those worlds? ;)

And before either of those, I said:

“But”, you might object, “what should you do if you are a computer program, running in a deterministic language on deterministic hardware?”

The answer is that in that case, you do what you will do. You might adopt the view that you have no free will, and you might be right.

These all seem to mean the same thing. When you try to argue against what someone said by agreeing with him, someone is failing to communicate.

Brian, my objection is not based on the case fb. It’s based on the cases Fb and fB. fB is a mistake that you had to make. Fb, “choosing to believe that you can’t choose to believe”, is a mistake you didn’t have to make.

• Yes. I started writing my reply before Alicorn said anything, took a short break, posted it, and was a bit surprised to see a whole discussion had happened under my nose.

But I don’t see how what you originally said is the same as what you ended up saying.

At first, you said not to consider f because there’s no point. My response was that the equation correctly includes f regardless of your ability to choose based on the solution.

Now you are saying that Fb is different from (inferior to?) fB.