# shminux comments on Decision Theory

• Thank you for your explanation! Still trying to understand it. I understand that there is no point examining one’s algorithm if you already execute it and see what it does.

> What if you see that your algorithm leads to taking the \$10 and instead of stopping there, you take the \$5?

I don’t understand that point. You say “nothing stops you”, but that is only possible if you could act contrary to your own algorithm, no? Which makes no sense to me, unless the same algorithm gives different outcomes for different inputs, e.g. “if I simply run the algorithm, I take \$10, but if I examine the algorithm before running it and then run it, I take \$5”. But that doesn’t seem to be what you mean, so I am confused.

> What if you examine your algorithm and find that it takes the \$5 instead?

How can that be possible? If your examination of your algorithm is accurate, it gives the same outcome as mindlessly running it, which is taking the \$10, no?

> It could be the same algorithm that takes the \$10, but you don’t know that; instead you arrive at the \$5 conclusion using reasoning that could be impossible, but that you don’t know to be impossible, that you haven’t yet decided to make impossible.

So your reasoning is inaccurate, in that you arrive at a wrong conclusion about the algorithm’s output, right? You just don’t know where the error lies, or even that there is an error to begin with. But in that case you would arrive at a wrong conclusion about the same algorithm run by a different agent, right? So there is nothing special about it being your own algorithm rather than someone else’s. If so, the issue reduces to finding an accurate algorithm-analysis tool for an algorithm that demonstrably halts in a very short time, producing one of two possible outcomes. This seems to have little to do with decision-theory issues, so I am lost as to how this is relevant to the situation.
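The picture in this paragraph can be made concrete with a toy sketch (all names invented, not anyone’s actual proposal): a tiny, obviously-halting program and an “analysis tool” that is accurate simply because the program is trivial.

```python
# A toy version of the "just analyze the algorithm" picture
# (hypothetical names throughout): a program that demonstrably halts
# at once with one of two outcomes, plus an analysis tool that is
# trivially accurate for it.

def algorithm():
    # Halts immediately, producing one of the two possible outcomes.
    return "$10"

def analyze(alg):
    # For a program this simple, analysis can just be execution.
    return alg()

# An accurate analysis agrees with a mindless run, whoever the
# algorithm belongs to.
assert analyze(algorithm) == algorithm()
```

The replies below argue that this picture breaks down precisely when the algorithm being analyzed consults the analysis of itself.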

I am clearly missing some of your logic here, but I still have no idea what the missing piece is, unless it’s the libertarian-free-will thing, where one can act contrary to one’s programming. Any further help would be greatly appreciated.

• > I understand that there is no point examining one’s algorithm if you already execute it and see what it does.

Rather, there is no point if you are not going to do anything with the results of the examination. It may be useful if you make the decision based on what you observe (about how you make the decision).

> You say “nothing stops you”, but that is only possible if you could act contrary to your own algorithm, no?

You can, for a certain value of “can”. It won’t have happened, of course, but you may still decide to act contrary to how you act, two different outcomes of the same algorithm. The contradiction proves that you didn’t face the situation that triggers it in actuality, but the contradiction results precisely from deciding to act contrary to the observed way in which you act, in a situation that a priori could be actual, but is rendered counterlogical as a result of your decision. If instead you affirm the observed action, then there is no contradiction, and so it’s possible that you have faced the situation in actuality. Thus the “chicken rule”: playing chicken with the universe, making the present situation impossible when you don’t like it.
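The chicken rule described above can be sketched in a few lines (a minimal illustration with made-up names, not a full agent): whenever the agent observes a prediction that it takes the \$5, it responds by taking the \$10, so no consistent prediction of the bad action can exist.

```python
# Minimal sketch of the "chicken rule" (hypothetical names): if the
# agent observes a prediction that it takes the $5, it deliberately
# takes the $10, rendering that prediction counterlogical rather
# than actual.

def chicken_agent(observed_prediction):
    if observed_prediction == "$5":
        return "$10"  # act contrary to the disliked prediction
    return "$10"      # affirm the observed (good) action

# No accurate predictor can ever announce "$5" for this agent:
assert chicken_agent("$5") == "$10"
assert chicken_agent("$10") == "$10"
```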

> So your reasoning is inaccurate, in that you arrive at a wrong conclusion about the algorithm’s output, right?

You don’t know that it’s inaccurate; you’ve just run the computation and it said \$5. Maybe this didn’t actually happen, but you are considering this situation without knowing whether it’s actual. If you ignore the computation, then why run it? If you run it, you need responses to all possible results, and all possible results except one are not actual, yet you should be ready to respond to them without knowing which is which. So I’m discussing what you might do for the result that says you take the \$5. And in the end, the use you make of the results is in choosing to take the \$5 or the \$10.

This map from predictions to decisions could be anything. It’s trivial to write an algorithm that includes such a map. Of course, if the map diagonalizes, then the predictor will fail (won’t give a prediction), but the map is your reasoning in these hypothetical situations, and the fact that the map may say anything corresponds to the fact that you may decide anything. The map doesn’t have to be the identity, the decision doesn’t have to reflect the prediction, because you may write an algorithm where it’s not the identity.
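One way to make the diagonalization point concrete (a sketch under invented names): let a predictor search for a self-consistent prediction, i.e. a fixed point of the map from predictions to decisions. When the map is the identity, a consistent prediction exists; when it diagonalizes (swaps the options), the predictor fails, as the paragraph above says.

```python
# A predictor that looks for a self-consistent prediction: an outcome
# the agent's prediction-to-decision map sends back to itself.

def consistent_prediction(decision_map, options=("$5", "$10")):
    for p in options:
        if decision_map(p) == p:
            return p  # a fixed point: the prediction comes true
    return None       # the map diagonalizes; the predictor fails

identity = lambda p: p
diagonal = lambda p: "$10" if p == "$5" else "$5"

assert consistent_prediction(identity) == "$5"   # first consistent option
assert consistent_prediction(diagonal) is None   # no prediction possible
```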

• > You can, for a certain value of “can”. It won’t have happened, of course, but you may still decide to act contrary to how you act, two different outcomes of the same algorithm.

This confuses me even more. You can imagine acting contrary to your own algorithm, but imagining different possible outcomes is a side effect of running the main algorithm that takes the \$10. It is never the outcome of it. Or an outcome. Since you know you will end up taking the \$10, I also don’t understand the idea of playing chicken with the universe. Are there any references for it?

> You don’t know that it’s inaccurate; you’ve just run the computation and it said \$5.

Wait, what? We started with the assumption that examining the algorithm, or running it, shows that you will take the \$10, no? I guess I still don’t understand how

> What if you see that your algorithm leads to taking the \$10 and instead of stopping there, you take the \$5?

is even possible, or worth considering.

> This map from predictions to decisions could be anything.

Hmm, maybe this is where I miss some of the logic. If the predictions are accurate, the map is bijective. If the predictions are inaccurate, you need a better algorithm-analysis tool.

> The map doesn’t have to be the identity, the decision doesn’t have to reflect the prediction, because you may write an algorithm where it’s not the identity.

To me this screams “get a better algorithm analyzer!” and has nothing to do with whether it’s your own algorithm or someone else’s. Can you maybe give an example where one ends up in a situation where there is no obvious algorithm analyzer one can apply?
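One concrete example of the kind asked for here, in sketch form (all names invented): an agent that consults the analyzer about itself and then does the opposite. Any analyzer that commits to one of the two outcomes is wrong about this agent, so “get a better analyzer” has no stable answer once the algorithm contains the analyzer’s verdict as an input.

```python
# An agent built on top of the very analyzer that is analyzing it.
# Whatever verdict the analyzer gives, the agent does the opposite,
# so no analyzer that outputs "$5" or "$10" can be accurate here.

def make_agent(analyzer):
    def agent():
        verdict = analyzer(agent)  # ask the tool about this very agent
        return "$10" if verdict == "$5" else "$5"
    return agent

always_five = lambda alg: "$5"
always_ten = lambda alg: "$10"

assert make_agent(always_five)() == "$10"  # predicted $5, takes $10
assert make_agent(always_ten)() == "$5"    # predicted $10, takes $5
```

Analyzing someone else’s algorithm has no such loop, which is why the self-referential case is the interesting one for decision theory.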