I don’t follow. The OP only describes one computer simulating people, and it doesn’t care if they reach a decision or not. It just performs fixed actions if they do. For a given decision, everyone by assumption already knows what the computer will do. (I assumed that each X* defined zero utility to include non-termination of the decision procedure, though I doubt the number matters.) Perhaps for this reason, my own decision procedure terminates quickly and should be easy to simulate here.
Your utility calculation determines the result of each button, and therefore which button you will press. But the likelihood of being in a simulation determines the result of your utility calculation. And which button you press determines (via the computer simulating you or not) the likelihood of being in a simulation. So your utility calculation is indirectly trying to determine its own result.
Just do it this way:
Assume you pick “sim” ⇒ calculate the probability of being a simulation conditional on picking “sim” ⇒ calculate the expected utility conditional on picking “sim” and on the calculated probabilities.
Assume you pick “don’t sim” ⇒ calculate the probability of being a simulation conditional on picking “don’t sim” ⇒ calculate the expected utility conditional on picking “don’t sim” and on the calculated probabilities.
Then just pick whichever of the two has the highest expected utility. No infinite regress there!
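The two-branch procedure above can be sketched in a few lines. All of the probabilities and payoffs below are hypothetical placeholders, not values given in the problem; the point is only the structure: condition on each action, compute expected utility under that conditioning, then take the max.

```python
def expected_utility(action, p_sim_given_action, utilities):
    """EU of an action, given the probability of being a simulation
    conditional on having picked that action."""
    p = p_sim_given_action[action]
    u = utilities[action]
    return p * u["if_sim"] + (1 - p) * u["if_real"]

# Hypothetical conditional probabilities of being a simulation.
p_sim_given_action = {"sim": 0.9, "dont_sim": 0.1}

# Hypothetical payoffs for each action in each situation.
utilities = {
    "sim":      {"if_sim": 1.0, "if_real": 5.0},
    "dont_sim": {"if_sim": 0.0, "if_real": 3.0},
}

# Pick whichever action has the highest expected utility.
best = max(p_sim_given_action,
           key=lambda a: expected_utility(a, p_sim_given_action, utilities))
```

Note that each branch's probability is computed *under the assumption* that the action is taken, so the calculation never has to refer to its own output: there is no regress, just two fixed conditionals compared at the end.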
I guess he’s positing that you yourself might simulate the computer in order to figure out what happens.
You’re right, though; I don’t see any reason to actually do that, because you already have a sufficient specification to work out the consequences of all of your available strategies for the problem.