I think paying in counterfactual mugging is actually the right decision. (In bizarre hypothetical land, that is. In real life, don't pay: the coin is two-tailed.)
Here is a hypothetical. I go up to you and say:
"I know Omega will play counterfactual mugging with you tomorrow. If you pay me $1 now and then don't pay Omega, I will set your car on fire. If you don't pay me any money now, I will go away without setting your car on fire."
You think about that and realize: if you don't pay me the $1, the expected value is $0. No cars get burned, and since you're not going to pay Omega, Omega predicts that, so no money changes hands with Omega either.
But if you do pay me the $1 now, then you had better pay Omega $100 tomorrow if Omega asks, and Omega will predict this. E(U) = 0.5 × $10,000 − 0.5 × $100 − $1 = $4,949.
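For concreteness, here is a minimal sketch of that expected-value comparison, assuming the usual counterfactual-mugging stakes implied by the numbers above (a fair coin, a $100 payment if asked, a $10,000 reward if Omega predicts you would pay):

```python
# Expected value of the two options in the car-fire setup.
# Assumed stakes (matching the arithmetic above): fair coin, $100 payment
# if Omega asks, $10,000 reward if Omega predicts you would have paid.

p_reward = 0.5        # probability the coin lands on the "reward" side
reward = 10_000       # payout if Omega predicts you would pay
payment = 100         # what you hand over if Omega asks
side_payment = 1      # the $1 I charge you up front

# Option A: refuse my $1. You stay the kind of agent who refuses Omega,
# Omega predicts that, and nothing changes hands.
ev_refuse = 0

# Option B: pay my $1. Now you must pay Omega tomorrow (or lose your car),
# Omega predicts that, and the reward branch pays out.
ev_pay = p_reward * reward - (1 - p_reward) * payment - side_payment

print(ev_refuse)  # 0
print(ev_pay)     # 4949.0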
Also, the bomb example is malformed. It involves Omega predicting your actions, but also leaving you a note.
Suppose your algorithm is "read the note, then go for the box without the bomb." What does Omega do then? As stated, the hypothetical seems to involve the perfect predictor Omega failing to predict the actions of CDT, EDT, and VDT agents.
Also, for intuitions (and possibly for sensible behavior), scale matters. That is, I would one-box if the quantities were $1 and $1,000,000, but two-box if the quantities were $999,999 and $1,000,000. This isn't consistent with being certain of any one decision theory, but it is very sensible behavior if you're uncertain which decision theory is true in some sense.
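A toy way to see why that split can be coherent (my own framing, not anything canonical): suppose you put credence p on the view that Omega's prediction tracks whatever you actually choose, and 1 − p on the view that the box contents are already fixed regardless of your choice. Under that mixed model, one-boxing wins exactly when p × (big prize) exceeds the small prize, so the threshold credence moves with the stakes:

```python
# Toy model of decision-theory uncertainty in Newcomb's problem.
# Assumption: with credence p the prediction tracks your actual choice;
# with credence 1 - p the big box's contents are fixed independently of
# your choice (filled with some prior probability q, which cancels out).

def one_box_minus_two_box(p, small, big, q=0.5):
    """EV(one-box) - EV(two-box) under the mixed model above."""
    ev_one = p * big + (1 - p) * q * big
    ev_two = p * small + (1 - p) * (q * big + small)
    return ev_one - ev_two  # simplifies to p * big - small

# $1 vs $1,000,000: one-boxing wins for any credence p above one-in-a-million.
print(one_box_minus_two_box(p=0.01, small=1, big=1_000_000))        # 9999.0 > 0: one-box
# $999,999 vs $1,000,000: one-boxing only wins if p exceeds 0.999999.
print(one_box_minus_two_box(p=0.99, small=999_999, big=1_000_000))  # -9999.0 < 0: two-box
```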