I’m not sure what’s supposed to be tricky about this. It’s trading off a 99% chance of doing better in 1% of all worlds against a 1% chance of doing worse in 99% of all worlds (if I am in a world where the calculator malfunctioned). Being risk-averse, I prefer being wrong in some small fraction of the worlds to an equally small chance of being wrong in all of them, so I’d want Omega to write “odd” (or, even better, leave it up to the counterfactual me, which should have the same effect but feels better).
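For concreteness, here is a minimal Python sketch of the arithmetic this framing relies on. The 99% calculator accuracy and the partition of worlds by calculator display are reconstructed from the surrounding thread rather than stated outright in this comment, and the replies below dispute the resulting figures.

```python
# A rough sketch, not from the original thread: the 0.99 accuracy and the
# world bookkeeping below are assumptions reconstructed from the discussion.
P_EVEN_GIVEN_OBS = 0.99  # credence that Q is even after the calculator showed "even"

def expected_wrong_measure(answer):
    """Expected measure of counterfactual ("odd"-display) worlds where `answer` is wrong.

    If Q is even, those worlds are the 1% where the calculator malfunctioned;
    if Q is odd, they are the 99% where it worked correctly.
    """
    wrong_if_even = 0.01 if answer == "odd" else 0.0   # "odd" is wrong when Q is even
    wrong_if_odd = 0.99 if answer == "even" else 0.0   # "even" is wrong when Q is odd
    return (P_EVEN_GIVEN_OBS * wrong_if_even
            + (1 - P_EVEN_GIVEN_OBS) * wrong_if_odd)

print(expected_wrong_measure("odd"))   # approx. 0.0099: near-certainly wrong, but only in 1% of worlds
print(expected_wrong_measure("even"))  # approx. 0.0099: a 1% risk of being wrong in 99% of worlds
```

Under this framing the two options tie in expected measure of worlds gotten wrong, which is why the comment falls back on risk aversion to break the tie.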
(Apologies for a long string of mutually contradictory replies I made to this and then deleted. Apparently I’m not in the best shape right now, and the parent comment pattern-matches to elements of the correct solution while still not making sense on further examination. One point that’s clearly wrong is the claim that risk attitude matters for which solution is correct, whatever the other elements of this analysis mean.)
One point that’s clearly wrong is the claim that risk attitude matters for which solution is correct, whatever the other elements of this analysis mean.
I don’t see how you could possibly know that without knowing where the error in my reasoning is, unless you already know with high confidence that in the correct solution the options are either nowhere close to balanced or identical in every way anyone with consistent preferences could possibly care about. That would imply that you already know the correct solution and are just testing us. Why don’t you simply post it here (at least rot13ed)? Wouldn’t that greatly facilitate determining whether other solutions are due to misunderstandings or under-specifications of the problem statement rather than to errors in reasoning?
I don’t see how you could possibly know that without knowing where the error in my reasoning is, unless you already know with high confidence that in the correct solution the options are either nowhere close to balanced or identical in every way anyone with consistent preferences could possibly care about.
That’s the case. The updateless analysis is pretty straightforward; see shokwave’s comment. Solving the thought experiment is not the question posed by the post, just an exercise.
(Although, given the difficulty many readers had interpreting the intended setup of the experiment, including a solution might have prevented such misunderstanding. Anyway, I think the description of the thought experiment is sufficiently debugged now, thanks to feedback in the comments.)
This raised my confidence that I’m right and both of you are wrong (based on your previous comment I had updated down to 0.3 confidence that I’m right; now I’m back to 0.8). shokwave’s analysis would be correct if Q were different in the counterfactual world. I’m going to reply there in more detail.
Correct, assuming you’re only talking about the possible worlds included in the counterfactual (I didn’t notice this assumption at first, so I wrote some likely incorrect comments, which are now removed).
See the disclaimer in the last paragraph. The topic of the post is not how to solve the thought experiment; that much should be obvious with UDT. It’s about the nature of our apparently somewhat broken intuition of observational knowledge.
Still wrong (the 99%/1% figures are incorrect), although perhaps starting from a correct intuition.
Why has nobody posted a careful UDT analysis yet, just to see what actually goes on in the problem? I expected better, hence didn’t include such an analysis myself. The topic of the post is not how to solve the thought experiment; that should be obvious with UDT. It’s about the nature of our apparently somewhat broken intuition of observational knowledge. Still, one should clearly see the UDT analysis first in order to discuss that.
Edit: (Although the 99% correct/1% wrong figures you give are wrong, I wonder if I should retract this comment...)
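Since no one in the thread spells the analysis out, here is a minimal sketch of what such an updateless calculation might look like. The setup is reconstructed from the discussion and carries assumptions: a 99%-accurate calculator, equal prior measure on the two parities of Q, and a decision that only takes effect in the worlds where the calculator displayed “odd”.

```python
# A sketch under assumed numbers (0.99 accuracy, 50/50 prior on Q's parity);
# neither figure is given explicitly in this thread.
CALC_ACCURACY = 0.99

# Possible worlds: (actual parity of Q, calculator display, prior measure).
worlds = []
for parity in ("even", "odd"):
    other = "odd" if parity == "even" else "even"
    worlds.append((parity, parity, 0.5 * CALC_ACCURACY))       # calculator correct
    worlds.append((parity, other, 0.5 * (1 - CALC_ACCURACY)))  # calculator malfunctioned

def counterfactual_score(answer):
    """Probability that `answer` is right, restricted to the worlds the
    decision actually affects: those where the calculator said "odd"."""
    mass = sum(p for _, display, p in worlds if display == "odd")
    hits = sum(p for parity, display, p in worlds
               if display == "odd" and parity == answer)
    return hits / mass

for answer in ("even", "odd"):
    print(answer, counterfactual_score(answer))
# Prints roughly: even 0.01, odd 0.99
```

On this accounting, writing “odd” is right in 99% of the worlds the decision touches, so the options are nowhere close to balanced and no appeal to risk attitude is needed, which matches the claim above that risk attitude is irrelevant to which solution is correct.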
Yes, “odd” is the correct answer, and you seem to have arrived at it by an updateless analysis of the decision problem (making no logical assumptions about which answer is correct, only considering possible observations), which I disclaimed in the last paragraph.
The question the post poses, using this thought experiment, is not which answer is correct (we already have the necessary tools to reliably tell), but what the nature of observational knowledge is, given that it apparently fails in this thought experiment yet is a crucial element of most other reasoning, and in what sense logical knowledge is different.
(Note that this analysis doesn’t face any of the under-specification problems that too many of the other commenters complained about without clearly explaining what relevant ambiguities remain.)