True but irrelevant. To make an accurate prediction, Omega needs, at the very least, to simulate my decision-making faculty in all significant respects. If my decision-making process decides to recall some particular memory, then Omega needs to simulate that memory in all significant respects. And if my decision-making process decides to wander around the room conducting physics experiments, just to be a jackass, and to peg my decision to the results of those experiments, then Omega will need to convincingly simulate the results of those experiments.
I’m not convinced that all that actually follows from the premises. One of the features of Newcomblike problems is that they tend to appear intuitively obvious to the people exposed to them, which suggests rather strongly to me that the intuitive answer is linked to hidden variables in personality or experience, and in most cases isn’t sensitively dependent on initial conditions.
People don’t always choose the intuitive answer, of course, but augmenting that baseline with information about the decision-theoretic literature you’ve been exposed to, any contrarian tendencies you might have, etc. seems like it might be sufficient to achieve fine-grained predictive power without actually running a full simulation of you. The better the predictive power, the more powerful the model of your decision-making process has to be, but Omega doesn’t actually have to have perfect predictive power for Newcomblike conditions to hold. Given the size of the payoff, it doesn’t even have to have particularly good predictive power, as the back-of-the-envelope sketch below illustrates.
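To put a number on that, here’s a rough sketch. The payoff figures, the accuracy parameter q, and the expected-value framing are my own illustrative assumptions, not part of the original problem statement; q is the probability that Omega’s prediction matches your actual choice, whichever way you decide.

```python
# Back-of-the-envelope: how accurate does Omega need to be before
# paying beats refusing? Omega awards the prize iff it predicted "pay";
# its prediction matches your actual choice with probability q.
# AWARD and COST are made-up numbers for illustration.

AWARD = 1_000_000  # hypothetical prize for a predicted "pay"
COST = 1_000       # hypothetical price of paying

def expected_value(pay: bool, q: float) -> float:
    """Expected payoff given your action and Omega's accuracy q."""
    p_award = q if pay else 1 - q
    return p_award * AWARD - (COST if pay else 0)

# Paying wins as soon as q*AWARD - COST > (1-q)*AWARD,
# i.e. q > 1/2 + COST/(2*AWARD).
break_even = 0.5 + COST / (2 * AWARD)
print(f"break-even accuracy: {break_even:.4f}")  # 0.5005
print(expected_value(pay=True, q=0.51))          # ~509000
print(expected_value(pay=False, q=0.51))         # ~490000
```

With a million-to-a-thousand payoff ratio, anything better than 50.05% accuracy already makes paying the higher-expected-value action, which is the sense in which Omega’s predictive power can be quite modest.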
Er, I think we’re talking about two different formulations of the problem (both of which are floating around on this page, so this isn’t too surprising). In the original post, the constraint is P(o=award) = P(a=pay), rather than P(o=award) = q·P(a=pay) + (1−q)·P(a=refuse), which implies that Omega’s prediction is nearly infallible, as it usually is in problems starring Omega: any deviation of P(o=award) from 0 or 1 would have to come from “truly random” influences on my decision (e.g. quantum coin tosses). Also, I think the question is not “what are your intuitions?” but “what is the optimal decision for a rationalist in these circumstances?”
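Spelling out how the two constraints relate (the restatement below is mine; q is Omega’s per-decision accuracy, as above):

$$P(o=\mathrm{award}) \;=\; q\,P(a=\mathrm{pay}) + (1-q)\,P(a=\mathrm{refuse}) \;\;\xrightarrow{\;q=1\;}\;\; P(o=\mathrm{award}) = P(a=\mathrm{pay})$$

So the original post’s constraint is the q = 1 special case, and if the decision procedure is deterministic, P(a=pay) is 0 or 1, forcing P(o=award) to 0 or 1 as well.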
You seem to be suggesting that most of what determines my decision to pay or refuse could be boiled down to a few factors. I think the evidence weighs heavily against this: effect sizes in psychological studies tend to be small, and the evidence also suggests that these kinds of cognitive processes really are sensitively dependent on initial conditions. How a question is phrased, and what you’ve had on your mind lately, can each have a significant impact, to name just a couple of examples.