Hmm, I’m not sure this is an adequate formalization, but:
Let’s assume there is an evolved population of agents. Each agent has an internal parameter p, 0 <= p <= 1, and implements the decision procedure p*CDT + (1-p)*EDT. That is, given a problem, the agent tosses a pseudorandom p-biased coin and decides according to either CDT or EDT, depending on the result of the toss.
Assume further that there is a test set of a hundred binary decision problems, that Omega knows the test results for every agent, and that it knows nothing else about them. Then Omega can estimate
P(agent’s p = q | test results)
and predict “two box” if the maximum likelihood estimate of p is >1/2 and “one box” otherwise. (On problems where CDT and EDT disagree, the MLE is simply the fraction of the agent’s answers that match CDT.) [Here I assume for the sake of argument that CDT always two-boxes.]
Given the right distribution of p’s in the population, Omega can be made to predict with any given accuracy. Yet there appears to be no reason to one-box...
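A minimal simulation sketch of this setup (the Beta prior over p, the agent count, and treating every test problem as CDT/EDT-discriminating are my own illustrative assumptions, not part of the setup above):

import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_TESTS = 100_000, 100

# Assumed population: p drawn from Beta(0.2, 0.2), which is bimodal,
# so most agents are nearly pure CDT (p near 1) or EDT (p near 0).
p = rng.beta(0.2, 0.2, size=N_AGENTS)

# On each binary test problem the agent follows CDT with probability p;
# record 1 for a CDT-style answer, 0 for an EDT-style one.
test_results = rng.random((N_AGENTS, N_TESTS)) < p[:, None]

# Omega's maximum likelihood estimate of p: the observed CDT fraction.
p_hat = test_results.mean(axis=1)

# Omega predicts "two box" iff p_hat > 1/2 (CDT two-boxes by assumption).
predicts_two_box = p_hat > 0.5

# On Newcomb's problem itself the agent tosses its p-biased coin again.
two_boxes = rng.random(N_AGENTS) < p

print(f"Omega's accuracy: {np.mean(predicts_two_box == two_boxes):.3f}")

Sharpening the prior toward the endpoints (e.g. Beta(0.05, 0.05)) pushes the printed accuracy as close to 1 as desired, which is the sense in which “any given accuracy” is achievable.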
Wait, are you deriving the uselessness of UDT from the fact that the population doesn’t contain UDT? That looks circular, unless I’m missing something...
Err, no, I’m not deriving the uselessness of either decision theory here. My point is that only the “pure” Newcomb’s problem, where Omega always predicts correctly and the agent knows it, is well-defined. The “noisy” problem, where Omega is known to sometimes guess wrong, is underspecified: the correct solution (that is, whether one-boxing or two-boxing is the utility-maximizing move) depends on exactly how and why Omega makes mistakes. Simply saying “probability 0.9 of correct prediction” is insufficient, as the sketch below illustrates.
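A toy illustration of that underspecification (both error models and the 0.9 figure are my own hypothetical choices): two Omegas that are each right 90% of the time, but for different reasons, and which therefore reward different actions. Expected utilities are computed evidentially, i.e. conditional on the agent’s actual choice.

# Classic payoffs: $1,000,000 in box B if Omega predicted one-boxing,
# plus $1,000 in box A, which a two-boxer always takes.
M, K = 1_000_000, 1_000

def eu(p_full_if_one, p_full_if_two):
    """Expected utilities given P(box B full | your actual choice)."""
    one_box = p_full_if_one * M
    two_box = p_full_if_two * M + K
    return one_box, two_box

# Model A: Omega reads your disposition with 10% symmetric noise,
# so P(correct prediction | either choice) = 0.9.
print("noisy reader:", eu(p_full_if_one=0.9, p_full_if_two=0.1))

# Model B: Omega ignores you and always predicts "one box"; it still
# scores 90% accuracy if 90% of the population happens to one-box.
print("blind guesser:", eu(p_full_if_one=1.0, p_full_if_two=1.0))

Under Model A one-boxing wins (900,000 vs 101,000); under Model B two-boxing wins (1,001,000 vs 1,000,000), even though both Omegas are “correct with probability 0.9”.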
But in the “pure” Newcomb’s problem, it seems to me that CDT would actually one-box, reasoning as follows:
Since Omega always predicts correctly, I can assume that it makes its predictions using a full simulation.
Then this situation in which I find myself now (making the decision in Newcomb’s problem) can be either outside or within the simulation. I have no way to know, since it would look the same to me either way.
Therefore I should decide assuming 1/2 probability that I am inside Omega’s simulation and 1/2 that I am outside.
So I one-box.
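To make that reasoning explicit, here is one way the expected-utility arithmetic could go under the 1/2-simulation assumption (the payoff amounts, the credences q and r, and the premise that I value the real agent’s payout are my own illustrative reconstruction): if I am the simulation, my act causally fixes the box contents for the real agent; if I am real, my act only determines whether I take the extra $1,000.

# Payoffs: $1,000,000 in box B if Omega predicted one-boxing; $1,000 in
# box A, taken only by a two-boxer.
M, K = 1_000_000, 1_000

def eu_cdt(q, r):
    """CDT expected utilities under the 1/2-simulation assumption.

    q: my credence that the real agent one-boxes (matters if I'm the sim);
    r: my credence that box B is already full (matters if I'm real).
    """
    # If I'm the sim (prob 1/2), one-boxing causes box B to be filled,
    # so the real agent gets M (if they one-box) or M + K (if they two-box).
    # If I'm real (prob 1/2), I take whatever box B already holds.
    one_box = 0.5 * (q * M + (1 - q) * (M + K)) + 0.5 * (r * M)
    # Two-boxing as the sim leaves box B empty; as the real agent it
    # adds K on top of box B's prior contents.
    two_box = 0.5 * ((1 - q) * K) + 0.5 * (r * M + K)
    return one_box, two_box

# The margin is (M - K) / 2 regardless of q and r.
for q in (0.0, 0.5, 1.0):
    for r in (0.0, 0.5, 1.0):
        one, two = eu_cdt(q, r)
        assert abs((one - two) - (M - K) / 2) < 1e-6
print("one-boxing wins by", (M - K) / 2, "for all q, r")

The dependence on q and r cancels: filling the box gains the real agent M no matter what they do, while two-boxing gains only K, so one-boxing wins by (M - K)/2 under any credences.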