Newcomb’s problem is widely accepted as related to the prisoner’s dilemma. If you 2-box in Newcomb’s problem, you’ll never cooperate in a one-shot PD, which is generally considered to have real-world applications.
This seems strange to me. It seems that someone sufficiently altruistic or utilitarian would cooperate in a one-shot PD, since it’s not a zero-sum game (except in weird hypothetical land), and that choice would have no bearing on what one would do in Newcomb’s problem.
> Newcomb’s problem is widely accepted as related to the prisoner’s dilemma. If you 2-box in Newcomb’s problem, you’ll never cooperate in a one-shot PD, which is generally considered to have real-world applications.
Omega has much better mind-reading abilities than most PD participants, I would think.
> This seems strange to me. It seems that someone sufficiently altruistic or utilitarian would cooperate in a one-shot PD, since it’s not a zero-sum game (except in weird hypothetical land), and that choice would have no bearing on what one would do in Newcomb’s problem.
ETA: for some payoff matrices.
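To make the “for some payoff matrices” caveat concrete, here is a minimal sketch (the payoff numbers are illustrative assumptions, not taken from this thread) of when a pure total-welfare utilitarian would prefer to cooperate in a one-shot PD:

```python
# Sketch: when does a total-welfare utilitarian cooperate in a one-shot PD?
# Payoff convention: T (temptation) > R (reward) > P (punishment) > S (sucker).
# The example numbers below are illustrative assumptions only.

def utilitarian_prefers_cooperation(T, R, P, S):
    """A utilitarian here maximizes the SUM of both players' payoffs.

    Whatever the other player does, compare total welfare from
    cooperating vs. defecting:
      - other cooperates: 2R (both cooperate) vs. T + S (I defect)
      - other defects:    S + T (I cooperate)  vs. 2P (both defect)
    """
    vs_cooperator = 2 * R >= T + S
    vs_defector = S + T >= 2 * P
    return vs_cooperator and vs_defector

# Textbook payoffs: cooperating maximizes total welfare either way.
print(utilitarian_prefers_cooperation(T=5, R=3, P=1, S=0))    # True

# A matrix with a huge temptation payoff: total welfare is higher if
# one player defects against a cooperator, so the conclusion fails.
print(utilitarian_prefers_cooperation(T=100, R=3, P=1, S=0))   # False
```

With the textbook ordering T > R > P > S plus 2R > T + S, cooperation maximizes total welfare regardless of what the other player does; drop the 2R > T + S condition and that can fail, which is presumably what the ETA is hedging against.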