From the beginning, I invented timeless decision theory because I was skeptical that two perfectly sane and rational hyperintelligent beings with common knowledge about each other would have no choice but mutual defection in the oneshot prisoner’s dilemma. I suspected they would be able to work out Something Else Which Is Not That, so I went looking for it myself.
I don’t see how this makes the point you seem to want it to make. There’s still an equilibrium selection problem for a program game of one-shot PD—some other agent might commit to a program that insists (through a biased coin flip) on an outcome that’s just barely better for you than defect-defect. It’s clearly easier to coordinate on a cooperate-cooperate program equilibrium in PD or any other symmetric game, but in asymmetric games there are multiple apparently “fair” Schelling points. And even restricting to one-shot PD, the whole commitment races problem is that the agents don’t have common knowledge before they choose their programs.
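A minimal numeric sketch of that extortion point, under my own assumptions (the standard PD payoff values, the margin `EPS`, and the name “extortionist” are illustrative choices, not from the comment): a program can commit to a biased coin that leaves the other agent just barely better off complying than triggering defect-defect, which is also a program equilibrium.

```python
# Toy sketch (my construction) of the equilibrium-selection problem
# in a program game of one-shot PD.
# Standard PD payoffs: T=5 (temptation), R=3 (reward),
# P=1 (punishment/defect-defect), S=0 (sucker).
T, R, P, S = 5, 3, 1, 0
EPS = 0.1  # how much better than defect-defect the extortionist offers

# The "extortionist" program commits to a biased coin flip: with
# probability q it plays mutual cooperation, otherwise it defects while
# the opponent cooperates. q is chosen so the opponent's expected payoff
# under compliance is exactly P + EPS -- just barely above defect-defect.
q = (P + EPS - S) / (R - S)  # solves q*R + (1-q)*S = P + EPS

opponent_if_comply = q * R + (1 - q) * S      # = P + EPS
extortionist_if_comply = q * R + (1 - q) * T  # captures most of the surplus
both_if_refuse = P                            # refusal collapses to defect-defect

print(f"q = {q:.4f}")
print(f"opponent complies:  {opponent_if_comply:.2f} (vs {both_if_refuse} for refusing)")
print(f"extortionist gets:  {extortionist_if_comply:.2f} (vs {R} under fair C-C)")
```

Since complying yields the opponent strictly more than refusing, the extortion program is a best response to itself being accepted, even though it is far from the “fair” cooperate-cooperate equilibrium.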