There’s a certain vibe in the air surrounding many discussions of decision theory. It sings: maybe the central insight of game theory (that multiplayer situations are not reducible to single-player ones) is wrong. Maybe the slightly-asymmetrized Prisoner’s Dilemma has a single right answer. Maybe you can get a unique solution to dividing a cake by majority vote if each individual player’s reasoning is “correct enough”.
Could you clarify what you mean here? AFAICT, updateless/timeless decision theory does not actually dissolve the problem of strategic behavior. For instance, the cooperative solution to the one-shot PD is only stable under fairly specific conditions.
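The instability claim can be made concrete with a minimal sketch of the one-shot Prisoner's Dilemma under classical (causal) game theory. The payoff values below are the standard textbook ordering (temptation 5 > reward 3 > punishment 1 > sucker 0), chosen purely for illustration; the point is that mutual cooperation fails the Nash condition because each player gains by unilaterally defecting.

```python
# One-shot Prisoner's Dilemma: payoffs[(my_move, their_move)] -> my payoff.
payoffs = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def best_response(their_move):
    """The move maximizing my payoff against a fixed opponent move."""
    return max(["C", "D"], key=lambda my_move: payoffs[(my_move, their_move)])

def is_nash(profile):
    """A profile (a, b) is a Nash equilibrium iff each move best-responds to the other."""
    a, b = profile
    return best_response(b) == a and best_response(a) == b

print(is_nash(("C", "C")))  # False: defection strictly dominates cooperation
print(is_nash(("D", "D")))  # True: the unique equilibrium of the one-shot game
```

UDT/TDT-style arguments for cooperation work by changing what counts as a "unilateral" deviation (e.g. assuming the opponent runs a logically correlated decision procedure), which is exactly the kind of fairly specific condition the comment above refers to.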
Even if you are right, it may still be worthwhile to understand how exactly the UDT/TDT approach goes wrong. After all, finding the error in his purported disproof of Cantor’s theorem presumably helped Child Eliezer gain some sort of insight into basic set theory.
AFAICT, updateless/timeless decision theory does not actually dissolve the problem of strategic behavior.
It doesn’t, but there seems to be a widespread hope that some more advanced decision theory will succeed at that task. Or maybe I’m misreading that hope.
I seek a better conceptual foundation that would allow talking about ethics more rigorously, for example.