Thanks for this post; this does seem like a risk worth highlighting.
I’ve just started reading Thomas Schelling’s 1960 book The Strategy of Conflict, and noticed that chapter 2 covers a lot of the core ideas in this post. My guess is that this is an uninteresting, obvious observation, and that Daniel and most readers were already aware (a) that many of the core ideas here are well-trodden territory in game theory, and (b) that this post’s objectives were to:
highlight these ideas to people on LessWrong
highlight their potential relevance to AI risk
highlight how this interacts with updateless decision theory and acausal trade
But maybe it’d be worth it for people interested in this problem to read that chapter of The Strategy of Conflict, or other relevant work in standard academic game theory, to see if there are additional ideas there that could be fruitful here.
(Caveat: I’m about halfway through The Strategy of Conflict, and so far it’s not really offering solutions to any of these problems, just sketching out the problem space.)