Promoted to curated: I think it’s pretty likely that a huge fraction of the value of the future will be determined by the question this post is trying to answer: how much game theory produces natural solutions to coordination problems, or, more generally, how much better we should expect systems to get at coordination as they get smarter.
I don’t agree with everything in the post, and a few of the characterizations of updatelessness seem a bit off to me (which Eliezer points to in his comment). Still, I found reading it quite interesting and valuable: it helped me think about which coordination problems we have a mechanistic understanding of how being smarter and better at game theory might help with, and which ones we lack good mechanisms for, which IMO is a quite important question.
Thank you, habryka!
As mentioned in my answer to Eliezer, my arguments were made with that correct version of updatelessness in mind (not “being scared to learn information”, but “ex ante deciding whether to let this action depend on this information”), so I believe they still hold.
But it may be true that I should have stressed this point more in the main text.