Thanks for your posts, Scott! This has been super interesting to follow.
Figuring out where to set the AM-GM boundary strikes me as maybe the key consideration for whether I should use GM at all: otherwise I don't know how to apply it in practical situations, and the arbitrariness makes GM feel inelegant.
From your VNM-rationality post, it seems like one way to think about the boundary is commensurability. You use AM within clusters whose members are willing to sacrifice for each other (who are willing to make Kaldor-Hicks improvements, and who have some common currency s.t. "K-H improvement" is well-defined; or, in another framing, who have a meaningfully shared utility function). Maybe that's roughly the right notion to start with? But then it feels strange to me not to consider things commensurate across epistemic viewpoints, especially if those views are contained in a single person (though GM-ing across internal drives does seem plausible to me).
I'd love to see you (or someone else) explore this idea more, and share hot takes about how to pin down the questions you allude to in the AM-GM boundary section of this post: where to set this boundary, examples of where you personally would set it in different cases, and what desiderata we should eventually have for boundary-setting. (It feels plausible to me that having maximally large clusters is, in some important sense, the right thing to aim for.)
Wow, I came here to say literally the same thing about commensurability: that perhaps AM is for what’s commensurable, and GM is for what’s incommensurable.
Though, one note: to me it actually seems fine to consider different epistemic viewpoints as incommensurate. These might be like different islands of low K-complexity, each of which gets some nice traction on the world but in very different ways, and where the path between them goes through inaccessibly-high K-complexity territory.
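As a toy illustration of the AM/GM contrast being discussed (my own sketch, with made-up numbers, not anything from the post): suppose three epistemic viewpoints score the same candidate action. AM lets the strong scores buy off the weak one, while GM gives each viewpoint something like veto power, since one near-zero score drags the whole product down.

```python
from math import prod

# Hypothetical utilities one action receives under three viewpoints.
utilities = [0.9, 0.8, 0.05]
n = len(utilities)

# Arithmetic mean: utilities trade off against each other (commensurable).
am = sum(utilities) / n

# Geometric mean: a viewpoint near zero tanks the aggregate (incommensurable).
gm = prod(utilities) ** (1 / n)

print(f"AM = {am:.3f}, GM = {gm:.3f}")  # AM ≈ 0.583, GM ≈ 0.330
```

The third viewpoint's 0.05 barely moves the AM but nearly halves the GM relative to it, which is the "each island gets traction its own way" intuition in miniature.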