Evolution works on species; members of smart species will either evolve together or eventually get smart enough to learn to copy each other even if adversarial; such interactions will probably roughly approximate evolutionary game theory; and iterated games among social animals will probably yield cooperation and possibly altruism. Knowing more about yourself, about the process that created you, and about the arbitrariness of how you ended up with your preferences intuitively seems like it would promote egalitarianism, on both aesthetic and pragmatic game-theoretic grounds. Curiosity is just a necessary prerequisite for intelligence; it’s obviously convergent. Something like Buddhahood is just a necessary prerequisite for a reasonable decision theory approximation, and is thus also convergent. That one is horribly imprecise, I know, but Buddhahood is hard enough to explain in itself, let alone as a decision theory approximation, let alone as a normative one.
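To gesture at the iterated-games point: here’s a minimal toy sketch, assuming the standard iterated prisoner’s dilemma payoffs (mutual cooperation 3 each, mutual defection 1 each, defecting against a cooperator 5 vs. 0), where a reciprocating strategy like tit-for-tat sustains cooperation over repeated play while mutual defection stays stuck at the low payoff:

```python
# Toy iterated prisoner's dilemma: tit-for-tat (cooperate first, then
# copy the opponent's last move) vs. always-defect, over repeated rounds.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # sustained cooperation: (300, 300)
print(play(always_defect, always_defect))  # mutual defection: (100, 100)
print(play(tit_for_tat, always_defect))    # exploited once, then retaliates: (99, 104)
```

This is only the two-player toy version, not the full evolutionary-dynamics story, but it shows the shape of the intuition: once interactions repeat, conditional cooperators do far better against each other than defectors do against anyone.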
That’s just the scattershot, sleep-deprived, off-the-top-of-my-head version that’s missing all the good intuitions. If I end up converting my mountain of intuitions into respectable arguments in text, I will let you know. It’s just so much easier to do in person, where I can get quick feedback about others’ ontologies and how they mesh with mine, et cetera.
Thanks, on both counts. And, yes, agreed that it’s easier to have these sorts of conversations with known quantities.