I guess there’s some meta-level question here that I’m interested in, as a sort of elaboration, which is something like: how do you decide which meta-levels of the world to satisfy and which to destroy? [I kind of have a sense that Eliezer’s answer can be guessed as an extension of the meta-ethics sequence, and so am interested both in his actual answer and in other people’s answers.]
For example, one might imagine a mostly-upload situation like The Metamorphosis of Prime Intellect / Friendship is Optimal / Second Life / etc., wherein everyone gets a materially abundant digital life in their shard of the metaverse, with communication heavily constrained (if nothing else, by requiring mutual consent). This, of course, discards as no-longer-relevant entities that exist on higher meta-levels; nations will be mostly irrelevant in such a world, companies will mostly stop existing, and so on.
But one could also apply the same logic a level lower. If you take Internal Family Systems / mental modules seriously, humans don’t look like atomic objects; they look like a collection of simpler subagents balanced together in a sort of precarious way. (One part of you wants to accumulate lots of fat to survive the winter, another part wants to not accumulate lots of fat so as to look attractive to mates, and the thing the ‘human’ is doing is balancing between those parts.) And so you can imagine a superintelligent system, out to do right by the mental modules, ‘splitting them apart’ in order to satisfy them separately, with one part swimming in a vat of glucose and the other inhabiting a beautiful statue, and discarding the ‘balancing between the parts’ system as no-longer-relevant.
Of course, applying this logic a level higher (the things to preserve are communities/nations/corporations/etc.) seems like it can quite easily be terrible for the people involved, and feels like it’s preserving problems in order to maintain the relevance of traditional solutions.