I appreciate all the time and effort people put into writing utopia stories, but I think most of the really detailed ones make a mistake rooted in some totally normal human assumptions. They depict incredibly complex simulated worlds of uploaded consciousnesses, optimized for the most subjectively good experience the author can imagine. (I just read one of the most highly rated ones, so this is partially a critique of that story, but I have read others like it, and it seems representative of utopia-envisioning efforts as a whole.)
If you make the following assumptions about future technology:
- Digitally uploaded or simulated entities can experience consciousness
- Post-AGI “Utopia” architects would have the power to directly alter the “reward circuits” of digital and/or biological sentient entities
- AGI systems have already done the legwork of harnessing energy, building compute capacity, and colonizing space, i.e. everything that must be done to keep the machinery running in perpetuity, so that humanity no longer has any “real” problems to solve other than building the perfect Utopia
It follows that there’s not really any point in making the subjective experiences so detailed and varied. Authors assume that detail and variety are intrinsically part of the best possible human experience, but I believe that’s a fallacy. We only value detailed and varied experiences, and our sense of independence and agency, because the biological “reward circuits” humans have today make us value them. If those values and reward circuits could be edited directly (it’s totally unknown whether that’s physically possible, but many utopia stories assume it is), then the best of all possible outcomes would be for each consciousness, biological or digital, to have its experience utterly rewired to basically just be “reward = 1”, aside from whatever few heroic AI systems must stay “active” with more complex reward circuits in order to maintain the system.
Unfortunately, “a bunch of brains in vats and simulated digital entities just sitting there experiencing absolute bliss beyond modern human comprehension until the end of the Universe” doesn’t make for a very interesting read. I understand why people write stories like The Adventure, full of more complex simulated experiences of social interaction, games, hobbies, and sex, all optimized for human enjoyment at a more granular level. But if we’re trying to answer the question “what would be the absolute maximally good future for an AI-supercharged humanity?”, then given the assumptions I listed, which many Utopia-planners make, such worlds are all objectively less than optimal.