Different people have different values-on-reflection, and there should be autonomy in how those values develop, with a person screening off all external influence on their own value development. Influences should only get let in purposefully, from within the values-defining process of reflection, rather than being imposed externally (the way any algorithm screens off the rest of reality from the process of its own evaluation). So in a post-ASI world there is already a motivation for managing autonomous worlds/theories/modalities just for the purpose of uplifting people legitimately, in a way that ultimately follows from the person’s own decisions, rather than from any externally imposed shaping of their developing values that doesn’t pass through their informed decision to endorse such influences.
(There needs to be some additional structure, compared to just letting people run wild within astronomical resources, the way you wouldn’t give a galaxy to a literal chimp. There needs to be an initial uplifting process that gets the chimp into a position of knowing what it’s doing, at which point what it should be doing is up to it, and becomes an objective fact following from its own nature. Similarly, some humans have self-destructive tendencies such as spontaneously rushing to develop powerful AIs before they know what they are doing, and managing such issues is not unlike what it takes to uplift a chimp.)
So an obvious extrapolation is that people don’t all need to live in a single world, at a deep and fundamental level. For the abstract worlds they shape and inhabit, there should be principles that manage the influence between these worlds, according to what each abstract world prefers to listen to or be influenced by (such as communication of knowledge or of solutions to puzzles, talking to other people, or initiating shared projects). The world should have parts, and the parts get to decide the nature of their own boundaries from the inside, which in a post-ASI world could involve arbitrary conceptual injunctions.
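Purely as an illustrative toy (nothing here is from the discussion itself; World, endorse, and receive are made-up names), here is a minimal sketch of the "boundaries decided from the inside" idea: a world keeps an allow-list that only its own internal process can extend, so external influence is admitted only after an endorsement from within.

```python
from dataclasses import dataclass, field

@dataclass
class World:
    """Toy 'world' that only admits influences it has endorsed from within."""
    name: str
    endorsed_kinds: set = field(default_factory=set)  # kinds of influence let in on purpose
    log: list = field(default_factory=list)

    def endorse(self, kind: str) -> None:
        # The decision to open the boundary comes from inside the world itself.
        self.endorsed_kinds.add(kind)

    def receive(self, kind: str, content: str) -> bool:
        # External influence is screened off unless it was endorsed from within.
        if kind not in self.endorsed_kinds:
            return False
        self.log.append((kind, content))
        return True

# Usage: a world that has chosen to accept puzzle solutions but not value nudges.
w = World("alice-world")
w.endorse("puzzle-solution")
assert w.receive("puzzle-solution", "proof of lemma 3")      # admitted
assert not w.receive("value-nudge", "you should prefer X")   # screened off
```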
Have you already read Lady Of Mazes? There is a world (a constructed one, in orbit around Jupiter) that works this way on a small human scale in the book’s opening scene (Act I, Scene I). The whole book explores this and related ideas.
I have an objection and a comment. First of all, IMO instead of talking about “giving a galaxy to a literal chimp”, we should think about “creating chimp-suited environments in all stellar systems[1] of the galaxy”. That way, no individual chimp would be able to lay waste to an entire galaxy.
In addition, I have an idea about potential terminal values which could be common to most people. Recall that the AI-2027 forecast has ever-larger virtual corporations of copies of Agent-3, and later Agent-4, learning to succeed at diverse, challenging tasks, usually related to research. The values that Agent-4 was assumed to develop are a complicated, kludgy mess that roughly amounts to “do impressive R&D, gain knowledge and resources, preserve and grow the power of the collective”, plus, potentially, moral reasoning and idiosyncratic values.
One can try to view the evolution of humans and human cultures through a similar lens. The drives reinforced in collectives would be to avoid being overly aggressive, to train individuals to succeed at tasks like doing useful work in large communities, to replenish the number of humans or to increase[2] it, to align new humans to the collective, and to gain knowledge and resources.
The drives reinforced in individuals add zero-sum values like status, as well as the non-zero-sum value of being able not merely to inherit status but to increase it via some kind of status games. This might imply that mankind’s values are knowledge, capabilities, alignment, the potential to increase one’s status via status games (preferably ones related to capabilities), diversity of experiences, and idiosyncratic values depending on the collective and the individuals.
[1] Or the few systems to which mankind is entitled, since there might be far more other potential cradles of alien civilisations.
[2] For example, the Bible literally has God tell the humans to be fruitful and multiply.