I think humans don’t actually have “respect for the preferences of existing agents” in a way that doesn’t pose existential risks for agents weaker than them.
Imagine a planet of conscious paperclippers. They are pre-Singularity paperclippers, so they are not exactly coherent single-minded agents: they have many shards of desire, and if you took their children and put effort into their upbringing, those children wouldn’t become single-minded paperclippers and would have some sort of alien fun. But the majority of their establishment and conventional morality says that the best future outcome is to build a superintelligent paperclip-maximizer and die, turning into paperclips. Yes, including the children. Yes, they would strongly object if you tried to divert them from this course. They won’t accept someone buying a lot of paperclips somewhere else as a substitute, just as humanity wouldn’t accept getting paperclipped in exchange for a Utopia built somewhere else.
I actually don’t know what position future humanity would take on this hypothetical, but I predict that a significant faction would be really unhappy and demand violent intervention.
Except that SOTA establishment and conventional morality are arguably either manageable or far from settled. There is an evolution-based case against the possibility that paperclippers could arise naturally. What would be the SOTA model organisms of alien cultures that we would call misaligned? Aztec culture before it was discovered? Or something more recent, like Kampuchea? And how does the chance that mankind deems a culture unworthy of continued existence depend on the culture’s capabilities, including its capacity for self-reflection?
Humans will lose trust in the media on which they have staked their daily existence. News media and legal procedures will be altered by the influx of Sora-generated images and videos, raising the level of national aggression we already see acted out. And thanks to humans’ underdeveloped emotional responses and highly developed weaponry, and the propensity of US culture in particular toward violence, AI will aid in the spread of death and destruction.