Hmm. You’re describing a future where most humans are powerless, but keep being provided for. It seems to me that the most likely way to get such a future is if AIs (or human+AI organizations, or whatever) genuinely care about humans. But then they would also protect humans from super-optimized manipulation, no?
Or if that genuine care doesn’t exist, and UBI is provided as “scraps” so to speak, then the fate of humans is sealed anyway. As soon as the entities in power find something more interesting to do with the resources, they’ll cut welfare and that’s it. After all, the energy upkeep of a human could be used for a ton of computation instead.
I think it’s often pretty subjective whether some piece of external stimulus is super-optimized manipulation that pulls you away from what you would want to believe, or part of the natural and good process of cultural change and increased reflection.
I agree with you that the distinction is clearer for hyper-optimized persuasion.
Yeah. I guess AIs would need to protect humans from certain messages based not only on the content of the message, but also on how it was generated (e.g. by an AI or not) and for what purpose (e.g. manipulation or not). And sometimes humans need to be protected even from ideas they come up with themselves (e.g. delusions, or totalitarian ideologies).
In general, I think human life in a world with smarter-than-human AIs requires deliberate “habitat preservation”, which in turn requires AIs to make some judgment calls on what’s good or bad for humans.