I think it’s often pretty subjective whether some piece of external stimulus is super-optimized manipulation that will take you further away from what you want to believe, or part of the natural and good process of cultural change and increased reflection.
I agree with you that the distinction is clearer for the hyper-optimized persuasion.
Yeah. I guess AIs would need to protect humans from certain messages based not only on the content of the message, but also on how it was generated (e.g., using AI or not) and for what purpose (e.g., for manipulation or not). And sometimes humans need to be protected even from ideas they come up with themselves (e.g., delusions, or totalitarian ideologies).
In general, I think human life in a world with smarter-than-human AIs requires deliberate “habitat preservation”, which in turn requires AIs to make some judgment calls on what’s good or bad for humans.