Yeah. I guess AIs would need to protect humans from certain messages not only based on the content of the message, but also how it was generated (e.g. using AI or not) and for what purpose (e.g. for manipulation or not). And sometimes humans need to be protected even from ideas that they themselves come up with (e.g. delusions, or totalitarian ideologies).
In general, I think human life in a world with smarter-than-human AIs requires deliberate “habitat preservation”, which in turn requires AIs to make some judgment calls on what’s good or bad for humans.