Are people intrinsically drawn to having the eye fooled by abstracted drawings?
If you want to apply this to alignment, the next question would be: is there something in human nature that causes this, and would an AGI likely be drawn to similar effects?
If you also gather knowledge about what's intrinsically motivating to AGIs, that would be valuable for alignment research, because alignment is about creating motivations for AGIs to do things.
Did these sinister implications go over the heads of audiences, or were they key to the masses’ enjoyment of the songs?
You can reframe that question as, “Is this aspect of the Beatles songs aligned with the desires of the audience?”
Both of your examples are about what people, or agents in general, value. AI alignment is about how to align what humans and AGIs value. Understanding something about the nature of value seems applicable to AI alignment.