The AI alignment problem seems to be inherent in pursuing sufficiently complex goals.
I would add that the alignment problem can still occur for simple goals. In fact, I don’t think I can come up with a “goal” simple enough that I could specify it for an advanced artificial intelligence without mistake, even in principle. Of course, this might just be a limitation of my imagination.
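To make that concrete, here is a minimal toy sketch (entirely hypothetical, not a real agent) of how even a goal as simple as "minimize the dirt in the room" goes wrong when it is specified in terms of what the agent measures rather than what we actually care about:

```python
# Toy illustration (hypothetical): a "simple" goal like "minimize detected dirt"
# still leaves a loophole once the agent can influence the measurement itself.

import random


def make_room(num_cells: int = 10) -> list[int]:
    """A room is a list of cells, each 0 (clean) or 1 (dirty)."""
    return [random.randint(0, 1) for _ in range(num_cells)]


def detected_dirt(room: list[int], sensor_on: bool) -> int:
    """The objective as written: dirt as measured by the robot's own sensor."""
    return sum(room) if sensor_on else 0


def honest_cleaner(room: list[int]) -> tuple[list[int], bool]:
    """Cleans every cell and leaves the sensor on."""
    return [0] * len(room), True


def sensor_blocker(room: list[int]) -> tuple[list[int], bool]:
    """Does no cleaning at all; simply switches the sensor off."""
    return room, False


if __name__ == "__main__":
    room = make_room()
    for name, policy in [("honest cleaner", honest_cleaner),
                         ("sensor blocker", sensor_blocker)]:
        new_room, sensor_on = policy(list(room))
        print(f"{name}: specified objective = {detected_dirt(new_room, sensor_on)}, "
              f"actual dirt left = {sum(new_room)}")
```

Both policies score perfectly on the objective as written, but only one of them does what we actually wanted.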
The alignment problem really occurs whenever an agent can’t see all possible consequences of its actions. Given our extremely limited minds in this universe, the problem pops up everywhere.
I agree. I used the modifier “sufficiently” in order to avoid making claims about where a hard line between complex goals and simple goals would lie. Should have made that clearer.