Another potential complication is hard to pin down philosophically, but it could be described as “the AIs will have something analogous to free will”. Specifically, they will likely have some process for learning from experience and for resolving conflicts among the incompatible values and goals they already hold.
If so, it’s entirely possible that an AI’s goals will shift over time, in response to new information or even just through “contemplation” and strategizing. (AIs that can’t adjust to changing circumstances and draw their own conclusions are unlikely to compete with AIs that can.)
But if the AI’s values and goals can be updated, then ensuring even rough alignment gets harder still: any alignment we verify is a snapshot, and nothing guarantees the goals will still resemble that snapshot after many rounds of revision.
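To make that last point concrete, here is a deliberately toy sketch in Python (every name and the update rule itself are invented for illustration, not drawn from any real system): an agent whose goal weights are nudged slightly at each step of “experience”, then renormalized. No single step changes much, but the drift compounds.

```python
import random

random.seed(0)

# Toy illustration only: these goal names and this update rule are
# invented for the example, not taken from any real system.
goals = {"help_users": 0.6, "acquire_resources": 0.2, "self_preserve": 0.2}

def update_goals(goals, lr=0.05):
    """Nudge each goal weight slightly, then renormalize to sum to 1.

    Stands in for "learning from experience" or "contemplation":
    each step is a tiny change, but the changes compound.
    """
    for k in goals:
        goals[k] = max(0.0, goals[k] + random.uniform(-lr, lr))
    total = sum(goals.values())
    for k in goals:
        goals[k] /= total
    return goals

for _ in range(1000):
    goals = update_goals(goals)

# After many small updates, the weights can be far from where they
# started; "aligned at deployment" is a statement about step 0 only.
print(goals)
```

The point isn’t the specific dynamics; it’s that a property checked at step 0 (the initial weights) tells you little about the weights after many small, individually innocuous updates.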