And if we don’t think all AIs’ goals will be locked, then we might get better predictions by assuming the proliferation of all sorts of diverse AGIs and asking, Which ones are most likely to ultimately survive?, rather than assuming that human design/intention will win out and asking, Which AGIs are we most likely to design? I do think the latter question is important, but only up until the point when AGIs become recursively self-modifying.