As far as I can tell, what is necessary to create a working AGI overlaps hugely with making it not want to take over the world. Many of the big open problems amount to constraining an AGI to, unlike e.g. AIXI, use resources efficiently and dismiss certain hypotheses so as not to fall prey to Pascal’s mugging. Getting this right means succeeding at making the AGI work as expected along a number of dimensions.
And this may well be true. It could be, in the end, that Friendliness is not quite such a problem because we find a way to make “robot” AGIs that perform highly specific functions without going “out of context”, that basically voluntarily stay in their box, and that these are vastly safer and more economical to use than a MIRI-grade Mighty AI God.
At the moment, however, we don’t know.