Your list of assumptions is definitely not complete. An important one not in the list is:
An AGI (not necessarily every AGI, but some AGIs soon after there are any AGIs) will have the power to make very large changes to the world, including some that would be disastrous for us (e.g., taking us apart as raw material for something “better”, rewriting our brains to put us in a mental state the AGI prefers, redesigning the world’s economy in a way that makes us all starve to death, etc., etc., etc.)
I suppose you could fold this into “we will not be able to effectively stop an unaligned AGI”, but I think there’s an important difference between “… because it may not be listening to us” or “… because it may not care what we want” on the one hand, and “… because it is stronger than us and we won’t be able to turn it off or destroy it” on the other. (It’s the combination of those things that would lead to disaster.)
For the avoidance of doubt, I think this assumption is reasonable, and it seems like there are a number of quite different ways by which something with brainpower comparable to ours but much faster or much smarter might gain enough power that it could do terrible things and we couldn’t stop it by force. But it is an assumption, and even if it’s a correct assumption the details might matter. (Imagine World A, where the biggest near-term threat is an AGI that overwhelms us by being superhumanly persuasive and getting everyone to trust it, versus World B, where the biggest near-term threat is an AGI that overwhelms us by figuring out currently-unknown laws of physics that give it powers we would consider magical. In World A we might want to work on raising awareness of that danger and on designing modes of interaction with AGIs that reduce the risk of being persuaded of things it would be better for us not to be persuaded of. In World B that would all be wasted effort; we’d probably again want to do some awareness-raising, and might need to work on containment protocols that limit an AGI’s effects on the physical world to ones we have very precisely defined.)