Pretty sure. Not completely, but it does seem pretty fundamental. You cannot hard code the operation of the universe into an AI, and that means it has to be able to look at symbols going into the universe and symbols coming out, and say ‘okay, what sort of underlying system would produce this behavior’? You can apply the same sort of thing to humans. If it can’t model humans effectively, we can probably kill it.