Defense technologies should be more of the “armor the sheep” flavor, less of the “hunt down all the wolves” flavor. Discussions about the vulnerable world hypothesis often assume that the only solution is a hegemon maintaining universal surveillance to prevent any potential threats from emerging. But in a non-hegemonic world, this is not a workable approach (see also: security dilemma), and indeed top-down mechanisms of defense could easily be subverted by a powerful AI and turned into its offense. Hence, a larger share of the defense instead needs to happen by doing the hard work to make the world less vulnerable.
This might be the only item on this list that I disagree with.
I agree that given a choice between armoring the sheep and hunting down the wolves, we should prefer armoring the sheep. But sometimes we simply don’t have a choice. E.g., our solution to murder is to hunt down murderers, not to give everyone body armor so that they can’t be killed, because that simply wouldn’t be feasible. (It would indeed be a better world if we didn’t need police because everything was so well defended that violent crimes simply weren’t possible.)
I think we should take these things on a case-by-case basis.
Furthermore, I think that superintelligence is an example of the sort of thing where the best strategy is to ensure that the most powerful AIs, at any given time, are aligned/virtuous/etc. It’s maybe OK if less-powerful AIs are misaligned, but it’s very much not OK if the world’s most powerful AIs are misaligned.