In that case I’d repeat GeneSmith’s point from another comment: “I think we have a huge advantage with humans simply because there isn’t the same potential for runaway self-improvement.” If we have a whole bunch of super smart humans of roughly the same level who are aware of the problem, I don’t expect the ruthless ones to get a big advantage.
I mean, I guess there is a more general concern here about how the offense-defense balance changes as the population gets smarter. Like, if there’s some easy way to destroy the world that becomes accessible at IQ > X, and we make a bunch of people with IQ > X, and a small fraction of them want to destroy the world for some reason, are the rest able to prevent it? This is roughly the situation we’re already in with AI: we look to be above the threshold of “ability to summon ASI”, but not above the threshold of “ability to steer the outcome”. In the case of AI, I expect making people smarter differentially speeds up alignment over capabilities: alignment is hard and we don’t know how to do it, while hill-climbing on capabilities is relatively easy and we already know how to do it.
I should also note that we have the option of concentrating early adoption among nice, sane, x-risk-aware people (though I find this kind of cringe and predict it would be an unpopular move). I expect this to happen by default to some extent anyway.