Well, perhaps because the other options are even worse. For example, suppose the world has gotten itself into a very sticky situation (e.g. a crazy arms race towards superintelligence that’s on the brink of escalating into WW3) and you think your best bet is to put AIs in charge of a bunch of things (AI research, diplomatic and military strategy, …) and hope they handle them better than you would. After all, they are more capable than you.
Another reason is that, in the long term, you can’t trust humans to make responsible decisions about weapons of mass destruction. As the level of firepower under our control keeps increasing, we become both less able to appreciate the magnitude of the damage it would cause and less able to recover from its use. It already seems likely that if we ran the tape of human civilization forward another few hundred years (modulo superintelligence), the cumulative risk of annihilation from an engineered pandemic, nuclear war, or some other superweapon would catch up with us.