I don’t agree with this. In my mind there’s a pretty clear line between good and evil in AI-related matters, it goes something like this:
If you don’t want anyone to have AI, you’re probably on the side of good.
If you want everyone equally to have AI, you may also be on the side of good, though there’s a factual question of how well this would work out.
But if you think that you and your band of good guys should have AI, but they and their band of bad guys shouldn’t—or at least, your band should get world domination first, because you’re good—then in my mind this crosses the line. It’s where bad things happen. And I don’t really make an exception if the “good guys” are MIRI, or OpenAI, or the US, or whichever group.
Arms races are bad things. The first-best outcome by far is that nobody has the doomsday devices; the second-best is that we attempt nonproliferation of doomsday devices.
As a parallel, we would still have been at risk in a world where DeepMind was working on building ASI but Elon didn’t freak out and start a competitor (followed by another competitor)—just not as much risk. That’s not because DeepMind are “the good guys”; it’s because of race dynamics.
If only one entity is building AI, that reduces the risk from race dynamics, but increases the risk that the entity becomes world dictator. I think the former reduction in risk is smaller than the latter increase. So to me the first-best outcome is that nobody has AI, the second-best is that everyone has it, and the worst is that one group monopolizes it.