Yes, it takes courage to call people out as evil, because you might be wrong, you might unjustly ruin their lives, you might have mistakenly turned them into a scapegoat, etc. Moral stigmatization carries these risks. Always has.
And people understand this. Which is why, if we’re not willing to call the AGI industry leaders and devs evil, then people will see us failing to have the courage of our convictions. They will rightly see that we’re not actually confident enough in our judgments about AI X-risk to take the bold step of pointing fingers and saying ‘WRONG!’.
So, we can hedge our social bets, and try to play nice with the AGI industry, and worry about making such mistakes. Or, we can save humanity.
To be clear, I think it would probably be reasonable for some external body like the UN to attempt to prosecute & imprison ~everyone working at big AI companies for their role in racing to build doomsday machines. (Most people in prison are not evil.) I’m a bit unsure if it makes sense to do things like this retroactively rather than to just outlaw it going forward, but I think it sometimes makes sense to prosecute atrocities after the fact even if there wasn’t a law against them at the time. For instance, my understanding is that the Nuremberg trials set precedents for prosecuting people for war crimes, crimes against humanity, and crimes against peace, even though legally these weren’t crimes at the time they were committed.
I just have genuine uncertainty about the character of many of the people in the big AI companies, and I don’t believe they’re all fundamentally rotten people! And I think language can easily get bent out of shape when the stakes are high, and I don’t want to lose my ability to speak and be understood. Consequently, I care about not falsely calling people’s character/nature evil when what I think is actually happening is that they are committing an atrocity, which is similar but distinct.