I’ve always disliked the practice of calling people “evil” simply because they assess risk differently. There’s a big difference between “I want to kill everyone, so I’m creating technology X” and “I disagree with the risk assessment of this technology and want to make the world a better place, so I’m creating it.”
I think the LessWrong community has become too radicalized: it considers itself so epistemically right that it calls people evil who simply disagree with it about p(doom).
UPD: Another problem with this radicalization is that it repels many AI researchers. They do not consider themselves evil, and they are far less inclined to reconsider their views on risk when the people arguing with them call them evil and would probably not object to killing them for disagreeing.
Because you might be wrong, or people might genuinely not understand that you’re right? Many people believe their lives are endangered by all sorts of things: COVID vaccines, GMOs, climate change, etc. They also sincerely believe they are right, and they may consider scientists evil for endangering their lives and the lives of their families. But if they start equating anyone who disagrees with them with an evil that deliberately wants to destroy them, it radicalizes them and promotes enmity. Is it a rational approach for every person who is confident that technology N can kill them to openly hate the scientists who develop it?