Sharing our technical concerns about these abstract risks isn’t enough. We also have to morally stigmatize
I’m with you up until here; this isn’t just a technical debate, it’s a moral and social and political conflict with high stakes, and good and bad actions.
the specific groups of people imposing these risks on all of us.
To be really nitpicky, I technically agree with this as stated: we should stigmatize groups as such, e.g. “the AGI capabilities research community” is evil.
We need the moral courage to label other people evil when they’re doing evil things.
Oops, this is partially but importantly WRONG. From Braχot 10a:
With regard to the statement of Rabbi Yehuda, son of Rabbi Shimon ben Pazi, that David did not say Halleluya until he saw the downfall of the wicked, the Gemara relates: There were these hooligans in Rabbi Meir’s neighborhood who caused him a great deal of anguish. Rabbi Meir prayed for God to have mercy on them, that they should die. Rabbi Meir’s wife, Berurya, said to him: What is your thinking? On what basis do you pray for the death of these hooligans? Do you base yourself on the verse, as it is written: “Let sins cease from the land” (Psalms 104:35), which you interpret to mean that the world would be better if the wicked were destroyed? But is it written: let sinners cease? Let sins cease, is written. One should pray for an end to their transgressions, not for the demise of the transgressors themselves.
Not everyone who is doing evil things is evil. Some people are evil. You should hate no more than necessary, but not less than that. You should hate evil, and hate evildoers if necessary, but not if not necessary.
Schmidhuber? Evil. Sutton? Evil. Larry Page? Evil. If, after reflection, you endorse omnicide, you’re evil. Altman? Evil and probably a sociopath.
Up-and-coming research star at an AI lab? Might be evil, might not be. Doing something evil? Yes. Is evil? Maybe, it depends.
Essentializing someone by calling them evil is an escalation of the conflict. You’re closing off lines of communication and gradual change. You’re polarizing things: it’s harder for that one person to make gradual moves in belief space and social space and life-narrative space, and it’s harder for groups to have group negotiations. Sometimes escalation is good and difficult and necessary, but sometimes escalation is really bad! Doing a more complicated, subtle thing with more complicated boundaries is more difficult. And braver, if we’re debating bravery here.
So:
Good: “The work you’re doing is evil.”
Good: “The goal of this company is among the evilest possible goals ever.”
Good: “If you ignore the world-class experts saying that your work might kill everyone, then you are being a disgusting creep and will be responsible for killing everyone.”
Bad: “You’re a creep / evil / bad person.”
Sidenote:
Because too many rationalists, EAs, tech enthusiasts, LessWrong people, etc still see those AI guys as ‘in our tribe’, based on sharing certain traits we hold dear
I agree that this is an improper motivation for treating some actions with kid gloves, that it will lead to incorrect action, and that it is some of what’s actually happening.
The apparent aim of OpenAI (making AGI, even though we don’t know how to do so without killing everyone) is evil.