Hi Knight, thanks for the thoughtful reply.
I’m curious whether you read the longer piece about moral stigmatization that I linked to at EA Forum? It’s here, and it addresses several of your points.
I have a much more positive view about the effectiveness of moral stigmatization, which I think has been at the heart of almost every successful moral progress movement in history. The anti-slavery movement stigmatized slavery. The anti-vivisection movement stigmatized torturing animals for ‘experiments’. The women’s rights movement stigmatized misogyny. The gay rights movement stigmatized homophobia.
After the world wars, biological and chemical weapons were not just regulated, but morally stigmatized. The anti-landmine campaign stigmatized landmines.
Even in the case of nuclear weapons, the anti-nukes peace movement stigmatized the use and spread of nukes, and was important in nuclear non-proliferation, and IMHO played a role in the heroic individual decisions by Arkhipov and others not to use nukes when they could have.
Regulation and treaties aimed at reducing the development, spread, and use of Bad Thing X, without moral stigmatization of Bad Thing X, don’t usually work very well. Formal law and informal social norms must typically reinforce each other.
I see no prospect for effective, strongly enforced regulation of ASI development without moral stigmatization of ASI development. This is because, ultimately, ‘regulation’ relies on the coercive power of the state—which relies on agents of the state (e.g. police, military, SWAT teams, special ops teams) being willing to enforce regulations even against people with very strong incentives not to comply. And these agents of the state simply won’t be willing to use government force against ASI devs violating regulations unless these agents already believe that the regulations are righteous and morally compelling.
That’s a very good point, and these examples really change my intuition from “I can’t see this being a good idea” to “this might make sense, this might not, it’s complicated.” And my earlier disagreement mostly came from my intuition.
I still have disagreements, but just to clarify, I now agree your idea deserves more attention than it’s getting.
My remaining disagreement is that I think stigmatization only reaches the extreme level of “these people are literally evil and vile” after the majority of people already agree.
In places in India where the majority of people are already vegetarians, and already feel that eating meat is wrong, the social punishment of meat eaters does seem to deter them.
But in places where most people don’t think eating meat is wrong, prematurely calling meat eaters evil may backfire. This is because it creates a “moral-duel” that forces outside observers to decide either that the meat-eater is the bad guy or that you’re the bad guy (or the stupid one). This “moral-duel” drains the moral standing of both sides.
If you’re near the endgame, and 90% of people already are vegetarians, then this moral-duel will first deplete the meat-eater’s moral standing, and may solidify vegetarianism.
But if you’re at the beginning, when only 1% of people support your movement, you desperately want to invest your support and credibility into further growing your support and credibility, rather than burning it in a moral-duel against the meat-eating majority the way militant vegans did.
Nurturing credibility is especially important for AI Notkilleveryoneism, where the main obstacle is a lack of credibility and “this sounds like science fiction.”
Finally, if you do go after anyone, at least restrict it to the AI lab CEOs, as they have less moral standing than the rank-and-file researchers.
E.g. in this quicktake Mikhail Samin appealed to researchers as friends asking them to stop “deferring” to their CEO.
Even for nuclear weapons, biological weapons, chemical weapons, and landmines, it was hard to punish the scientists researching them. Even for the death penalty, it was hard to punish the firing-squad soldiers. It’s easier to stick it to the leaders. In her influential book, early feminist Lady Constance Lytton repeatedly described the policemen (who fought the movement) and even the prison guards as very good people, and focused the blame on the leaders.
PS: I read your post; it was a fascinating read. I agree with its direction, and I agree the factors you mention are significant, but it might not go quite as far as you describe?
Knight—thanks again for the constructive engagement.
I take your point that if a group is a tiny and obscure minority, and they’re calling the majority view ‘evil’, and trying to stigmatize their behavior, that can backfire.
However, the surveys and polls I’ve seen indicate that the majority of humans already have serious concerns about AI risks, and in some sense are already onboard with ‘AI Notkilleveryoneism’. Many people are under-informed or misinformed in various ways about AI, but convincing the majority of humanity that the AI industry is acting recklessly seems like it’s already pretty close to feasible—if not already accomplished.
I think the real problem here is raising public awareness about how many people are already on team ‘AI Notkilleveryoneism’ rather than team ‘AI accelerationist’. This is a ‘common knowledge’ problem from game theory—the majority needs to know that they’re in the majority, in order to do successful moral stigmatization of the minority (in this case, the AI developers).
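To make this ‘common knowledge’ point concrete, here is a rough sketch in Python (with entirely made-up numbers, purely for illustration, not from any survey): if most people privately agree but each underestimates how many others agree, almost nobody speaks out; once the true share is publicized and becomes common knowledge, the same majority is willing to act.

```python
import random

random.seed(0)
N = 1000  # hypothetical population size
# ~70% privately agree that the behavior should be stigmatized (made-up figure)
privately_agrees = [random.random() < 0.70 for _ in range(N)]

def willing_to_speak_out(perceived_share_agreeing: float) -> bool:
    """An agent joins public stigmatization only if they think a majority agrees."""
    return perceived_share_agreeing > 0.50

# Pluralistic ignorance: each agent assumes only ~20% of others agree,
# so even agents who privately agree stay silent.
before = sum(agrees and willing_to_speak_out(0.20) for agrees in privately_agrees)

# Common knowledge: a widely publicized poll reveals the true share of agreement.
true_share = sum(privately_agrees) / N
after = sum(agrees and willing_to_speak_out(true_share) for agrees in privately_agrees)

print(f"true share agreeing: {true_share:.0%}")
print(f"speaking out before common knowledge: {before}")
print(f"speaking out after common knowledge:  {after}")
```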
Haha, you’re right. In another comment I was saying:
55% of Americans surveyed agree that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Only 12% disagree.
To be honest, I’m extremely confused. Somehow, AI Notkilleveryoneism… is both a tiny minority and a majority at the same time.
I think the real problem here is raising public awareness about how many people are already on team ‘AI Notkilleveryoneism’ rather than team ‘AI accelerationist’. This is a ‘common knowledge’ problem from game theory—the majority needs to know that they’re in the majority,
That makes sense; it seems to explain things. The median AI expert also estimates a 5% to 10% chance of extinction, which is huge.
I’m still not in favour of stigmatizing AI developers, especially right now. Whether AI Notkilleveryoneism is a real minority or an imagined minority, if it gets into a moral-duel with AI developers, it will lose status, and it will be harder for it to grow (by convincing people to agree with it, or by convincing people who privately agree to come out of the closet).
People tend to follow “the experts” instead of their very uncertain intuitions about whether something is dangerous. With global warming, the experts were climatologists. With cigarette toxicity, the experts were doctors. But with AI risk, you were saying:
Thousands of people signed the 2023 CAIS statement on AI risk, including almost every leading AI scientist, AI company CEO, AI researcher, AI safety expert, etc.
It sounds like the expertise people look to when deciding whether AI risk is serious or sci-fi comes from leading AI scientists, and even AI company CEOs. Very unfortunately, we may depend on our good relations with them… :(