Knight—thanks again for the constructive engagement.
I take your point that if a group is a tiny and obscure minority, and it calls the majority view 'evil' and tries to stigmatize the majority's behavior, that can backfire.
However, the surveys and polls I've seen indicate that the majority of humans already have serious concerns about AI risks, and in some sense are already on board with 'AI Notkilleveryoneism'. Many people are under-informed or misinformed about AI in various ways, but convincing the majority of humanity that the AI industry is acting recklessly already seems close to feasible, if not already accomplished.
I think the real problem here is raising public awareness about how many people are already on team ‘AI Notkilleveryoneism’ rather than team ‘AI accelerationist’. This is a ‘common knowledge’ problem from game theory—the majority needs to know that they’re in the majority, in order to do successful moral stigmatization of the minority (in this case, the AI developers).
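To make the 'common knowledge' point concrete, here's a minimal toy model (the threshold rule and the perceived-support numbers are my own illustrative assumptions, not from any survey): supporters only speak up when they believe enough others agree, so the same 55% majority can be either completely silent or fully visible depending on what it believes about itself.

```python
# Toy model of the common-knowledge problem: a majority that doesn't
# know it's a majority fails to coordinate. All numbers are illustrative.
import random

random.seed(0)

N = 1000
PRIVATE_SUPPORT = 0.55   # share who privately agree (roughly the poll figure quoted below)
SPEAK_THRESHOLD = 0.5    # a person speaks up only if they believe >= 50% of others agree

population = [random.random() < PRIVATE_SUPPORT for _ in range(N)]

def fraction_speaking(perceived_support: float) -> float:
    """Supporters voice their view only when they believe enough others agree."""
    speakers = [
        agrees and perceived_support >= SPEAK_THRESHOLD
        for agrees in population
    ]
    return sum(speakers) / N

# Without common knowledge: everyone underestimates support (say, at 20%).
print(fraction_speaking(perceived_support=0.20))  # -> 0.0 (silent majority)

# With common knowledge: perceptions match the true ~55% support.
print(fraction_speaking(perceived_support=0.55))  # -> ~0.55 (majority becomes visible)
```

The model is crude, but it captures why publicizing the poll numbers themselves can matter as much as persuading anyone new.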
Haha, you're right. In another comment I was saying:
55% of Americans surveyed agree that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Only 12% disagree.
To be honest, I’m extremely confused. Somehow, AI Notkilleveryoneism… is both a tiny minority and a majority at the same time.
I think the real problem here is raising public awareness about how many people are already on team ‘AI Notkilleveryoneism’ rather than team ‘AI accelerationist’. This is a ‘common knowledge’ problem from game theory—the majority needs to know that they’re in the majority,
That makes sense; it seems to explain things. The median AI expert also puts a 5% to 10% chance on extinction, which is huge.
I'm still not in favour of stigmatizing AI developers, especially right now. Whether AI Notkilleveryoneism is a real minority or an imagined minority, if it gets into a moral duel with AI developers, it will lose status, making it harder to grow (whether by convincing new people to agree with it, or by convincing people who privately agree to come out of the closet).
People tend to follow "the experts" instead of their very uncertain intuitions about whether something is dangerous. With global warming, the experts were climatologists. With cigarette toxicity, the experts were doctors. But with AI risk, you were saying that:
Thousands of people signed the 2023 CAIS statement on AI risk, including almost every leading AI scientist, AI company CEO, AI researcher, AI safety expert, etc.
It sounds like the expertise people look to when deciding whether AI risk is serious or sci-fi comes from leading AI scientists, and even AI company CEOs. Very unfortunately, we may depend on our good relations with them… :(